Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense 25495,2,,20656,1/1/2021 5:57,,4,,"

Generally, if one googles "quantum machine learning" or anything similar the general gist of the results is that quantum computing will greatly speed up the learning process of our "classical" machine learning algorithms.

This is correct. A lot of machine learning methods involve linear algebra, and it often takes far fewer quantum operations to perform a linear algebra operation than the number of classical operations that would be needed. To be more specific, for a matrix of size $N\times N$, if a classical computer needs $f(N)$ operations to do some linear algebra operation, such as diagonalization which can take $f(N)=\mathcal{O}(N^3)$ operations on a classical computer, a quantum computer would often need only $\log_2 f(N)$ operations, which in (and only in) the language of computational complexity theory means exponential speed-up. The "only in" part is there because we have assumed that "fewer quantum operations" means "speed-up", which for now is something we only know to be true in the world of computational complexity theory.

However, "speed-up" itself does not seem very appealing to me as the current leaps made in AI/ML are generally due to novel architectures or methods, not faster training.

I disagree. Take vanilla deep learning for example (without GANs or any of the other things that came up in the last decade). Hinton and Bengio had been working on deep learning for decades, so why did interest in deep learning suddenly start growing so much from 2011-2014 after a roughly monotonic curve from 1988-2010? Note that this rise started before newer advances such as GANs and DenseNet were developed:



Notice also the similarity between the above graph and these ones:

These days pretty much everyone doing deep learning uses GPUs if they have access to them, and what is possible to accomplish is extremely tied to the computing power a group has. I don't want to understate the importance of new methods and new algorithms, but GPUs did play a big role in at least some areas of machine learning, such as deep learning.

Are there any quantum machine learning methods in development that are fundamentally different from "classical" methods?

I think you mean: "Most quantum machine learning algorithms are simply based on classical machine learning algorithms, but with some sub-routine sped up by the QPU instead of a GPU -- are there any quantum algorithms that are not based on classical machine learning algorithms, and are entirely different?"

The answer is yes, and more experts might be able to tell you more here.

One thing you might consider looking at is Quantum Boltzmann Machines.

Another thing I'll mention is that Ewin Tang, a child prodigy who began university at age 14, discovered at around the age of 17 some classical algorithms that were inspired by quantum algorithms rather than the other way around. The comments on the Stack Exchange question Quantum machine learning after Ewin Tang might give you more insight on that. This is related to something called dequantization of quantum algorithms.

By this I mean that these methods are (almost*) impossible to perform on "classical" computers. *except for simulation of the quantum computer of course

Unfortunately quantum computers can't do anything that's impossible for classical computers to do, apart from the fact that they might be able to do some things faster. Classical computation is Turing complete, meaning anything that can be computed can be computed on a big enough classical computer.

",19524,,,,,1/1/2021 5:57,,,,4,,,,CC BY-SA 4.0 25496,2,,25437,1/1/2021 8:52,,1,,"

"Input to output is linear" refers to the mapping from the input X (i.e. the image) to the output logits/softmax of the network.

So how does linearity help in constructing adversarial examples? Imagine a simple logistic regressor in a simple 2D space. There is a definite boundary beyond which the label that the model (i.e. the logistic regressor in this case) predicts changes. So if we move perpendicular to the boundary (i.e. the line represented by the model in this case), we can get into another class's space. So if we perturb the input in this direction, the model outputs the wrong class. { Refer to the slide titled "Adversarial Examples from Excessive Linearity" for a diagram }
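To make that geometric picture concrete, here is a minimal sketch (NumPy; the weights, bias, and epsilon are made-up values, not taken from the slides) showing that nudging a point along the sign of the weight vector, i.e. perpendicular to the decision boundary of a logistic regressor, flips the predicted class:

import numpy as np

w, b = np.array([2.0, -3.0]), 0.5            # a hand-picked logistic regressor
x = np.array([0.2, 0.3])                     # a point sitting near the boundary

def predict(x):
    return 1 / (1 + np.exp(-(w @ x + b))) > 0.5

eps = 0.4
x_adv = x + eps * np.sign(w)                 # step along the boundary normal
print(predict(x), predict(x_adv))            # False True -> the class flips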

Now imagine a neural network trained on ImageNet: there are so many boundaries, and a small change can shift the class the model would predict. It is also important to note that these subspaces of the image remain nearly the same whether we train VGG or ResNet, etc. This explains how an adversarial example crafted on one network affects another.

You may ask how such a small change can have an effect. This is because the vectors we deal with are not 2D or 3D; they are very high-dimensional, and small changes add up.

",35616,,,,,1/1/2021 8:52,,,,5,,,,CC BY-SA 4.0 25497,2,,25491,1/1/2021 9:44,,4,,"

I guess the issue is that you lost track of where the samples came from. Since you requested a math explanation, I'll try to go step by step using my notation, without checking other material, to avoid being biased by how other authors present it.

So we start from

$$ L(D,G) = E_{x \sim p_{r}(x)} \log(D(x)) + E_{x \sim p_{g}(x)}\log(1 - D(x)) $$

then you apply the definition of the $E_{\cdot}(\cdot)$ operator in the continuous case

$$ L(D,G) = \int_{x} \log(D(x)) p_{r}(x)dx + \int_{x}\log(1 - D(x))p_{g}(x)dx $$

then you Monte Carlo sample it to approximate it

$$ L(D,G) = \frac{1}{n} \sum_{i=1}^{n} \log(D(x_{i}^{(r)})) + \frac{1}{m} \sum_{j=1}^{m}\log(1 - D(x_{j}^{(g)})) $$

As you can see, here I have kept the samples from the 2 distributions separated and used a notation that allows tracking their origin, so now you can use the right label in the cross-entropy

$$ L(D,G) = \frac{1}{n} \sum_{i=1}^{n} L_{ce}(1, D(x_{i}^{(r)})) + \frac{1}{m} \sum_{j=1}^{m} L_{ce}(0, D(x_{j}^{(g)})) $$

But you could also have decided to merge the 2 integrals first, to get

$$ L(D,G) = \int_{x} \left( \log(D(x)) p_{r}(x) + \log(1 - D(x))p_{g}(x) \right) dx $$

which is a mathematically legitimate operation; however, the issue arises when you try to discretize it with Monte Carlo sampling.

You can't just replace the integral with one sum, since you are Monte Carlo sampling. Here, contrary to what we have done above, you do not have 1 distribution per integral to sample: in the same integral you have 2 distributions, and for each sample you have to say which distribution it comes from. This is where the issue is in your notation: you lost track of this information, and it looks as if all the samples come from one distribution.
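As a small illustration of this bookkeeping, here is a sketch (NumPy, with a stand-in discriminator; the two normal distributions are just example choices for $p_r$ and $p_g$) of the discretized loss where the real and generated samples keep their own labels:

import numpy as np

def D(x):                                   # stand-in discriminator with outputs in (0, 1)
    return 1 / (1 + np.exp(-x))

x_real = np.random.normal(2.0, 1.0, 100)   # Monte Carlo samples from p_r
x_fake = np.random.normal(-2.0, 1.0, 100)  # Monte Carlo samples from p_g

# each term uses only the samples drawn from its own distribution
loss = np.mean(np.log(D(x_real))) + np.mean(np.log(1 - D(x_fake)))
print(loss)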

",1963,,41615,,1/3/2021 23:23,1/3/2021 23:23,,,,3,,,,CC BY-SA 4.0 25498,1,25515,,1/1/2021 10:38,,1,472,"

I am new to NLP and AI in general. I am just expecting springboard information so that I can skip all the introductory NLP websites. I have just started studying NLP and want to know how to go about solving this problem. I am creating a chatbot that will take voice input from customers ordering food at restaurants. The customer input I am expecting is like:

I want to order Chicken Biryani

Can I have a Veg Pizza, please

Coca-cola etc

I want to write an algorithm that can separate the name of the food item from the user input and compare it with the list of food items in my menu card and come up with the right item.

I am new to NLP and studying it online for this particular project. I can do the required coding; I just need help with the overall algorithm or a rough flow chart. It will save my time tremendously. Thanks.

",43434,,32410,,4/24/2021 3:34,4/24/2021 3:34,How to design a NLP algorithm to find a food item in menu card list?,,1,0,,,,CC BY-SA 4.0 25499,1,25504,,1/1/2021 13:13,,2,83,"

I was reading the book Deep Learning by Ian Goodfellow. I had a doubt in the Maximum likelihood estimation section (Pg 131). I understand till the Eq 5.58 which describes what is being maximized in the problem.

$$ \theta_{\text{ML}} = \text{argmax}_{\theta} \sum_{i=1}^{m} \log(p_{\text{model}}(x^{(i)};\theta)) $$

However the next equation 5.59 restates this equation as:

$$ \theta_{\text{ML}} = \text{argmax}_{\theta} E_{x \sim \hat{p}_{\text{data}}}[\log(p_{\text{model}}(x;\theta))] $$

where $\hat{p}_{\text{data}}$ is described as the empirical distribution defined by the training data. Could someone explain what is meant by this empirical distribution? It seems to be different from the distribution parametrized by $\theta$, which is described by $p_{\text{model}}$.

",43272,,16521,,1/1/2021 19:12,1/2/2021 23:17,What is emperical distribution in MLE?,,1,0,0,,,CC BY-SA 4.0 25500,1,25501,,1/1/2021 15:00,,0,43,"

This is the back-propagation rule for the output layer of a multi-layer network:

$$W_{jk} := W_{jk} - C \dfrac{\partial E}{\partial W_{jk}}$$

What does this rule do in the more ambiguous cases such as:

(1) The output of a hidden node is near the middle of a sigmoid curve?

(2) The graph of error with respect to weight is near a maximum or minimum?

",42926,,16521,,1/1/2021 15:52,1/1/2021 16:08,Backpropogation rule for the output layer of a multi-layer network - What does the rule do in ambiguous cases?,,1,1,,,,CC BY-SA 4.0 25501,2,,25500,1/1/2021 16:02,,1,,"

I assume you are considering a network where the activation function of the last layer is a sigmoid, so the output of your network is $$\tilde{y}=\sigma(W^{L}\cdot f(X, W^1, \dots, W^{L-1})),$$ where $X$ is the input vector, and $f$ is obtained by feeding the input to the network up to the layer $L-1$. Let's also call $Z:= W^{L}\cdot f(X, W^1, \dots, W^{L-1})$.

The error term is computed as $$E(y, \tilde{y})=E(y, \sigma(Z)),$$ where $y$ is the actual output. Let's get the derivative of the error with respect to the output of the last node (the input of the sigmoid) $$\frac{\partial E}{\partial z_i}=\frac{\partial E}{\partial \tilde{y}}\frac{\partial\tilde{y}}{\partial z_i}=\frac{\partial E}{\partial \tilde{y}}\frac{\partial\sigma}{\partial z_i}.$$ The update rule is $$z_i = z_i - C\frac{\partial E}{\partial z_i}= z_i - C\frac{\partial E}{\partial \tilde{y}}\frac{\partial\sigma}{\partial z_i}.$$ Now we can analyse your questions.

  1. To be close to the middle of the sigmoid means that $z_i$ is close to $0$; moreover the derivative $\frac{\partial\sigma}{\partial z_i}$ reaches its maximum value when it is evaluated in $0$. This means that the term $\frac{\partial\sigma}{\partial z_i}$ gets "large" as $z_i$ approaches the center of the sigmoid, contributing more to the update of the weight. Of course it is difficult to say what happens in general, as the term $\frac{\partial E}{\partial \tilde{y}}$ is also in the expression and it is possible that this term gets really small (or big) for $z_i$ close to $0$. You just know that the term $\frac{\partial\sigma}{\partial z_i}$ is trying to push $z_i$ further away from $0$, so the idea should be that the closer to the center the larger the update.
  2. To be close to an extremum point of the loss means that the derivative of the loss with respect to $\tilde{y}$ is close to $0$. Since $\frac{\partial E}{\partial \tilde{y}}$ is backpropagated in a multiplicative fashion, the rule of thumb is that the closer you are to an extremum the smaller the updates get. Though, as above, it is kind of difficult to say what will happen when updating a general node, as some of the terms in the multiplication can be very large, making the update large even if $\frac{\partial E}{\partial \tilde{y}}$ is small.

For instance, what happens if you are close to the center of the sigmoid but also close to an extremum of the loss? You will have a multiplication of $2$ terms, one trying to make the update small and the other trying to make the update large, and what matters is the orders of magnitude involved.
In conclusion, the rules of thumb are as in points 1 and 2, but they are no guarantee that you won't find any special cases.
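As a quick numeric check of point 1, here is a small sketch (NumPy) showing that the sigmoid derivative $\sigma'(z)=\sigma(z)(1-\sigma(z))$ peaks at $z=0$ and shrinks as $|z|$ grows:

import numpy as np

def dsigmoid(z):
    s = 1 / (1 + np.exp(-z))
    return s * (1 - s)

for z in [0.0, 1.0, 3.0, 6.0]:
    print(z, dsigmoid(z))    # 0.25, ~0.20, ~0.045, ~0.0025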

",42424,,42424,,1/1/2021 16:08,1/1/2021 16:08,,,,0,,,,CC BY-SA 4.0 25502,2,,22514,1/1/2021 16:47,,0,,"

There's nothing stopping you from training a model with whatever tags you want.

Using what you describe as the "usual" format means you would have approximately half as many tags as with the IOB format. In theory, this means your model will reach higher accuracy faster and with less training data. On the downside, you will need to do more work when interpreting the results in order to be confident where one named entity ends and another one begins.

I made a notebook to do some empirical tests on this which uses 17 tags in the IOB format and 9 in the "usual" format.

TL;DR using the "usual" format did not produce a noticeable improvement in model quality.

",43256,,,,,1/1/2021 16:47,,,,0,,,,CC BY-SA 4.0 25504,2,,25499,1/1/2021 17:43,,2,,"

The idea behind this kind of reasoning is that there is a "true" distribution (unknown to us, mere mortals) and that the data is generated following this distribution. But we don't really know the shape of that distribution; all we know is the distribution of the data that we have. This is called the empirical distribution. Let's see a simple example to illustrate the point.
Let's consider a die. Each number is equally likely to show up if we throw the die, so the true underlying distribution is uniform over the set $\{1, 2,\dots6\}$. Now let's say you ask your friend to throw the die 60 times, what you will see is likely something close to uniform over the set $\{1, 2,\dots6\}$ (not really uniform though, as that would be highly unlikely). This distribution is the empirical one, and as you collect more and more samples it will converge to the actual underlying distribution.
In your case what happened is the following:
$x_1,\dots, x_m$ is your sample (in the example above, the $60$ numbers that you see as your friend throws the die). This sample defines a distribution, $\hat{p}_{data}$. In the example above, $\hat{p}_{data}$ would likely be close to the uniform distribution over $\{1, \dots, 6\}$. Now you can think about the sum over $x_1\dots, x_m$ of $\log(p_{model}(x^{(i)};\theta))$ as the average of $\log(p_{model}(x;\theta))$, where $x$ is drawn according to $\hat{p}_{data}$.
Let me make another example with some actual numbers. Let's say you toss a fair coin $7$ times and you see $$\{H, H, H, T, H, T, H\}.$$ The empirical distribution is $\mathbb{P}(H)=5/7$, $\mathbb{P}(T)=2/7$. So if you compute the expectation $$\mathbb{E}_{x \sim \hat{p}_{data}}[\log(p_{model}(x;\theta))]$$ you get $$\log(p_{model}(H;\theta))\cdot\mathbb{P}(H) +\log(p_{model}(T;\theta))\cdot\mathbb{P}(T) = \\ \frac{5}{7}\log(p_{model}(H;\theta)) +\frac{2}{7}\log(p_{model}(T;\theta)),$$ which is what you will get if you compute the first sum you wrote.
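Here is a tiny numeric check of that equivalence (NumPy; I assume a Bernoulli model with an arbitrary parameter $\theta=0.6$, which is not from the book): averaging the log-likelihood over the sample gives the same number as the expectation under the empirical distribution.

import numpy as np

sample = ['H', 'H', 'H', 'T', 'H', 'T', 'H']
theta = 0.6                                            # p_model(H; theta)
log_p = {'H': np.log(theta), 'T': np.log(1 - theta)}

sum_form = np.mean([log_p[x] for x in sample])         # (1/m) * sum_i log p_model(x_i; theta)
emp_form = 5/7 * log_p['H'] + 2/7 * log_p['T']         # E_{x ~ p_hat_data}[log p_model(x; theta)]
print(np.isclose(sum_form, emp_form))                  # True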

",42424,,36821,,1/2/2021 23:17,1/2/2021 23:17,,,,0,,,,CC BY-SA 4.0 25505,1,25510,,1/1/2021 18:59,,2,258,"

When implementing a genetic algorithm, I understand the basic idea is to have an initial population of a certain size. Then, we pick two individuals from the population, construct two new individuals (using mutation and crossover), repeat this process X number of times, and then replace the old population with the new population, based on selecting the fittest.

In this method, the population size remains fixed. In reality in evolution, populations undergo fluctuations in population sizes (e.g. population bottlenecks, and new speciations).

I understand the disadvantages of variable population sizes from a biological view: for example, a bottleneck will reduce the population to minimal levels, so not much evolution will occur. Are there disadvantages to using variable population sizes in genetic algorithms, from a programming perspective? I was thinking the numbers per population could follow a distribution of some sort so they don't just fluctuate erratically, but maybe this does not make sense to do.

",42926,,42926,,1/1/2021 19:10,1/6/2021 10:44,Are there any disadvantages to using a variable population size in genetic algorithms?,,2,0,,,,CC BY-SA 4.0 25506,1,25508,,1/1/2021 19:19,,0,83,"

I understand that in each generation of a genetic algorithm, that generation must re-prove its fitness (and then the fittest of that population are taken for the next population).

In this case, I guess it's a presumption that if you take the fittest of each generation, and use them to form the basis of the next generation, then your population as a whole is getting fitter over time.

But algorithmically, how can I detect this? If there's no end goal known, then I can't measure the error/distance from goal? So how can you tell how much each generation is becoming fitter by?

",42926,,,,,1/6/2021 10:28,How to detect that the fitness landscape of a genetic algorithm is changing over time?,,2,1,,,,CC BY-SA 4.0 25507,1,,,1/1/2021 19:46,,0,51,"

I'm just starting to explore topics within computer vision and curious if there are any concepts in that area that could be applied to segmenting multivariate time series with the goal of grouping individual data points similar to how a human might do the same. I know that there are a number of time series segmentation methods, but in-depth explanations of multivariate methods are more scarce and it seems like somewhat of an underdeveloped topic overall. Since segmentation is such a fundamental part of CV and is inherently multidimensional, I'm wondering if concepts there can be modified to apply to time series.

Specifically, I'd like to be able segment a time series and reformulate a prediction problem as something closer to a language processing problem. The process would look something like this:

  1. Segment a multivariate time series into near-homogenous segments of variable length. Some degree of preprocessing might be required but I can worry about that separately.
  2. Encode the properties of each segment based on summary statistics (e.g., mean, variance, derivative values, etc.) such that the segments fall into discrete buckets.
  3. Each bucket will represent a "word" and the goal of the model will be to predict the next word given a series of words, i.e., the next segment given a series of segments.

In a few days of reading about CV, it seems like there's a ton to learn. If there are traditional time series segmentation techniques that are more suitable, that would be of interest, but I'd still be curious about a CV approach since that approach likely better aligns with how a person might look at a graph to identify segments.

",30154,,2444,,1/31/2021 17:09,1/31/2021 17:09,Time series analysis using computer vision principles,,0,2,,,,CC BY-SA 4.0 25508,2,,25506,1/1/2021 19:53,,1,,"

There is no exact way to assess that a genetic algorithm has located a global optimum. Indeed, there may be multiple global optima. You must fall back on heuristic methods. The fitness of a population is the maximum fitness of any individual. Unless specific measures are taken to maintain diversity, the population will converge to an optimum, local or global. At that point all individuals will, except for mutation, be identical. You could take the fittest individual of such a population as your solution, but you will not know whether the solution is a global or a local optimum.

Two reasonable heuristics are these. First, run the algorithm until it converges and maintains its fitness for a number of further generations. Second, run the algorithm multiple times and take the fittest of all the located solutions. Neither is exact.
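For the first heuristic, a minimal sketch (the evolve and best_fitness functions are hypothetical placeholders for your own GA code) could look like this:

def run_until_stalled(population, evolve, best_fitness, patience=50):
    # stop once the best fitness has not improved for `patience` generations
    best, stalled = best_fitness(population), 0
    while stalled < patience:
        population = evolve(population)
        fitness = best_fitness(population)
        stalled = 0 if fitness > best else stalled + 1
        best = max(best, fitness)
    return population, best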

",26382,,,,,1/1/2021 19:53,,,,1,,,,CC BY-SA 4.0 25510,2,,25505,1/1/2021 21:05,,1,,"

Population size is a tricky issue even in purely biological models. Biological population sizes obviously vary. The two great protagonists of the argument were Ronald Fisher and Sewall Wright, with Fisher favouring few large populations and Wright favouring many small interconnected populations. There is evidence that evolution occurs more rapidly in Wright's model, but the evidence is inconclusive. The theory concentrates on the probability that a mutation will occur and then become dominant in a population. In a small population a beneficial mutation is more likely to be selected for reproduction, but premature convergence is a serious danger, while in a larger population a mutant is less likely to be removed from the population during reproduction. I would strongly recommend a read of Games of Life by Karl Sigmund.

",26382,,,,,1/1/2021 21:05,,,,5,,,,CC BY-SA 4.0 25511,2,,25371,1/1/2021 21:24,,1,,"

It is entirely possible!

You see, the agents will perform whatever actions are available to them, and if the evolutionary algorithm is set up correctly, whatever set of actions provides them with a bigger survival rate will be the one that gets explored and reproduced the most.

Here is a very interesting list of "Specification Gaming" in AI, where the agents happened to "game" the rules to reach their goals (metric optimization) without actually doing what the creators intended: https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml (second link)

",190,,,,,1/1/2021 21:24,,,,0,,,,CC BY-SA 4.0 25512,1,25514,,1/1/2021 22:59,,1,126,"

Referring to this post, in the following formula to update the state-action value

$$ Q(s,a) = Q(s,a) + \alpha (G − Q(s,a)),$$

is the value of $G$ (the return) the same for every state-action $(s,a)$ pair?

I am a little confused about this point, so I will thank any clarification.

",33566,,2444,,1/2/2021 12:33,1/2/2021 12:34,"When updating the state-action value in the Monte Carlo method, is the return the same for each state-action pair?",,1,0,,,,CC BY-SA 4.0 25513,1,25523,,1/2/2021 0:15,,0,243,"

I'm working on code trying to generate new images using a DCGAN model. The structure of my code is from the PyTorch tutorial here. I'm a bit confused trying to find and understand how the latent vector is transformed into the feature maps by the Generator (this line of code is what I'm interested in):

nn.ConvTranspose2d(nz, ngf * 8, 4, 1, 0, bias=False)

It means the latent vector (nz) of shape 100x1 is transformed into 512 matrices of size 4x4 (ngf=64). How does this happen? Also, it isn't clear to me how the length of the latent vector influences the generated image. P.S. The left part of the Generator structure is clear.

The only idea that I got is :

  1. E.g. there is a latent vector of size 100 as input (100 random values).
  2. We interact each value of the input latent vector with a 4x4 kernel.
  3. In this way we get 100 different 4x4 matrices (each matrix is for one value from latent vector).
  4. Then we summarize all these 100 matrices and get one final matrix - one feature map.
  5. We get necessary number of feature maps taking different kernels.

Is this right? Or does it happen in other way?

",41783,,41783,,1/2/2021 16:03,1/2/2021 17:22,How is the latent vector transforming to a feature map in DCGAN (Generator structure)?,,1,0,,,,CC BY-SA 4.0 25514,2,,25512,1/2/2021 1:43,,2,,"

The discussion uses poor notation; there should be a time index. You obtain a list of tuples $(s_t, a_t, r_t, s_{t+1})$ and then, for every-visit MC, you update

$$Q(s_t, a_t) = Q(s_t, a_t) + \alpha (G_t - Q(s_t, a_t))\;;$$

where $G_t = \sum_{k=0}^\infty \gamma^k r_{t+k}$, for each $t$ in the episode. You can see that the return for each time step is calculated from that time step onwards, and so the returns are not necessarily the same across time steps.
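A tiny every-visit MC sketch (with a made-up three-step episode) illustrates that $G_t$ is accumulated from step $t$ onwards, so it differs across steps:

gamma, alpha = 0.9, 0.1
episode = [((0, 'a'), 1.0), ((1, 'b'), 0.0), ((0, 'a'), 2.0)]   # ((s_t, a_t), r_t)

Q, G = {}, 0.0
for (s, a), r in reversed(episode):          # accumulate returns backwards
    G = r + gamma * G                        # G_t = r_t + gamma * G_{t+1}
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (G - Q.get((s, a), 0.0))
print(Q)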

",36821,,2444,,1/2/2021 12:34,1/2/2021 12:34,,,,2,,,,CC BY-SA 4.0 25515,2,,25498,1/2/2021 1:45,,2,,"

Since you want a shortcut, use the spoonacular API. Below is a test with your words. You can see it had trouble with 'Coca' and 'veg'.

What you need is 'named-entity recognition' for food. This is not a new thing but clearly not a solved problem. The Foodie Favorites repository attempts to solve the problem from scratch.

If want to do some research and dig deeper see FoodBase corpus: a new resource of annotated food entities. From the abstract:

It consists of 12,844 food entity annotations describing 2105 unique food entities. Additionally, we provided a weakly annotated corpus on an additional 21,790 recipes. It consists of 274,053 food entity annotations, 13,079 of which are unique.

",5763,,,,,1/2/2021 1:45,,,,0,,,,CC BY-SA 4.0 25516,1,,,1/2/2021 2:31,,1,235,"

I'm not very knowledgeable in this field but I'm wondering if any research or information already exists for the following situation:

I have some data that may or may not look similar to each other. Each data point is represented as a node holding a vector of size 128.

And they are inserted into a tree graph according to similarity.

A new node is created with an edge connecting it to the most similar node found in the entire tree graph.

Except I'm wasting a lot of time searching through the entire graph to insert each new node when I could narrow down my search according to previous information. Imagine a trunk node saying "Oh, I saw a node like you before, it went down this branch. And don't bother going down that other branch." I could reduce the cost of searching the entire tree structure if there was a clever way to remember if a similar node went down a certain path.

I've thought about some ways to use caching or creating a look-up table, but these are very memory-intensive methods and will become slower the longer the program runs. I have some other ideas I am playing around with, but I was hoping someone could point me in the right direction before I start trying out weird ideas.

Edit: added a better (more realistic) graph picture

",43419,,43419,,1/6/2021 5:43,1/26/2023 11:07,A way to leverage machine learning to reduce DFS/BFS search time on a tree graph?,,1,2,,,,CC BY-SA 4.0 25517,2,,25483,1/2/2021 2:44,,5,,"

The main issue during training is that you haven't right-shifted the input of the decoder, which is probably why you set the diagonals of mask to -inf (when it should be $0$).

Also, just an FYI, although you haven't focused on evaluation/prediction yet, I will explain the evaluation/prediction here as well for completeness, since it works so differently than training, and also since you will need it when generating the graphs for debugging.

Training

Both tgt and tgt_mask should be changed to simulate auto-regressive properties.

You are feeding in tgt as the input to the decoder, where tgt is the ground truth target sequence to predict. tgt should have dimension length(sequence) x batchSize x $\|dict\|$. Additionally, you are feeding in mask where the diagonals are -inf.

Instead, you should do the following:

  • Add 2 new special tokens to your vocabulary that signals the start and end of decoding, i.e. <START> and <END>. So your vocabulary will need to be extended to 16 events.
  • Every input to the decoder (training, evaluation, prediction) should have <START> as the first token. So during training, when you want to feed in tgt to the decoder, you should right-shift tgt by adding in the <START> token at the beginning.
    • tgt_shifted will now be of dimension length(sequence)+1 x batchSize x $\| dict \|$
    • tgt_shifted_mask will now be of dimension length(sequence)+1 x length(sequence)+1. Diagonal should be all $0$
    • the inputs to the decoder should be tgt_shifted, tgt_shifted_mask, and memory
    • the output of the decoder will have dimension length(sequence)+1 x batchSize x $\| dict\|$, and will not be right-shifted because it will not have <START> as the first word. But it should have <END> as the last word.
    • Append <END> to the ground truth tgt, and turn it into one-hot encoding, so tgt should have dimension length(sequence)+1 x batchSize x $\|dict\|$. Your loss operator should be some sort of element-wise comparison between tgt and the output of the decoder.

Note that you only run the decoder once per training batch during training, i.e. the decoder simultaneously predicts the logits of all length(sequence) tokens at the same time. On the other hand, during evaluation/prediction, you must run the decoder length(sequence) times, since you can only use the decoder to predict one token at a time.
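As a rough sketch of the right-shifting and masking described above (PyTorch assumed; the shapes and the choice of index 14 for <START> are hypothetical, not from your code):

import torch

seq_len, batch, vocab = 32, 8, 16                      # vocab already includes <START>/<END>
tgt = torch.zeros(seq_len, batch, vocab)               # ground-truth multi-hot targets

start_row = torch.zeros(1, batch, vocab)
start_row[..., 14] = 1.0                               # index 14 assumed to be <START>
tgt_shifted = torch.cat([start_row, tgt], dim=0)       # (seq_len+1, batch, vocab)

L = tgt_shifted.size(0)
# causal mask: -inf strictly above the diagonal, 0 on and below it
tgt_shifted_mask = torch.triu(torch.full((L, L), float('-inf')), diagonal=1)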

Evaluation/Prediction

During evaluation/prediction, the model should generate its own sentence entirely from scratch, one word at a time:

  1. out should be initialized as a single token, <START>, and have dimensions $1$ x batchSize x $\|dict\|$.
  2. Do the following in a loop until termination or max output length is reached:
    1. Generate a out_mask for the current version of out. out_mask should be a matrix with dimension length(out) x length(out). Diagonal should be all $0$
    2. Pass out, out_mask, and memory into the decoder layer. The output of the decoder will be a length(out) x batchSize x $d_{model}$ tensor (note that in your case, $d_{model} = \|dict\|$), which indicates tokenIdx x batchIdx x vocabularyIdx
    3. Convert the last time step (e.g. decoderOutput[-1,:,:] if all sequences in your batch have the same length) into probabilities, and take all the events that surpass some probability threshold and set them to $1$. Set all other events to $0$. This is the new time step that the decoder layer predicts, so add this time step to out to extend its length by $1$.
    4. Repeat the previous 3 steps in a loop until you reach the <END> token, which signals the end of decoding, or until the max output length is reached.

During evaluation, you can calculate the validation loss by using the logits of only the final iteration of the decoder (which should have dimension length(sequence)+1 x batchSize x $\|dict\|$), and compare those logits with the ground truth with the <END> token added at the end (resulting in dimensions length(sequence)+1 x batchSize x $\|dict\|$).
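A condensed, self-contained sketch of this decoding loop (PyTorch assumed; the layer sizes and the <START>/<END> indices 14/15 are hypothetical stand-ins for your own model) might look like:

import torch
import torch.nn as nn

vocab, batch, max_len = 16, 8, 50
decoder = nn.TransformerDecoder(nn.TransformerDecoderLayer(d_model=vocab, nhead=4), num_layers=2)
memory = torch.randn(20, batch, vocab)                 # stand-in encoder output

out = torch.zeros(1, batch, vocab)
out[..., 14] = 1.0                                     # start with only the <START> token

for _ in range(max_len):
    L = out.size(0)
    out_mask = torch.triu(torch.full((L, L), float('-inf')), diagonal=1)
    logits = decoder(out, memory, tgt_mask=out_mask)   # (L, batch, vocab)
    probs = torch.sigmoid(logits[-1])                  # last time step only
    step = (probs > 0.5).float()                       # events above the threshold
    out = torch.cat([out, step.unsqueeze(0)], dim=0)
    if step[..., 15].all():                            # stop once <END> is predicted
        break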

",42699,,42699,,1/4/2021 2:03,1/4/2021 2:03,,,,10,,,,CC BY-SA 4.0 25519,2,,23842,1/2/2021 9:23,,1,,"

Your question seems to be talking about two slightly different topics:

  • Pros and cons of 'one vs rest' approach in multi-class classification
  • Use of Neural Networks in single-output vs multi-class classification problems

One vs Rest in Multi-Class Classification

Recognising digits is an example of multi-class classification. The approach you outline is the kind of approach summarised in the "One vs Rest" section of the Wikipedia page on multi-class classification. The page notes the following issues with this approach:

Firstly, the scale of the confidence values may differ between the binary classifiers. Second, even if the class distribution is balanced in the training set, the binary classification learners see unbalanced distributions because typically the set of negatives they see is much larger than the set of positives.

You might also like to look into another approach called One vs One ('One vs Rest' vs 'One vs One') which sets up the classification problem as a set of binary alternatives. In the digit recognition case you'd end up with a classifier for "1 or 2?", "1 or 3?", "1 or 4?", etc. This might help with the "4 vs 9" problem, but it does mean an enormous number of classifiers, which might be better represented in some kind of network. Perhaps even a network inspired by brain neurons.
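For a concrete comparison, here is a compact sketch (scikit-learn assumed) fitting both decompositions to the digits dataset:

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_digits(return_X_y=True)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for name, clf in [("one-vs-rest", OneVsRestClassifier(LogisticRegression(max_iter=1000))),
                  ("one-vs-one", OneVsOneClassifier(LogisticRegression(max_iter=1000)))]:
    print(name, clf.fit(Xtr, ytr).score(Xte, yte))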

Use of Neural Networks in single output vs multi-class classification

There is nothing magical about a neural network that means it has to be used for multi-class classification. Nor is there anything magical about it that makes it the only option for multi-class classification.

For example:

Conclusions

A 10-class neural network is used to identify digits because this has turned out to be an efficient way of doing so when compared with the one-vs-rest and one-vs-one approaches.

A bit off-topic, perhaps, but if you think about this in the context of T5, there does seem to be a trend of moving towards larger more multi-purpose models rather than lots of small specialised models.

",43256,,,,,1/2/2021 9:23,,,,0,,,,CC BY-SA 4.0 25520,1,,,1/2/2021 9:26,,0,82,"

I'm trying to understand how the logic gates (e.g. AND, OR, NOT, NAND) can be built into single-layer perceptrons.

I understand specific examples of weights and thresholds for the gates, but I'm stuck on understanding how to translate these to general inequalities for these gates.

Is my reasoning correct in the table below, or are there cases where these general inequalities do not make sense for this problem? Am I missing other logic gates that can be done in a similar fashion? (e.g., I know XOR cannot)?

In the table below, a perceptron has two input nodes, and one output node. W1 and W2 are the weights (real values) on those input nodes. T is the threshold, above which, the perceptron will fire. I have come up with example values that would work for each logic gate (e.g., for the AND gate, a perceptron with two input weights, W1 = 1 and W2 = 1, and a threshold = 2, will fire, and I'm trying to understand more generally, what is the equation needed for each gate).

| Gate | Example W1, W2 | Threshold | General inequalities |
|------|----------------|-----------|----------------------|
| AND  | 1, 1           | 2         | W1 + W2 >= t, where W1, W2 > 0 |
| OR   | 1, 1           | 1         | W1 > t or W2 > t |
| NOT  | -1             | -0.49     | W1 > 2(t) |
| NAND | -2, -2         | 3         | W1 + W2 <= t |
",42926,,42926,,1/3/2021 10:40,1/3/2021 10:40,What are the general inequalities needed for the logic gate perceptrons?,,0,2,,,,CC BY-SA 4.0 25521,1,,,1/2/2021 9:48,,0,56,"

I am reading an exam question about NN (that I cannot publish, for copyright reasons). The question says: 'Construct a rectangle in 2D space. Define the lines, and then define the weights and threshold that will only fire for points inside the rectangle.'

I understand that this is an example of a rectangle drawn as a NN (i.e. this NN will fire, if the point is in the rectangle, where the rectangle is defined by the lines X = 4; X = 1, Y = 2, Y = 5).

In this diagram, since it's a rectangle, the equations of the line in this example are x = 4, x =1, y=2, y=5, so I left the other weights out (as they equal to 0).

I'm now wondering how this could be translated to a 3D structure. For example, if a 3D shape was defined by the points:

(0,0,0), (0,1,0), (0,0,1), (0,1,1), (1,0,0), (1,1,0), (1,0,1), (1,1,1)

I wanted to draw a hyperplane that separates the corner point (1,1,1) from the other points in this cube. Can this 3D shape be drawn similarly to below (maybe it would be easier to understand, if there were other numbers except 1 and 0 in the co-ordinates)?

Would I draw this with 3 nodes in the input layer and still one node in the output layer? I just don't understand what the hidden layer should look like. Would it have 24 nodes, one for each surface of the cube, with relevant X and Y values?

",42926,,42926,,1/3/2021 12:45,1/3/2021 12:45,How to draw a 3-dimensonal shape's neural network,,0,6,,,,CC BY-SA 4.0 25522,1,25575,,1/2/2021 11:58,,0,244,"

If I have a neural network, and say the 6th output node of the neural network is:

$$x_6 = w_{16}y_1 + w_{26}y_2 + w_{36}y_3$$

What does that make the derivative of:

$$\frac{\partial x_6}{\partial w_{26}}$$

I guess that it's how $x_6$ is changing with respect to $w_{26}$; so, therefore, is it equal to $y_2$ (since the output will change depending on the weight applied to the input $y_2$)?

",42926,,2444,,1/2/2021 23:21,1/4/2021 22:42,What is the derivative of a specific output with respect to a specific weight?,,1,0,,,,CC BY-SA 4.0 25523,2,,25513,1/2/2021 17:11,,0,,"

The noise vector (batch_size, 100, 1, 1) is deconvolved with filter_1 (100, 4, 4). The result is feature_map_1 (1, 4, 4). Since there are 512 filters, there will be 512 feature maps. The output shape will be (batch_size, 512, 4, 4).

I think you need a better understanding of convolutional calculations in general. It was explained very well in this Stack Exchange post.
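A minimal sketch (PyTorch) that just verifies the shape transformation described above:

import torch
import torch.nn as nn

nz, ngf, batch_size = 100, 64, 16
layer = nn.ConvTranspose2d(nz, ngf * 8, kernel_size=4, stride=1, padding=0, bias=False)

z = torch.randn(batch_size, nz, 1, 1)   # batch of latent vectors
out = layer(z)
print(out.shape)                        # torch.Size([16, 512, 4, 4])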

",41615,,41615,,1/2/2021 17:22,1/2/2021 17:22,,,,1,,,,CC BY-SA 4.0 25524,2,,22859,1/2/2021 17:15,,0,,"

What you should do as part of your exploration is to train various models of increasing complexity, starting from a simple linear model and ending with multi-layer neural networks (with non-linear activations, of course). If the nonlinear models are better, then that implies that your data do not follow a linear hyperplane.

Also check this out for recent trends: https://machinelearningmastery.com/auto-sklearn-for-automated-machine-learning-in-python/

",43358,,,,,1/2/2021 17:15,,,,0,,,,CC BY-SA 4.0 25526,1,25528,,1/2/2021 17:22,,0,162,"

If the derivative is supposed to give the rate of change of a function at that point, then why is the derivative of the softmax layer (a vector) the Jacobian matrix, which has a different shape than the output/softmax vector? Why is the shape of the softmax vector's derivative (the Jacobian) different than the shape of the derivative of the other activation functions, such as the ReLU and sigmoid?

",43270,,2444,,1/2/2021 21:33,1/2/2021 21:33,Why is the derivative of the softmax layer shaped differently than the derivative of other neurons?,,1,1,,,,CC BY-SA 4.0 25527,1,,,1/2/2021 17:49,,15,474,"

I learned about the universal approximation theorem from this guide. It states that a network even with a single hidden layer can approximate any function within some bound, given a sufficient number of neurons. Or mathematically, ${|g(x)−f(x)|< \epsilon}$, where ${g(x)}$ is the approximation, ${f(x)}$ is the target function and is $\epsilon$ is an arbitrary bound.

A polynomial of degree $n$ has at maximum $n-1$ turning points (where the derivative of the polynomial changes sign). With each new turning point, the approximation seems to become more complex.

I'm not necessarily looking for a formula, but I'd like to get a general idea on how to figure out the sufficient number of neurons is for a reasonable approximation of a polynomial with a single layer of the neural network (you may consider "reasonable" to be $\epsilon = 0.0001$). To ask in other words, how would adding one more neuron affect the model's ability to express a polynomial?

",38076,,2444,,1/2/2021 23:06,1/2/2021 23:06,What is the number of neurons required to approximate a polynomial of degree n?,,0,2,,,,CC BY-SA 4.0 25528,2,,25526,1/2/2021 19:08,,3,,"

The softmax activation function is usually used as the last layer of your network, to get an output that is a vector. Now, your confusion is about shapes, so let's review a bit of calculus.
If you have a function $$f:\mathbb{R}\rightarrow\mathbb{R}$$ the derivative is a function on its own and you have $$f':\mathbb{R}\rightarrow\mathbb{R}.$$ If you increase the dimension of the input space have $$f:\mathbb{R}^n\rightarrow\mathbb{R}.$$ The "derivative" in this case is called gradient and it is a vector collecting the $n$ partial derivatives of $f$. The input space of the gradient function is $\mathbb{R}^n$ (the same as for $f$), but the output is the collection of the $n$ derivatives, so the output space is also $\mathbb{R}^n$. In other words $$\nabla f:\mathbb{R}^n\rightarrow\mathbb{R}^n,$$ which makes sense as for each point $x$ of the input space you get a vector ($\nabla f(x)$) as output.
So far so good, but what happens if you consider a function that takes a vector as input and spits out a vector as output, i.e. $$f:\mathbb{R}^n\rightarrow\mathbb{R}^m?$$ How to compute the equivalent of the derivative? (This is the softmax case, where you have a vector as input and a vector as output.)
You can reduce this case to the previous case considering $f=(f_1, \dots, f_m)$, where $f_i:\mathbb{R}^n\rightarrow\mathbb{R}.$ Now for each $f_i$ you can compute the gradient $\nabla f_i$ and end up with $m$ gradients. When you evaluate them at a point $x\in\mathbb{R}^n$ you get $m$ $n$-dimensional vectors. These vectors can be collected in a matrix, which is the Jacobian; formally $$Jf:\mathbb{R}^n\rightarrow\mathbb{R}^{m\times n}.$$ Finally, to answer your question, you get a Jacobian "instead" of a gradient (they all represent the same concept) because the output of the softmax is not a single number but a vector.
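A small numeric illustration (NumPy) for the softmax specifically: its Jacobian at a point is the $n \times n$ matrix $J = \text{diag}(s) - ss^T$, where $s$ is the softmax output.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5])
s = softmax(z)
J = np.diag(s) - np.outer(s, s)   # one row of partial derivatives per output component
print(J.shape)                    # (3, 3)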
By the way, the sigmoid and ReLU are functions with one-dimensional input and output, so they don't really have a gradient but a derivative. The trick is that people write $\sigma(W)$, where $W$ is a vector or a matrix, but they mean that $\sigma$ is applied component-wise, as a function from $\mathbb{R}$ to $\mathbb{R}$ (I know, it's confusing).
(I know, I kind of skipped your other question, but this answer was already long and I think you can convince yourself that the dimensions match correctly (with the Jacobian) for the update rule to work. If not I'll edit)

",42424,,,,,1/2/2021 19:08,,,,1,,,,CC BY-SA 4.0 25529,1,,,1/2/2021 20:07,,1,51,"

I'm reading this book chapter, and I'm looking at the questions on the last page. Can someone explain question 2 on the last page to me, or show me an example of a solution so I can understand it?

The question is:

Consider a simple perceptron with $n$ bipolar inputs and threshold $\theta = 0$. Restrict each of the weights to have the value $−1$ or $1$. Give the smallest upper bound you can find for the number of functions from $\{−1, 1 \}^n$ to $\{−1, 1\}$ which are computable by this perceptron. Prove that the upper bound is sharp, i.e., that all functions are different.

What I understand:

  1. A perceptron is a very simple network with $n$ input nodes, a weight assigned to each input nodes, which are then summed to be above/not above a threshold ($\theta$).

  2. In this example, there are $n$ input nodes, and the value of each input node is either $−1$ or $1$. And we want to map them to outputs of either $−1$ or $1$.

What I'm confused about: Is it asking how many different ways can you map input values of $\{−1, 1\}$ to $\{−1, 1\}$ output?

For example, is the answer, where each tuple in this list is input1, input2 and label, as described above:

$$[(1,1,1), (1,1,-1), (-1,1,-1), (-1,1,1), (1,-1,1), (1,-1,-1), (-1,-1,-1)]$$

",42926,,2444,,1/3/2021 12:01,1/3/2021 12:01,What is the smallest upper bound for a number of functions in a range that are computable by a perceptron?,,0,2,,,,CC BY-SA 4.0 25530,2,,25478,1/2/2021 20:09,,1,,"

This is mostly an implementation/architecture problem, and the thing is that basically you can implement anything in the traditional setting. To do so, instead of having Env<->Agent1<->Agent2, you should have Agent1<->SuperEnv<->Agent2, where SuperEnv contains Env and simply takes the reward given to SuperEnv by Agent1 and passes it to Agent2.

I know this might seem a little counter-intuitive when comparing the implementation to the real-world problem setting, but the consistency of the RL structure (i.e. Environment that interacts with all the agent) is very important for your solutions to be easily understandable by others.

",42903,,,,,1/2/2021 20:09,,,,2,,,,CC BY-SA 4.0 25531,1,,,1/3/2021 0:58,,1,25,"

I'm doing character embedding for NLP tasks using one-dimensional convolutional neural networks (see Chiu and Nichols (2016) for the motivation). I haven't found any empirical evidence of whether or not marking the word boundaries makes a difference. As an example, a 1-D CNN with kernel size 2 would take "the" as input and use {"th", "he"} in its filters. But if I explicitly marked the boundaries it would give me {"t", "th", "he", "h"}.

Is there a go-to paper or project that definitively answers this question?

",19703,,19703,,1/3/2021 18:30,1/3/2021 18:30,Is there any research work that shows that we should explicitly mark the word boundaries for 1D CNNs?,,0,1,,,,CC BY-SA 4.0 25532,1,25533,,1/3/2021 1:59,,1,64,"

Does it make sense to incorporate constant states in the Markov Decision Process and employ a reinforcement learning algorithm to solve it?

For instance, for applications of personalization, I would like to include users' attributes, like age and gender, as states, along with other changing states. Does it make sense to do so? I have a sequential problem, so I assume the contextual bandit does not fit here.

",23707,,2444,,1/3/2021 19:33,1/3/2021 19:33,Does it make sense to include constant states into reinforcement learning formulation?,,1,0,,,,CC BY-SA 4.0 25533,2,,25532,1/3/2021 7:27,,0,,"

It is, I suppose, a philosophical question whether data that describes a whole episode and does not respond to events within it is part of the state, or is part of some other structure.

However, the practical response is to view such descriptive data as defining an instance of a class of related environments, and to include it in the state features. This may be done for two main reasons:

  • The static data is a relevant parameter of the environment, affecting state transitions and rewards.

  • It is possible to generalise over the population of all values that the parameters can take.

In simple environments, generalisation might only be that the same agent can learn about all variations in a single combined training session. You could use a tabular RL method, starting randomly with one of the possible variations until all were sufficiently covered.

In more complex environments, generalisation may also occur through functon approximation, in a similar manner to contextual bandits. In your personalisation example, you are not expecting to train the agent for all possible user descriptions, but hope that people with similar age, gender etc descriptions will respond similarly to an agent that personalises content.

Philosophically, the contextual data is either part of a larger state space (with a restriction that transitions between different contexts do not happen within an episode), or it is metadata that impacts the "real" state transitions and rewards. Pragmatically, to allow the data to influence value functions and policies, it is necessary to use it in the arguments of those functions. Whether you then view it as part of the state feature vector or as something that is concatenated to state features is a personal choice. Most of the literature I have seen assumes without comment that it is part of the state.

",1847,,1847,,1/3/2021 10:35,1/3/2021 10:35,,,,0,,,,CC BY-SA 4.0 25534,1,25577,,1/3/2021 8:01,,1,221,"

I have an interesting problem related to training a model on two different datasets for the same target, where the images are taken under different conditions, which might affect the model's ability to generalize.

To explain I will give examples of images from the two different datasets.

Dataset 1 sample image:

Dataset 2 sample image:

As you see, the images are captured under two completely different conditions. I am afraid that the model will make inferences from background information that it shouldn't use to predict the plant diseases. What makes the problem worse is that some plant diseases only exist in one dataset and not the other; if all the diseases were contained in both datasets, then I wouldn't think there would be a problem.

I am assuming I need a way to unify the background by somehow detecting the leaf pixels in the images and unifying the background in a way that makes the model focus on the important features.

I've tried some segmentation methods but the methods I tried don't always give desirable results for all the images.

What is the recommended approach here? All help is appreciated

Further explanation of the problem.

OK, so I will explain one more thing: my model trained on the two datasets works fine when training and validating; it got 94% accuracy.

The problem is, even though the model performs well on the datasets I have, I am afraid that when I use the model on real-life conditions (say someone capturing an image with their phone) the model will be heavily biased towards predicting labels in the second dataset (the one with actual background) since the background is similar and it somehow associated the background with the inference process.

I have tried downloading a leaf image with a label that is contained in the first dataset (the one with the white background), where the image had a real-life background. The model, as expected, failed to predict the correct label and predicted a label contained in the second dataset; I am assuming it was due to the background. I have tried this experiment multiple times, and the model consistently failed in similar scenarios.

I used some interpretability techniques as well to visualize the important pixels, and it seems like the model is using the background for inference, but I am not an expert in interpreting these graphs, so I am not 100% sure.

",43480,,2444,,1/3/2021 14:06,1/4/2021 20:09,Training a classifier on different datasets with different image conditions for different labels causes the model to infer using the background,,1,0,,,,CC BY-SA 4.0 25535,2,,25465,1/3/2021 8:38,,1,,"

Your problem fundamentally is that you are confusing what the state and actions are in this setting. Webpages are not your states; your state is the entire priority queue of (website-outlink) pairs + the (new_website-outlink) pairs. Your action is which pair you select.

Now this is a variable sized state-space and variable sized action-space problem setting at the same time. To deal with this lets start by noting that state==observation need not be (in general). So what is your observation? Your observation is a variable-sized batch of either:

  1. (website-outlink) pairs or
  2. next_website (where each next_website is determined by its corresponding pair)

Both of these observations could work just fine, choosing between one or the other is just a matter of whether you want your agent to learn "which links to open before opening them" or "which links are meaningful (after opening them)".

What your priority queue is essentially doing is just adding a neat trick that:

  • Saves the computational complexity of keeping the state ordered (remember that your state is not a website, but the list/batch of website-outlink)
  • Avoids needlessly recomputing the Q-values for each of your actions (remember that an action is not selecting an outlink from new_website, but selecting an outlink from all available choices in the queue)

Note however that to actually have the second saving it is crucial to store the Q-values for each pair!!!

The last important thing to note is that in a scenario where you use a Replay Buffer (which I guess is likely, given that you chose a DQN), you can't use the priority queue whilst learning from the RB. To see why (and to see in detail what your learning process actually looks like), start by remembering that your Q-value updates are given by the formula here; your state s_t is a (quasi-ordered1) batch of pairs. Q(s_t, a_t) is just the output of running your DQN regression on just the best website/pair in this batch (you have to add an index to denote the best choice when adding transitions to the RB, in order to be consistent about which action was taken from this state). To compute the estimate of the optimal future value, however, you will have to recompute the Q-value of every single website/pair in the next state. You CANNOT use the priority queue when training from the RB.

1 You have the priority queue ordered for all websites you had in it whilst looking at the last website, but all the new_website-outlink pairs you are now adding are not ordered yet. You still have to run the agent on them, and then you can order them with the rest of the priority queue to generate the next state (which still will not be ordered, because you will have new_new_website-outlink pairs).

",42903,,,,,1/3/2021 8:38,,,,3,,,,CC BY-SA 4.0 25537,1,,,1/3/2021 9:59,,1,47,"

I was facing a problem I mentioned in a previous question, but after a while, I realized that maybe the problem is in the dataset, not in the learning rate.

I build the dataset from white positions only, i.e. the boards when it's white's turn.

Each data set consists of one game.

First, I tried to let the agent play a game and then learn from it immediately, but that did not work, and the agent converges to one state of playing (it only loses or wins against itself, or draws in a stupid way).

Second, I tried to let the agent play about 1000 games against itself and then train on each game separately, but that also did not work.

Note: the first and second approaches describe one iteration of the learning process; I let the agent repeat them, so in total it trained on about 50000 games.

Is my approach wrong? Must my dataset be built in another way? Maybe train the agent on several games at once?

My project is here if someone needs to take a closer look at it: CheckerBot

",36578,,36578,,1/3/2021 12:25,1/3/2021 12:25,Is training on single game each time appropriate for an agent to learn to play checkers,,0,0,,,,CC BY-SA 4.0 25539,1,,,1/3/2021 10:33,,1,182,"

I have a problem that arose as part of a NEAT (Neuro Evolution Through Augmenting Topologies) implementation that I am writing. I am wanting it to produce topologies or graphs that describe neural networks, similar to the one below.

Here, nodes 0 and 1 are inputs, and 4 is the output node, the rest of the nodes are hidden nodes. Each of these nodes can have some activation function defined for them (not necessary that all the hidden nodes have the same activation function)

Now, I want to perform the forward pass of this neural network with some data, and, based on how well it performed in that task, I assign it with a fitness value, which is used as part of the NEAT evolutionary algorithm to move towards better architectures and weights.

So, as part of the evolution process, I can have connections that can cause internal loops in the hidden layers and there is the possibility that a skip connection is made. Because of this, I feel the regular matrix-based forward pass (of fully connected MLPs) will not work in order to perform the forward pass of these evolved neural networks, and hence I want to know if an algorithm exists that can solve this problem.

In short, I want this neural network to just take the inputs and provide me outputs - no training involved at all, so I'm not interested in the back-propagation part now.

The only way to solve this that I see is to use something along the lines of a job queue (the queue would consist of the nodes that need processing, in order). I feel this is extremely inefficient, and I cannot give this simulation method a proper stop condition, or even decide when to take the output from the neural network graph and consider it.

Can anybody at least point me in the right direction?

",43484,,2444,,1/3/2021 14:23,1/28/2021 8:00,"How can I perform the forward pass in a neural network evolved with NEAT, given that some connections may not exist or there may be loopy connections?",,0,1,,,,CC BY-SA 4.0 25540,2,,23964,1/3/2021 11:09,,3,,"

Another good resource is the free CatalyzeX browser extension — it adds in-line links to any relevant code wherever you come across papers on various websites: AI/ML Papers with Code Everywhere - CatalyzeX

Full disclosure: I'm one of the creators. It's actively maintained and all feedback and requests are welcome!

",43485,,2444,,5/10/2021 0:20,5/10/2021 0:20,,,,0,,,,CC BY-SA 4.0 25542,1,,,1/3/2021 12:56,,1,33,"

Given the original paper (https://arxiv.org/pdf/1809.02864.pdf), I would like to implement the Accelegrad algorithm for which I report the pseudocode of the paper:

In the pseudocode, the authors refer to a compact convex set $K$ of diameter $D$. The question is whether I can know these elements in practice. I think that they are theoretical conditions needed to satisfy some theorems. The problem is that the diameter $D$ is used in the learning rate, and the convex set $K$ is used to perform the projection step of gradient descent. How can I proceed?

",32694,,2444,,1/3/2021 14:22,1/3/2021 14:22,How to derive compact convex set K and its diameter D to program Accelegrad algorithm in practice?,,0,0,,,,CC BY-SA 4.0 25543,1,,,1/3/2021 15:08,,1,1100,"

I have been trying to solve the OpenAI lunar lander game with a DQN taken from this paper

https://arxiv.org/pdf/2006.04938v2.pdf

The issue is that it takes 12 hours to train 50 episodes so something must be wrong.

import os
import random
import gym
import numpy as np
from collections import deque
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import Model

ENV_NAME = "LunarLander-v2"

DISCOUNT_FACTOR = 0.9
LEARNING_RATE = 0.001

MEMORY_SIZE = 2000
TRAIN_START = 1000
BATCH_SIZE = 24

EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.01
EXPLORATION_DECAY = 0.99

class MyModel(Model):
    def __init__(self, input_size, output_size):
        super(MyModel, self).__init__()
        self.d1 = Dense(128, input_shape=(input_size,), activation="relu")
        self.d2 = Dense(128, activation="relu")
        self.d3 = Dense(output_size, activation="linear")

    def call(self, x):
        x = self.d1(x)
        x = self.d2(x)
        return self.d3(x)

class DQNSolver():

    def __init__(self, observation_space, action_space):
        self.exploration_rate = EXPLORATION_MAX

        self.action_space = action_space
        self.memory = deque(maxlen=MEMORY_SIZE)

        self.model = MyModel(observation_space,action_space)
        self.model.compile(loss="mse", optimizer=Adam(lr=LEARNING_RATE))

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        if np.random.rand() < self.exploration_rate:
            return random.randrange(self.action_space)
        q_values = self.model.predict(state)
        return np.argmax(q_values[0])

    def experience_replay(self):
        if len(self.memory) < BATCH_SIZE:
            return
        batch = random.sample(self.memory, BATCH_SIZE)
        state_batch, q_values_batch = [], []
        for state, action, reward, state_next, terminal in batch:
            # q-value prediction for a given state
            q_values_cs = self.model.predict(state)
            # target q-value
            max_q_value_ns = np.amax(self.model.predict(state_next)[0])
            # correction on the Q value for the action used
            if terminal:
                q_values_cs[0][action] = reward
            else:
                q_values_cs[0][action] = reward + DISCOUNT_FACTOR * max_q_value_ns
            state_batch.append(state[0])
            q_values_batch.append(q_values_cs[0])
        # train the Q network
        self.model.fit(np.array(state_batch),
                        np.array(q_values_batch),
                        batch_size = BATCH_SIZE,
                        epochs = 1, verbose = 0)
        self.exploration_rate *= EXPLORATION_DECAY
        self.exploration_rate = max(EXPLORATION_MIN, self.exploration_rate)

def lunar_lander():
    env = gym.make(ENV_NAME)
    observation_space = env.observation_space.shape[0]
    action_space = env.action_space.n
    dqn_solver = DQNSolver(observation_space, action_space)
    episode = 0
    print("Running")
    while True:
        episode += 1
        state = env.reset()
        state = np.reshape(state, [1, observation_space])
        scores = []
        score = 0
        while True:
            action = dqn_solver.act(state)
            state_next, reward, terminal, _ = env.step(action)
            state_next = np.reshape(state_next, [1, observation_space])
            dqn_solver.remember(state, action, reward, state_next, terminal)
            dqn_solver.experience_replay()
            state = state_next
            score += reward
            if terminal:
                print("Episode: " + str(episode) + ", exploration: " + str(dqn_solver.exploration_rate) + ", score: " + str(score))
                scores.append(score)
                break
        if np.mean(scores[-min(100, len(scores)):]) >= 195:
            print("Problem is solved in {} episodes.".format(episode))
            break
    env.close
if __name__ == "__main__":
    lunar_lander()

Here are the logs

root@b11438e3d3e8:~# /usr/bin/python3 /root/test.py
2021-01-03 13:42:38.055593: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.338231: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-01-03 13:42:39.368192: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.368693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-01-03 13:42:39.368729: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.370269: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-01-03 13:42:39.371430: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-01-03 13:42:39.371704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-01-03 13:42:39.373318: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-01-03 13:42:39.374243: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-01-03 13:42:39.377939: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-01-03 13:42:39.378118: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.378702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.379127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-01-03 13:42:39.386525: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3411185000 Hz
2021-01-03 13:42:39.386867: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fb44c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-01-03 13:42:39.386891: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
2021-01-03 13:42:39.498097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.498786: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fdf030 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-01-03 13:42:39.498814: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
2021-01-03 13:42:39.498987: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.499416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-01-03 13:42:39.499448: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.499483: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-01-03 13:42:39.499504: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-01-03 13:42:39.499523: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-01-03 13:42:39.499543: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-01-03 13:42:39.499562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-01-03 13:42:39.499581: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-01-03 13:42:39.499643: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.500113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.500730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-01-03 13:42:39.500772: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.915228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-01-03 13:42:39.915298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263]      0 
2021-01-03 13:42:39.915322: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0:   N 
2021-01-03 13:42:39.915568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.916104: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.916555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6668 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
Running
2021-01-03 13:42:40.267699: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10

This is the GPU stats

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.66       Driver Version: 450.66       CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 00000000:01:00.0  On |                  N/A |
|  0%   53C    P2    46W / 198W |   7718MiB /  8111MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

As you can see, TensorFlow reserves the GPU memory but does not seem to compute on the GPU, so I'm assuming it's because the inputs of the neural network are too small and it falls back to the CPU instead.

To make sure the GPU was installed properly, I ran a sample from their documentation and it uses the GPU.

Is it an issue with the algorithm or the code? Is there a way to utilize the GPU in this case?

Thanks!

",30875,,,,,1/3/2021 16:14,Simple DQN too slow to train,,2,5,,1/3/2021 17:17,,CC BY-SA 4.0 25544,2,,25543,1/3/2021 16:02,,1,,"

When it comes to GPU usage,

nvidia-smi

shows the usage at the time it was executed. You should try running

watch -n0.01 nvidia-smi

to see the GPU usage every 0.01 seconds. It should show some small usage for the current model, like 5%. You could try to increase your model size, e.g. to

self.d1 = Dense(1024, input_shape=(input_size,), activation="relu")
self.d2 = Dense(1024, activation="relu")
self.d3 = Dense(output_size, activation="linear")

to see if the GPU usage increases.

",,user43110,,,,1/3/2021 16:02,,,,1,,,,CC BY-SA 4.0 25545,2,,25543,1/3/2021 16:14,,2,,"

Is it training at all? Or is agent performance not improving over time? Q learning can be pretty unstable. I would recommend logging the sum of rewards received by the agent at the end of each episode and the model loss to help in the debugging process. The sum of rewards will show you if the agent is improving over time and the model loss will give you a rough idea about how stable the convergence is. I would recommend using tensorboard to log these metrics (https://www.tensorflow.org/tensorboard/get_started#using_tensorboard_with_other_methods). You will be able to monitor these metrics throughout the training process. You could also just print these metrics at the end of every episode and monitor them in your console. You really just need some way to see what's going on during training.

In the paper you linked, it also mentioned double q learning, which in your code does not seem to be implemented. Vanilla q learning can have a reputation of being overoptimistic in the values that it assigns to states. This results in compounding approximation errors, which tend to destabilize learning. Using double q learning may help speed up convergence. If you need help with double q learning check out this paper: https://arxiv.org/pdf/1509.06461.pdf, and this github page: https://github.com/jihoonerd/Deep-Reinforcement-Learning-with-Double-Q-learning/blob/master/ddqn/agent/ddqn_agent.py

If you use double q learning, you may have to write your own custom training loop. This can be achieved by using the gradient tape object. Make sure to wrap this new function in a tf.function decorator. This will tell the TensorFlow back-end to compile that bit of code, making it run faster (https://www.tensorflow.org/guide/function). There are also some handy speed-up tips in this post (https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic). They even wrap the environment step functions in tf.functions. The article uses actor-critic, which is a combination of policy gradient and q learning techniques, but you can swap out their neural network update code with the q learning functionality that you need. A rough sketch of such a compiled update step is given below.
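
As a rough sketch of what such a compiled update step could look like (this is not a drop-in replacement: it assumes model is your Q-network, target_model is a second, periodically-synced copy of it, optimizer is a tf.keras optimizer, DISCOUNT_FACTOR is your gamma, and that you sample whole batches of transitions as tensors):

import tensorflow as tf

# states/next_states: float32 [batch, obs_dim], actions: int32 [batch],
# rewards/terminals: float32 [batch] (terminals is 1.0 where the episode ended)
@tf.function
def train_step(states, actions, rewards, next_states, terminals):
    # double Q-learning: the online net picks the next action, the target net evaluates it
    next_actions = tf.argmax(model(next_states), axis=1, output_type=tf.int32)
    next_q = tf.gather(target_model(next_states), next_actions, batch_dims=1)
    targets = rewards + DISCOUNT_FACTOR * next_q * (1.0 - terminals)
    with tf.GradientTape() as tape:
        q_taken = tf.gather(model(states), actions, batch_dims=1)
        loss = tf.reduce_mean(tf.square(targets - q_taken))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss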

",40428,,,,,1/3/2021 16:14,,,,2,,,,CC BY-SA 4.0 25547,2,,12671,1/3/2021 20:36,,1,,"

As opposed to what is written in this answer, you can have the analytical expression of the function that the neural network computes, even if that neural network computes a non-linear function. Take a look at this example: the analytical expression is just the expression that you use to perform the forward pass of the neural network. The problem is how to interpret this function, how it generally behaves, and how it differs from the usually unknown function that the neural network is supposed to approximately compute: that's why neural networks are called black-box models, because they are not easily interpretable.

",2444,,,,,1/3/2021 20:36,,,,0,,,,CC BY-SA 4.0 25548,1,,,1/3/2021 21:49,,0,167,"

I am reading this paper, which discusses the use of distance metrics for character recognition prediction.

I can see the advantages of using a distance metric in predictions like character recognition: you can model a set of known characters' pixel vectors, then measure a new unseen vector against this model, get the distance, and, if the distance is low, predict that this unseen vector belongs to a particular class.

I'm wondering if there are any disadvantages to using distance metrics as the cost function in character recognition. For example, I was thinking that maybe the distance calculation is slow for large images (you would have to calculate the distance between each item in two long image vectors)?

",42926,,,,,10/1/2021 3:16,What are the disadvantages to using a distance metric in character recognition prediction,,1,0,,,,CC BY-SA 4.0 25551,2,,25548,1/4/2021 0:27,,1,,"

As I see it, the question boils down to the comparison between distance (function/metric) based Optical Character Recognition (OCR) and (for example) OCR done by means of Convolutional Neural Networks (CNNs). Particularly, it focuses on the cons of the former option.

There are a few potential problems associated with using distance based OCR systems. First of all, this approach requires an appropriate distance metric to deliver good results. Different types of distance metrics/functions are sensitive to different features in the input images. For example, some functions might penalize absolute differences, while others penalize squared differences, where the latter punishes differences of magnitude $> 1$ stronger than the former, while the former penalizes differences of (absolute) magnitude $< 1$ stronger than the latter. Which type of distance metric works best for a given problem commonly has to be determined empirically (a toy sketch contrasting two common choices is given below).
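
To make the contrast concrete, here is a toy sketch of nearest-template classification with two common distance choices; the template dictionary is assumed to hold one averaged pixel vector per known character class:

import numpy as np

def classify(image_vec, templates, metric="l2"):
    best_label, best_dist = None, np.inf
    for label, ref in templates.items():
        diff = image_vec - ref
        if metric == "l1":
            d = np.abs(diff).sum()        # penalizes absolute differences
        else:
            d = np.square(diff).sum()     # penalizes squared differences
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label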

Also, distance based OCR systems may possibly be more sensitive to, for example, different levels in illumination than CNNs. While both sorts of classifiers could profit from data augmentation (i.e. adding variations of the existing training data to the dataset), OCR based on CNNs has the benefit that CNN training procedures produce classifiers that commonly generalize well to slight variations in the incoming data, while some slight novel tilt (or again variation in illumination) may break the distance based classification procedure, but may have not too detrimental effects on CNN based classifiers due to their rather strong generalization capabilities.

Of course, one can also try to increase the robustness of distance based OCR systems, but this is commonly associated with developing exhaustive preprocessing pipelines to standardize the appearance of incoming character images. Thus, in terms of system design, it is often easier to set up a CNN-based neural network architecture and train it (possibly drawing upon regularization to boost generalization of the trained system even further) than trying to design complicated preprocessing strategies to shift, rotate, and normalize character images exhaustively to boost performance of a distance based OCR system.

To sum up, after all, there are a lot of potential issues related to using plain distance (metric/function) based OCR systems, most of which, however, can strongly be alleviated when drawing upon clever (pre-)processing pipelines to standardize input images before performing distance based classification.

From all those issues mentioned above, the strongest disadvantage of this technique, however, (which cannot be alleviated so easily) is that it doesn't generalize too easily to novel input data, where CNNs might perform better with appropriate training data (+ augmentation and regularization in general).

You also mentioned the aspect of computational cost associated with distance based classification. First of all, I am not an expert in computational complexity/cost related questions. However, most image related computations can efficiently be executed on graphics cards. So, subtracting images pixel-wise, for example, is not very costly given it can be executed on a graphics card. But of course, when using distance based OCR, the computational cost associated with computing distances and comparing distance values scales with the reference dataset size. Using CNNs, the time needed to perform a classification is always the same irrespective of the training dataset size.

",37982,,,,,1/4/2021 0:27,,,,0,,,,CC BY-SA 4.0 25552,2,,25481,1/4/2021 0:39,,1,,"

In image segmentation the target is actually an image, with the same dimensions as the input, where each pixel has a label depending on which class it represents. It is not uncommon for such a dataset to have a "background" class that essentially consists of the pixels not belonging to any other class. If not you can always group together classes typically associated with background (e.g. "sky", "cloud", "grass", "mountain", etc.) to form the class "background". Likewise you could group all other possible classes of interest (e.g. "person", "car", "horse", etc.) into the class "foreground". With this dataset you could train an image segmentation model that predicts if a pixel belongs to the background or the foreground, without actually classifying it into a "person" or a "car".

So suppose you want to make your own removal.ai, you could:

  • find one or more diverse image segmentation datasets (it needs to be diverse so that it will work on any generic photo uploaded to the site)
  • check all the unique classes in the labels
  • group all classes associated with backgrounds into class 0 (i.e. "background" class)
  • group all classes associated with foregrounds into class 1 (i.e. "foreground" class)
  • train an image segmentation model with these two classes
  • profit
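
To make the grouping steps above concrete, here is a minimal sketch of turning a multi-class label map into a binary background/foreground map; the class ids below are made up and would depend on your dataset:

import numpy as np

BACKGROUND_CLASSES = {0, 3, 7}   # e.g. ids of "sky", "grass", "mountain" in some dataset

def to_binary_mask(label_map):
    # label_map: (H, W) array of integer class ids
    background = np.isin(label_map, list(BACKGROUND_CLASSES))
    return np.where(background, 0, 1).astype(np.uint8)   # 0 = background, 1 = foreground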
",26652,,,,,1/4/2021 0:39,,,,0,,,,CC BY-SA 4.0 25553,1,,,1/4/2021 1:43,,5,602,"

I was studying the off-policy policy improvement method. Then I encountered importance sampling. I completely understood the mathematics behind the calculation, but I am wondering what is the practical example of importance sampling.

For instance, in a video, it is said that we need to calculate the expected value of a biased dice, here $g(x)$, in terms of the expected value of fair dice, $f(x)$. Here is a screenshot of the video.

Why do we need that, when we have the probability distribution of the biased dice?

",43507,,2444,,1/4/2021 22:21,4/7/2021 9:40,Why do we need importance sampling?,,1,0,,,,CC BY-SA 4.0 25555,1,,,1/4/2021 6:26,,2,150,"

The basic approach to non-maximum-suppression makes sense, but I am kind of confused about how you handle nested bounding boxes.

Suppose you have two predicted boxes, with one completely enclosing another. What happens under this circumstance (in particular, in the case of the Single Shot MultiBox Detector)? Which bounding box do you select?

",32390,,2444,,1/7/2021 17:55,1/7/2021 17:55,How are nested bounding boxes handled in object detection (and in particular in the case of the SSD)?,,0,0,,,,CC BY-SA 4.0 25557,1,,,1/4/2021 6:47,,1,150,"

Let us assume that I am working on a dataset of black and white dog images.

Each image is of size $28 \times 28$.

Now, I can say that I have a sample space $S$ of all possible images. And $p_{data}$ is the probability distribution for dog images. It is easy to understand that all other images get a probability value of zero. And it is obvious that $n(S)= 2^{28 \times 28}$.

Now, I am going to design a generative model that sample from $S$ using $p_{data}$ rather than random sampling.

My generative model is a neural network that takes random noise (say, of length 100) and generates an image of size $28 \times 28$. My model is learning a function $f$, which is totally different from the function $p_{data}$, because $f$ maps from $\mathbb{R}^{100}$ to $S$, while $p_{data}$ maps from $S$ to $[0,1]$.

In the literature, I often read phrases such as "our generative model learned $p_{data}$" or "our goal is to get $p_{data}$", etc., but, in fact, they are trying to learn $f$, which merely obeys $p_{data}$ while producing its output.

Am I going wrong anywhere, or is the usage in the literature somewhat loose?

",18758,,,,,1/4/2021 8:16,Confusion between function learned and the underlying distribution,,1,0,,,,CC BY-SA 4.0 25558,2,,25557,1/4/2021 8:16,,3,,"

You're right! The generative model $f$ is not the same as the probability density (p.d.f.) function $p_{data}$. The kind of phrases you've referred to are to be interpreted informally. You learn $f$ with the hope that sampling a latent vector $z$ from some known distribution (from which it is easy to sample), results in $f(z)$ that has the probability density function $p_{data}$. However, merely learning $f$ does not give you the power to estimate what $p_{data}(x)$ is for some image $x$. Learning $f$ only gives you the power to sample according to $p_{data}(\cdot)$ (if you've learned an accurate such $f$).

",36974,,,,,1/4/2021 8:16,,,,0,,,,CC BY-SA 4.0 25559,2,,25553,1/4/2021 9:20,,7,,"

Importance sampling is typically used when the distribution of interest is difficult to sample from - e.g. it could be computationally expensive to draw samples from the distribution - or when the distribution is only known up to a multiplicative constant, such as in Bayesian statistics where it is intractable to calculate the marginal likelihood; that is

$$p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)} \propto p(x|\theta)p(\theta)$$

where $p(x)$ is our marginal likelihood that may be intractable and so we can't calculate the full posterior and so other methods must be used to generate samples from this distribution. When I say intractable, note that

$$p(x) = \int_{\Theta} p(x|\theta)p(\theta) d\theta$$

and so intractable here means that either a) the integral has no analytical solution or b) a numerical method for computing this integral may be too expensive to run.

In the instance of your die example, you are correct that you could calculate the theoretical expectation of the biased die analytically, and this would probably be a relatively simple calculation. However, to motivate why importance sampling may be useful in this scenario, consider calculating the expectation using Monte Carlo methods. It would be much simpler to uniformly sample a random integer from 1-6 and calculate the importance sampling ratio $x \frac{g(x)}{f(x)}$ than it would be to draw samples from the biased die, not least because most programming languages have built-in methods to randomly sample integers.
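
As a small numerical sketch of this (the bias probabilities below are made up), one can estimate the biased-die expectation while only ever sampling from the fair die:

import numpy as np

g = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.5])   # biased die p.m.f. over faces 1..6 (illustrative)
f = np.full(6, 1 / 6)                           # fair die p.m.f. (easy to sample from)

samples = np.random.randint(1, 7, size=100000)  # draws from the fair die
weights = g[samples - 1] / f[samples - 1]       # importance ratios g(x)/f(x)
estimate = np.mean(samples * weights)           # approximates sum_x x * g(x)

print(estimate)   # close to the true expectation, which is 4.5 for these numbers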

As your question is tagged as reinforcement learning, I will add why it is useful in the RL domain. One reason is that it may be that our policy of interest is expensive to sample from, so instead we can just generate actions from some other simple policy whilst still learning about the policy of interest. Second, we could be interested in a policy that is deterministic (greedy) but still want to be able to explore, so we can have an off-policy distribution that explores much more frequently.

NB: it may not be clear how you can use importance sampling if the distribution is only known up to a constant so see this answer for an explanation.

",36821,,36821,,4/7/2021 9:40,4/7/2021 9:40,,,,0,,,,CC BY-SA 4.0 25560,1,,,1/4/2021 9:50,,1,196,"

For action recognition or similar tasks, one can either use 3D CNN or combine 2D CNN with optical flow. See this paper for details.

Can someone tell the pros/cons of each, in terms of accuracy, cost such as computation and memory requirement, etc.? In other words, is the computation overhead of 3D CNN justified by its accuracy improvement? Under what scenarios would one prefer one over another?

3D CNNs are also used for volumetric data, such as MRI images. Can 2D CNN + optical flow be used here?

I understand 2D CNNs and 3D CNNs, but I do not know about optical flow (my background is not computer-vision).

",20491,,2444,,1/5/2021 12:03,1/5/2021 12:03,What are the pros and cons of 3D CNN and 2D CNN combined with optical flow for action recognition?,,0,0,,,,CC BY-SA 4.0 25561,1,,,1/4/2021 10:38,,5,141,"

I've been working on research into reproducing social behavior using multi-agent reinforcement learning. My focus has been on a GridWorld-style game, but I was thinking that maybe a simpler Prisoner's Dilemma game could be a better approach. I tried to find existing research papers in this direction, but couldn't find any, so I'd like to describe what I'm looking for in case anyone here knows of such research.

I'm looking for research into scenarios where multiple RL agents are playing Iterated Prisoner's Dilemma with each other, and social behaviors emerge. Let me specify what I mean by "social behaviors." Most research I've seen into RL/IPD (example) focuses on how to achieve the ideal strategy, and how to get there the fastest, and what common archetypes of strategies emerge. That is all nice and well, but not what I'm interested in.

An agent executing a Tit-for-Tat strategy is giving positive reinforcement to the other player for "good" behavior, and negative reinforcement for "bad" behavior. That is why it wins. My key point here is that this carrot-and-stick method is done individually rather than in groups. I want to see it evolve within a group.

I want to see an entire group of agents evolve to punish and reward other players according to how they behaved with the group. I believe that fascinating group dynamics could be observed in that scenario.

I programmed such a scenario a decade ago, but by writing an algorithm manually, not using deep RL. I want to do it using deep RL, but first I want to know whether there are existing attempts.

Does anyone know whether such research exists?

",25904,,2444,,1/12/2021 0:01,10/9/2021 17:07,Research into social behavior in Prisoner's Dilemma,,1,2,,,,CC BY-SA 4.0 25562,1,,,1/4/2021 11:05,,0,66,"

I was wondering which AI techniques and architectures are used in environments where predictions need to continually improve based on feedback from the user. So, let's take some kind of recommendation system, but not for a fixed set of $n$ products; rather for some problem with a much larger (or more structured) output space. It's initially trained, but should keep improving through the feedback and corrections applied by the user. The system should continue to improve its outcomes on-the-fly in production, with each interaction.

Obviously, (deep) RL seems to fit this problem, but can you really deploy this learning process to production? Is it really capable of improving results on-the-fly?

Are there any other techniques or architectures that can be used for that?

I'm looking for different approaches in general, in order to be able to compare them and find the right one for problems of that kind. Of course, there always is the option to retrain the whole network, but I was wondering whether there are some online, on-the-fly techniques that can be used to adjust the network?

",43542,,2444,,1/4/2021 15:19,1/4/2021 15:19,What are good techniques for continuous learning in production?,,0,4,,,,CC BY-SA 4.0 25563,1,,,1/4/2021 12:27,,1,27,"

When I look up the generalised delta rule equation for back-propogation, I am seeing two conflicting equations.

For example, here (slide 20), given $o$ (the output, defined in slide 18), $z$ (the activated output) and a target $t$, defined in slide 17, then:

$\frac{\delta E}{\delta Z} = o(1-o)(o-t)$

When I look for the same equation elsewhere, e.g. here (slide 14), it says, given $o$ the output and $y$ the label, then (using slightly different notation $\beta_k$):

$\beta_k = o_k(1-o_k)(y_k-o_k)$

I can see here that these two equations are almost the same, but not quite. One subtracts the output from the target, and one subtracts the target from the output.

The reason why I'm asking this is that I'm trying to do questions 29 and 30 of this paper, and they use the second equation ($\beta_k$), but my college notes (which I can't copy and paste due to copyright) define the equation according to the first one, $\frac{\delta E}{\delta Z}$. I'm wondering which way is correct: do you subtract the target from the obtained output, or vice versa?
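
For reference, here is my own attempt at reconciling them, assuming the squared error $E = \frac{1}{2}(t-o)^2$ and a sigmoid output $o = \sigma(z)$ (I am not sure these are exactly the assumptions the two sources make):

$$\frac{\partial E}{\partial z} = \frac{\partial E}{\partial o}\frac{\partial o}{\partial z} = (o - t)\,o(1-o), \qquad \beta = o(1-o)(t - o) = -\frac{\partial E}{\partial z},$$

so one expression looks like the gradient of the error and the other like its negative (the "delta" used in an update such as $w \leftarrow w + \eta \beta x$), but I would still like to know which convention is the intended one.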

",42926,,2444,,1/5/2021 0:31,1/5/2021 0:31,"For the generalised delta rule in back-propogation, do you subtract the target from the obtained output, or vice versa?",,0,1,,,,CC BY-SA 4.0 25564,1,,,1/4/2021 12:40,,0,32,"

I have a time series with both continuous and categorical features, and I want to do a prediction task.

I will elaborate:

The data is composed of 100 Hz samples of some voltages, kind of like an ECG signal, and of some categorical features such as "green", "na", and so on.

In total, the number of features can reach 300, of which most are continuous.

The prediction should take in a chunk of frames and predict a categorical variable for this chunk of frames.


I want to create a deep learning model that can handle both categorical and continuous features.

The best I can think of is two separate losses, like MSE and cross-entropy, and a hyperparameter to tune the trade-off between them, kind of like regularization (sketched roughly below).
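
To make that idea a bit more concrete, here is a rough Keras sketch of what I mean, assuming an autoencoder-style setup with one continuous head and one categorical head; the feature counts, layer sizes and weighting value are all made up:

from tensorflow.keras import layers, Model

N_CONT, N_CAT = 290, 10          # made-up continuous / categorical feature counts
LAMBDA = 0.3                     # hyperparameter trading off the two losses

inp = layers.Input(shape=(N_CONT + N_CAT,))
h = layers.Dense(64, activation="relu")(inp)
cont_out = layers.Dense(N_CONT, name="cont")(h)                      # continuous head, MSE
cat_out = layers.Dense(N_CAT, activation="softmax", name="cat")(h)   # categorical head, cross-entropy

model = Model(inp, [cont_out, cat_out])
model.compile(optimizer="adam",
              loss={"cont": "mse", "cat": "categorical_crossentropy"},
              loss_weights={"cont": LAMBDA, "cat": 1.0})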

Best I could find on this subject was this, with an answer from 2015.

I wonder if something better was invented, since then, or maybe just someone here knows something better.

",21645,,21645,,1/5/2021 10:47,1/5/2021 10:47,Correct way to work with both categorical and continuous features together,,0,4,,,,CC BY-SA 4.0 25567,2,,24666,1/4/2021 14:33,,0,,"

This should not matter that much as long as you do enough preprocessing?

But one could use e.g. the Facenet architecture to extract the embedding vector for each face of the same subject.

Then you could compute the covariance matrix and perform PCA on it. This would give you the most significant features with which you can decide which face to take.
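 
As a purely illustrative sketch of one possible way to use this (picking the face closest to the mean along the main direction of variation is just one heuristic), assuming 'embeddings' is an (n_images, d) array from a FaceNet-style model:

import numpy as np

def most_representative(embeddings):
    centered = embeddings - embeddings.mean(axis=0)
    cov = np.cov(centered, rowvar=False)      # covariance matrix of the embeddings
    eigvals, eigvecs = np.linalg.eigh(cov)    # PCA via eigendecomposition
    pc1 = eigvecs[:, -1]                      # direction of largest variance
    scores = centered @ pc1
    return int(np.argmin(np.abs(scores)))     # index of the most "central" face along PC1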

Alternatively, you could do something in the direction of the eigenfaces: https://en.wikipedia.org/wiki/Eigenface.

",42601,,,,,1/4/2021 14:33,,,,1,,,,CC BY-SA 4.0 25573,1,,,1/4/2021 16:10,,2,98,"

Say I have an Machine/Deep learning algorithm I developed on a desktop pc to achieve a real-time classification of time series events from a sensor. Once the algorithm is trained and performs good, I want to implement it on an low power embbeded system, with the same sensor, to classify events in real-time:

  • How can I know whether the low-power embedded system is fast enough to allow real-time classification with this algorithm (knowing it in advance would avoid having to implement and try multiple architectures)?
  • Machine/deep learning algorithms are usually developed in Python. Are there easy ways to transfer the code from Python to a more embeddable language?
",43239,,43239,,1/11/2021 8:49,1/11/2021 8:49,How to know if a real-time classifier is achivable in a low-power emdedded system?,,1,0,,,,CC BY-SA 4.0 25575,2,,25522,1/4/2021 19:36,,2,,"

Formally speaking $x_6$ is a function of $w_{16},\ w_{26}$ and $w_{36}$, that is $$x_6 =f(w_{16}, w_{26}, w_{36})=w_{16}y_1 + w_{26}y_2 + w_{36}y_3.$$ The derivative w.r.t. $w_{26}$ is $$\frac{\partial x_6}{\partial w_{26}}= \frac{\partial w_{16}y_1}{\partial w_{26}} +\frac{\partial w_{26}y_2}{\partial w_{26}} +\frac{\partial w_{36}y_3}{\partial w_{26}} = 0 +y_2 \frac{\partial w_{26}}{\partial w_{26}} + 0= y_2.$$ The first equality is obtained using the fact that the partial derivative is linear (so the derivative of the sum is the sum of the derivatives); the second equality comes from again from the linearity and from the fact that $w_{16}y_1$ and $w_{36}y_3$ are constants with respect to $w_{26}$, so their partial derivative w.r.t. this variable is $0$.
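
If you want to convince yourself numerically, here is a tiny finite-difference check (all values are arbitrary):

# Quick finite-difference check that d(x6)/d(w26) = y2
y1, y2, y3 = 0.3, -1.2, 0.7
w16, w26, w36 = 0.5, 0.1, -0.4
eps = 1e-6

x6 = lambda w: w16 * y1 + w * y2 + w36 * y3
numeric = (x6(w26 + eps) - x6(w26 - eps)) / (2 * eps)
print(numeric, y2)   # both are (approximately) -1.2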

Bonus

Not really asked in the original question, but since I'm here let me have a bit of fun ;).

Let's say $x_6$ is the output of the sixth node after you apply an activation function, that is $$x_6 =\sigma(f(w_{16}, w_{26}, w_{36}))=\sigma(w_{16}y_1 + w_{26}y_2 + w_{36}y_3).$$ You can compute the partial derivative applying the properties illustrated above, with the additional help of the chain rule $$\frac{\partial x_6}{\partial w_{26}}=\frac{\partial \sigma(w_{16}y_1 + w_{26}y_2 + w_{36}y_3)}{\partial w_{26}}=\sigma'\frac{\partial w_{16}y_1}{\partial w_{26}} +\sigma'\frac{\partial w_{26}y_2}{\partial w_{26}} +\sigma'\frac{\partial w_{36}y_3}{\partial w_{26}}=y_2\sigma'$$ $\sigma'$ denotes the derivative of sigma with respect to its argument.

",42424,,2444,,1/4/2021 22:42,1/4/2021 22:42,,,,0,,,,CC BY-SA 4.0 25576,1,,,1/4/2021 19:49,,1,49,"

I am re-implementing vpg and using Spinning Up as reference implementation. I noticed that the default epoch size is 4000. I also see cues in papers that big batch size is quite standard.

My implementation doesn't batch experience together; it just applies the update after every episode. It turns out my implementation is more sample-efficient than the reference implementation on simple problems (like CartPole or LunarLander), even though I haven't added the critic yet! Of course, this could be due to a number of reasons; for example, I've only done a parameter search on my own implementation.

But it would make sense anyway: a bigger batch size is generally considered better only because the GPU is faster at processing many samples in parallel. Is this the reason here? It would make sense, but it is surprising to me, as I thought sample efficiency is considered more important than computational efficiency in RL.

",43565,,,,,1/4/2021 19:49,Why do we use big batch/epoch size in policy gradient methods (vpg specifically)?,,0,0,,,,CC BY-SA 4.0 25577,2,,25534,1/4/2021 20:09,,1,,"

I am afraid that the model will infer from the background information that it shouldn't use to predict the plant diseases, what makes the problem worse is that some plant diseases only exist in one dataset and not the other

I am afraid that when I use the model on real-life conditions (say someone capturing an image with their phone) the model will be heavily biased towards predicting labels in the second dataset (the one with actual background) since the background is similar and it somehow associated the background with the inference process.

You are right to be concerned about this. However, you are not alone in having a neural network model that relies heavily on the background or other confounding factors for your classification task.

A small paper that addresses a similar issue is Context Augmentation for Convolutional Neural Networks. Table 1 from this paper shows how the model trained and tested on only the background performs better than model trained and tested on only the foreground. They address this by making a training set where the foreground objects are given many backgrounds. Given your imbalanced datasets, with some diseases only present in one dataset and not the other, I think context augmentation might work for you. The main barrier here is that you do not have segmentation maps for these. Are your datasets too large to segment yourself? You might consider automatic segmentation networks - especially for the images with white backgrounds.

As for your interpretability methods, you should not expect too much from them though some groups are able to get good results with them using GradCAM.

Here is an alternative idea that may also solve your problem: If you are trying to make a system that can be used in the field, is it possible to have the technician or photographer put a white board under the diseased leaf?

",21471,,,,,1/4/2021 20:09,,,,0,,,,CC BY-SA 4.0 25578,2,,25516,1/4/2021 20:19,,0,,"

It sounds like you may be looking for the A* search algorithm. It is a search algorithm, like DFS and BFS, but it will explore only the most promising branches based on a heuristic function you supply. The difficult part of implementing this is deciding on a low-cost, admissible heuristic.
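
A generic sketch of the algorithm is below; the neighbors and heuristic functions depend entirely on your problem, and heuristic(n) must never overestimate the true remaining cost (admissibility) for A* to stay optimal:

import heapq, itertools

def a_star(start, goal, neighbors, heuristic):
    counter = itertools.count()               # tie-breaker so the heap never compares nodes
    frontier = [(heuristic(start), next(counter), 0, start, None)]
    came_from, best_g = {}, {start: 0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:                 # already expanded via a cheaper path
            continue
        came_from[node] = parent
        if node == goal:                      # reconstruct path start -> goal
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for nxt, step_cost in neighbors(node):
            new_g = g + step_cost
            if new_g < best_g.get(nxt, float("inf")):
                best_g[nxt] = new_g
                heapq.heappush(frontier, (new_g + heuristic(nxt), next(counter), new_g, nxt, node))
    return None

Here neighbors(n) should yield (next_node, step_cost) pairs.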

Excited to reflect nbro's suggestion from comments.

",21471,,21471,,1/5/2021 16:16,1/5/2021 16:16,,,,6,,,,CC BY-SA 4.0 25584,1,,,1/5/2021 5:40,,1,35,"

I am curious about the inner workings of a Siamese network. So, let us suppose that I am using a triplet loss for my network, that I have instantiated a single CNN 3 times, and that there are 3 inputs to the network. During a forward pass, each of the branches will give me an embedding for its image, so I can compute the distances and calculate the loss, and the model is ready to propagate the gradients to update the weights.

The Question: How do these weights get updated during back-propagation? Since we are using 3 inputs and 3 branches of the same network, and we pass the inputs one by one (I suppose), how are the gradients updated? Is it done in series, i.e. one branch updates, then the second, and then the third? But wouldn't that be a problem, because each branch would try to update the weights based only on its own output? If it is done in parallel, then which branch is responsible for the gradient update? In short, I am unable to understand how the weights are updated in a Siamese network. Can someone please explain it in simpler terms?

",36062,,2444,,1/6/2021 12:13,1/6/2021 12:13,How do gradients are flown back into the Siamese network when branching is done?,,0,1,,,,CC BY-SA 4.0 25585,2,,20005,1/5/2021 9:24,,2,,"

In NEAT, the innovation of a node does not affect the evolution directly. Only the connection genes and their innovation will matter. So you can simply have whole numbers as IDs under each Genome / Network.

--EDIT-- (Complete reasoning)

In the original paper, it is clearly stated that the nodes from the better genome are taken during crossover and that only the connections are crossed over (by some method), hence innovation numbers are only needed for the connections. NEAT is connection-centric and does not care much about evolving nodes.

Adding to that, from basic neural network theory, the nodes will never matter much in a neural network, because all the calculation and the learning happens in the connections. Think about a regular feed-forward network: you only care about the weight matrix, which is a property of the connections rather than of the nodes, though both are present. Similarly, in a NEAT-generated network, the nodes will not matter, as all the learning takes place in the way these nodes are connected and in the weights of the network.

Further, the node list can easily be derived from the connection list, and hence marking the connections is enough.

",43484,,43484,,1/8/2021 17:03,1/8/2021 17:03,,,,0,,,,CC BY-SA 4.0 25586,1,25590,,1/5/2021 13:11,,6,686,"

I'm currently reading the paper Federated Learning with Matched Averaging (2020), where the authors claim:

A basic fully connected (FC) NN can be formulated as: $\hat{y} = \sigma(xW_1)W_2$ [...]

Expanding the preceding expression $\hat{y} = \sum_{i=1}^{L} W_{2, i \cdot } \sigma(\langle x, W_{1,\cdot i} \rangle)$, where $ i\cdot$ and $\cdot i$ denote the ith row and column correspondingly and $L$ is the number of hidden units.

I'm having a hard time wrapping my head around how it can be boiled down to this. Is this rigorous? Specifically, what is meant by the ith row and column? Is this formula for only one layer or does it work with multiple layers?

Any clarification would be helpful.

",43582,,43582,,1/6/2021 8:17,1/8/2021 17:56,How to express a fully connected neural network succintly using linear algebra?,,1,0,,,,CC BY-SA 4.0 25587,1,25594,,1/5/2021 15:15,,0,222,"

Further to my last question, I am training a custom entity of FOODITEM to be recognized by Spacy's Name Entity Recognition engine. I am following tutorials online, following is the advise given in most of the tutorials;

Load the model or create an empty model

We can create an empty model and train it with our annotated dataset or we can use the existing spacy model and re-train with our annotated data.

But none of the tutorials tell how/why to choose between the two options. Also, I don't understand how will the choice affect my final output or the training of the model.

How do I make the choice between a pre-trained model or a blank model? What are the factors to consider?

",43434,,,,,1/6/2021 5:02,Should we use a pre-trained model or a blank model for custom entity training of NER in spacy?,,1,0,,,,CC BY-SA 4.0 25588,1,,,1/5/2021 18:22,,3,180,"

I was going through the paper on U-Net. U-net consists of a contracting path followed by an expanding path. Both the paths use a regular convolutional layer. I understand the use of convolutional layers in the contracting path, but I can't figure out the use of convolutional layers in the expansive path. Note that I'm not asking about the transpose convolutions, but the regular convolutions in the expansive path.

",31749,,2444,,1/6/2021 17:17,1/6/2021 17:17,What is the use of the regular convolutional layer in expansion path of U-Net?,,1,0,,,,CC BY-SA 4.0 25589,1,25592,,1/5/2021 18:26,,2,107,"

Many authors of research papers in AI (e.g. arXiv) write their neural networks from the ground-up, using low-level languages like C++ to implement their theories. Can existing open source frameworks also be used for this purpose, or are their implementations too limited?

Can, for example, TensorFlow be used to craft an original network architecture that shows improvements on existing benchmarks? Can original mathematical work be coded into a high-level framework like TensorFlow such that original research on network architectures/approaches be demonstrated in a paper?

A quick search reveals many papers using C++ in their implementation:

",43587,,43587,,1/6/2021 2:31,1/6/2021 2:31,"Can TensorFlow, PyTorch, and other mainstream ML frameworks be used for research-grade work in AI?",,1,5,,,,CC BY-SA 4.0 25590,2,,25586,1/5/2021 18:47,,8,,"

The equation $$\hat{y} = \sigma(xW_\color{green}{1})W_\color{blue}{2} \tag{1}\label{1}$$ is the equation of the forward pass of a single-hidden layer fully connected and feedforward neural network, i.e. a neural network with 3 layers, 1 input layer, 1 hidden layer, and 1 output layer, where

  • the input layer is connected to the hidden layer (all scalar inputs are connected to every neuron in the hidden layer)
  • the hidden layer is connected to the output layer (all neurons in the hidden layer are connected to all neurons in the output layer

As an example, suppose that we have $N$ real-valued features, there are $L$ hidden units (or neurons) and $M$ output units, then the elements (feature vector and parameters) of equation \ref{1} would have the following shape

  • $x \in \mathbb{R}^{1 \times N}$
  • $W_\color{green}{1} \in \mathbb{R}^{N \times L}$
  • $W_\color{blue}{2} \in \mathbb{R}^{L \times M}$

$\sigma$ is an activation function that is applicable to all elements of the matrix separately (i.e. component-wise). So, $\hat{y} \in \mathbb{R}^{1 \times M}$.

The equation

$$ \hat{y} = \sum_{\color{red}{i} = 1}^{L} W_{\color{blue}{2}, \color{red}{i} \cdot} \sigma(\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle )\label{2}\tag{2} $$

is another way of writing equation \ref{1}.

Before going to the explanation, let's try to understand equation \ref{2} and its components.

  • $W_{\color{green}{1}, \cdot \color{red}{i}} \in \mathbb{R}^N$ is the $\color{red}{i}$th column of the matrix that connects the inputs to the hidden neurons, so it is a vector of $N$ elements (note that we sum over the number of hidden neurons, $L$).

  • Similarly, $ W_{\color{blue}{2}, \color{red}{i} \cdot} \in \mathbb{R}^M$ is also a vector, but, in this case, it is a row of the matrix $ W_{\color{blue}{2}}$ (rather than a column: why? because we use $\color{red}{i} \cdot$ instead of $\cdot \color{red}{i}$, which refers to the column).

  • So, $\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle$ is the dot (or scalar) product between the feature vector $x$ and the $\color{red}{i}$th column of the matrix that connects the inputs to the hidden neurons, so it's a number (or scalar). Note that both $x$ and $W_{\color{green}{1}, \cdot \color{red}{i}}$ have $N$ elements, so the dot product is well-defined in this case.

  • $ W_{\color{blue}{2}, \color{red}{i} \cdot} \sigma(\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle)$ is the product between a vector of shape $M$ and a number $\sigma(\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle )$. This is also well-defined. You can multiply a real-number with a vector, it's like multiplying the real-number with each element of the vector.

  • $\sum_{\color{red}{i} = 1}^{L} W_{\color{blue}{2}, \color{red}{i} \cdot} \sigma(\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle )$ is thus the sum of $L$ vectors of size $M$, which makes $\hat{y}$ also have size $M$, as in equation \ref{1}.

Now, the question is: is equation \ref{2} really equivalent to equation \ref{1}? This is still not easy to see because $xW_\color{green}{1}$ is a vector of shape $L$, but, in equation \ref{2}, we do not have any vector of shape $L$, but we have vectors of shape $N$ and $M$ (and the vectors of shape $M$ are summed $L$ times). First, note that $\sigma(xW_\color{green}{1}) = h\in \mathbb{R}^L$, so $hW_\color{blue}{2}$ are $M$ dot products collected in a vector (i.e. $\hat{y} \in \mathbb{R}^M$), where the $j$th element of $\hat{y}$ was computed as a summation of $L$ elements (a dot product of two vectors is the element-wise multiplication of the elements of the vectors followed by the summation of these multiplications). Ha, still not clear!

The easiest way (for me) to see that they are equivalent is to think that $\sigma(xW_\color{green}{1})$ is a vector of $L$ elements $\sigma(xW_\color{green}{1}) = \ell = [l_1, l_2, \dots, l_L]$. Then you know that to multiply $W_\color{blue}{2}$ by this vector $\ell$ (from the left), you actually perform a dot product between $\ell$ and each column of $W_\color{blue}{2}$. A dot product is essentially a sum, and that's why we sum in equation \ref{2}. So, essentially, in equation \ref{2}, we first multiply $l_1$ by the first row of $W_\color{blue}{2}$ (i.e. by all elements of the first row). Then we multiply $l_2$ by the second row of $W_\color{blue}{2}$. We do this for all $L$ rows, then we sum the rows (to conclude the dot product). So, you can think of equation \ref{2} as first performing all multiplications, then summing, rather than dot product-by-dot product.

So, in my head, I have the following picture. To simplify the notation, let $A$ denote $W_\color{blue}{2}$, so $A_{ij}$ is the element at the $i$th row and $j$th column of matrix $W_\color{blue}{2}$. So, we have the following initial matrix

$$ A = \begin{bmatrix} A_{11} & A_{12} & A_{13} & \dots & A_{1M} \\ A_{21} & A_{22} & A_{23} & \dots & A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ A_{L1} & A_{L2} & A_{L3} & \dots & A_{LM} \\ \end{bmatrix} = \begin{bmatrix} W_{\color{blue}{2}, \color{red}{1} \cdot} \\ W_{\color{blue}{2}, \color{red}{2} \cdot} \\ \vdots \\ W_\color{blue}{2, \color{red}{L} \cdot} \\ \end{bmatrix} $$

Then, in the first iteration of equation \ref{2}, we do the following

$$ \begin{bmatrix} l_1 A_{11} & l_1 A_{12} & l_1 A_{13} & \dots & l_1 A_{1M} \\ A_{21} & A_{22} & A_{23} & \dots & A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ A_{L1} & A_{L2} & A_{L3} & \dots & A_{LM} \\ \end{bmatrix} $$

In the second, we do the following

$$ \begin{bmatrix} l_1 A_{11} & l_1 A_{12} & l_1 A_{13} & \dots & l_1 A_{1M} \\ l_2 A_{21} & l_2 A_{22} & l_2 A_{23} & \dots & l_2 A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ A_{L1} & A_{L2} & A_{L3} & \dots & A_{LM} \\ \end{bmatrix} $$ Until we have

$$ \begin{bmatrix} l_1 A_{11} & l_1 A_{12} & l_1 A_{13} & \dots & l_1 A_{1M} \\ l_2 A_{21} & l_2 A_{22} & l_2 A_{23} & \dots & l_2 A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ l_L A_{L1} & l_L A_{L2} & l_L A_{L3} & \dots & l_L A_{LM} \\ \end{bmatrix} $$ Then we do a reduce sum across the rows to end the dot product (i.e. for each column we sum the elements in the rows). This is exactly equivalent to first performing the dot product between $\ell$ and the first column of $W_\color{blue}{2}$, then the second column, and so on.
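
If you prefer a numerical sanity check over the algebra, here is a small sketch (shapes and values are arbitrary; W2[i, :] plays the role of $W_{\color{blue}{2}, \color{red}{i} \cdot}$ and W1[:, i] the role of $W_{\color{green}{1}, \cdot \color{red}{i}}$) showing that the two formulations agree:

import numpy as np

N, L, M = 4, 5, 3                                      # inputs, hidden units, outputs
rng = np.random.default_rng(0)
x = rng.normal(size=(1, N))
W1 = rng.normal(size=(N, L))
W2 = rng.normal(size=(L, M))
sigma = lambda t: 1.0 / (1.0 + np.exp(-t))             # element-wise sigmoid

y_eq1 = sigma(x @ W1) @ W2                             # equation (1)
y_eq2 = sum(W2[i, :] * sigma(x[0] @ W1[:, i]) for i in range(L))   # equation (2)

print(np.allclose(y_eq1[0], y_eq2))                    # prints True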

",2444,,2444,,1/8/2021 17:56,1/8/2021 17:56,,,,0,,,,CC BY-SA 4.0 25591,1,25593,,1/5/2021 18:51,,2,210,"

I started reading up on SVM and very little is defined of what are support values. I reckon it's they are denoted as $\alpha$ in most formulations.

",43588,,2444,,1/6/2021 12:09,1/6/2021 12:09,What are support values in a support vector machine?,,1,0,,,,CC BY-SA 4.0 25592,2,,25589,1/5/2021 23:57,,3,,"

Your statement that researchers build their network from the ground-up using C++ or some other low level library couldn't be further from the truth.

You could take a look at this analysis showing the popularity of these two frameworks in the top ML conferences. The following Figure is taken from there.

In CVPR-2020, for example, TensorFlow and pytorch combined for over 500 papers! Furthermore, because the two most active research entities (Google and Facebook) are backing these two frameworks, they are used in some of the most impactful research studies.


I want to give some reasons that support the popularity of these frameworks, but first I'm going to rephrase your question a bit:

Why use TensorFlow/Pytorch in python rather than build your model on your own using C++?

Note: The reason I rephrased the question is that TensorFlow and PyTorch both have C++ APIs.

Why are these frameworks so popular in contrast to lower-level programming languages?

Some reasons are the following

  • Rapid prototyping. Languages like C++ have bloated syntaxes, require low-level operations (e.g. memory management) and cannot be run interactively. This means it takes someone much less time to create and test a model in Python than it does in C++.

  • No need to re-invent the wheel. Some operations are common in most networks (e.g. backpropagation), why re-implement them? Other functionalities are hard to implement on your own (e.g. parallel processing, GPU computation). Do data scientists need to have such a strong technical background to research neural networks?

  • Open-source. They benefit from being opensource and can offer a great deal of tools at your disposal for building neural networks. You want to add batchnorm to your network? No worries, just import it and add it in a single line! Also, they offer the perfect opportunity for sharing pretrained models.

  • They are optimized. These frameworks are optimized to run as fast as possible on GPUs (if available) or CPUs. It would be virtually impossible for someone to write code that runs as fast on his own.

",26652,,,,,1/5/2021 23:57,,,,1,,,,CC BY-SA 4.0 25593,2,,25591,1/6/2021 4:07,,2,,"

In the least-squares SVM (LS-SVM) the non-zero Lagrange multipliers ($\alpha$) are the support values. The corresponding data points are the support vectors. Johan Suykens explains this in Least Squares Support Vector Machines.

",5763,,2444,,1/6/2021 12:09,1/6/2021 12:09,,,,0,,,,CC BY-SA 4.0 25594,2,,25587,1/6/2021 5:02,,1,,"

The reason you would load a pre-existing model is that it offers something of value to your task (e.g. named entity recognition for food) and the cost of training it from scratch is not worth it. For example, to train GPT-3 from scratch would cost several million dollars. Typically someone will use a model like BERT and fine tune it. This is called transfer learning. With spaCy you will typically use en_core_web_sm which was trained on the OntoNotes corpus and includes named entities. Making a custom food NER using en_core_web_sm should be more accurate than making one from scratch. You should be able to build a good model with and without transfer learning fairly quickly if you have a GPU.
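
As a rough illustration, here is a minimal fine-tuning sketch assuming spaCy v2.1+'s training API (v3 replaced this with Example objects); the example sentence, character offsets and number of iterations are made up:

import random
import spacy

nlp = spacy.load("en_core_web_sm")          # or spacy.blank("en") for an empty model
ner = nlp.get_pipe("ner")
ner.add_label("FOODITEM")

TRAIN_DATA = [
    ("I had a bowl of ramen.", {"entities": [(16, 21, "FOODITEM")]}),
]

other_pipes = [p for p in nlp.pipe_names if p != "ner"]
with nlp.disable_pipes(*other_pipes):       # only update the NER weights
    optimizer = nlp.resume_training()       # use nlp.begin_training() for a blank model
    for _ in range(30):
        random.shuffle(TRAIN_DATA)
        losses = {}
        for text, annotations in TRAIN_DATA:
            nlp.update([text], [annotations], drop=0.35, sgd=optimizer, losses=losses)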

",5763,,,,,1/6/2021 5:02,,,,0,,,,CC BY-SA 4.0 25595,1,,,1/6/2021 5:38,,4,414,"

sklearn's documentation of the method roc_auc_score states that the parameter multi_class can take the value 'OvR' (which stands for One-vs-Rest) or 'OvO' (which stands for One-vs-One). These values are only applicable to multi-class classification problems.

Does anyone know in what particular cases we would use OvR as opposed to OvO? In the general academic literature, is there a preference given to one?

",33734,,2444,,1/6/2021 22:00,1/6/2021 22:00,"When computing the ROC-AUC score for multi-class classification problems, when should we use One-vs-Rest and One-vs-One?",,0,2,,,,CC BY-SA 4.0 25596,1,,,1/6/2021 5:46,,-1,129,"

I'm using MATLAB 2019, Linux, and UNet (a CNN specifically designed for semantic segmentation). I'm training the network to classify all pixels in an image as either cell or background to get segmentations of cells in microscopic images. My problem is that the network is classifying every single pixel as background, and seems to just be outputting all zeroes. The validation accuracy improves a little at the very start of the training but then plateaus at around 60% for the majority of the training time. The network doesn't seem to be training very well and I have no idea why.

Can anyone give me some hints about what I should look into more closely? I just don't even know where to start with debugging this.

Here's my code:

    % Set datapath
    datapath = '/scratch/qbi/uqhrile1/ethans_lab_data';
    
    % Get training and testing datasets
    images_dataset = imageDatastore(strcat(datapath,'/bounding_box_cropped_resized_rgb'));
    load(strcat(datapath,'/gTruth.mat'));
    labels = pixelLabelDatastore(gTruth);
    [imdsTrain, imdsVal, imdsTest, pxdsTrain, pxdsVal, pxdsTest] = partitionCamVidData(images_dataset,labels);
    
    % Weight segmentation class importance by the number of pixels in each class
    pixel_count = countEachLabel(labels); % count number of each type of pixel
    frequency = pixel_count.PixelCount ./ pixel_count.ImagePixelCount; % calculate pixel type frequencies
    class_weights = mean(frequency) ./ frequency; % create class weights that balance the loss function so that more common pixel types won't be preferred
    
    % Specify the input image size.
    imageSize = [512 512 3];
    
    % Specify the number of classes.
    numClasses = 2;
    
    % Create network
    lgraph = unetLayers(imageSize,numClasses);
    
    % Replace the network's classification layer with a pixel classification
    % layer that uses class weights to balance the loss function
    pxLayer = pixelClassificationLayer('Name','labels','Classes',pixel_count.Name,'ClassWeights',class_weights);
    lgraph = replaceLayer(lgraph,"Segmentation-Layer",pxLayer);
    
    %% TRAIN THE NEURAL NETWORK
    
    % Define validation dataset-with-labels
    validation_dataset_with_labels = pixelLabelImageDatastore(imdsVal,pxdsVal);
    
    % Training hyper-parameters: edit these settings to fine-tune the network
    options = trainingOptions('adam', 'LearnRateSchedule','piecewise', 'LearnRateDropPeriod',10, 'LearnRateDropFactor',0.3, 'InitialLearnRate',1e-3, 'L2Regularization',0.005, 'ValidationData',validation_dataset_with_labels, 'ValidationFrequency',10, 'MaxEpochs',3, 'MiniBatchSize',1, 'Shuffle','every-epoch');
    
    % Set up data augmentation to enhance training dataset
    aug_imgs = {};
    numberOfImages = length(imdsTrain.Files);
    for k = 1 : numberOfImages
        % Apply cutout augmentation
        img = readimage(imdsTrain,k);
        cutout_img = random_cutout(img);
        imwrite(cutout_img,strcat('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset/img_',int2str(k),'.tiff'));
    end
    aug_imdsTrain = imageDatastore('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset');
    % Add other augmentations
    augmenter = imageDataAugmenter('RandXReflection',true, 'RandXTranslation',[-10 10],'RandYTranslation',[-10 10]);
    % Combine augmented data with training data
    augmented_training_dataset = pixelLabelImageDatastore(aug_imdsTrain, pxdsTrain, 'DataAugmentation',augmenter);
    
    % Train the network
    [cell_segmentation_nn, info] = trainNetwork(augmented_training_dataset,lgraph,options);
    
    save cell_segmentation_nn
",9983,,,,,2/9/2021 1:03,Semantic segmentation CNN outputs all zeroes,,2,0,,,,CC BY-SA 4.0 25597,1,25598,,1/6/2021 7:26,,4,189,"

I am currently working on my master's thesis and am going to apply Deep SARSA as my DRL algorithm. The problem is that there are no datasets available, and I guess that I should generate them somehow. Dataset generation seems to be a common theme in this specific subject, as stated in [1]:

When a dataset is not available, learning is performed through experience.

I am wondering how to generate datasets when the environment is not as simple as a tic-tac-toe or a maze problem and what the experience means.

PS: The environment consists of 15 mobile users and 3 edge servers, each of which covers a number of mobile users. Each mobile user might generate a computationally heavy task at the beginning of each timestep, and can either process the task itself or request its associated edge server to do the processing. If the associated edge server is not capable of processing the task, for some reason, it requests a nearby edge server to lend it a hand. The optimization objective (reward) is to reduce time and energy consumption (multi-objective optimization). Each server has a DRL agent that makes offloading decisions.

I'd really appreciate your suggestions and help.

",43578,,2444,,1/8/2021 0:32,1/8/2021 7:39,How should I generate datasets for a SARSA agent when the environment is not simple?,,1,0,,,,CC BY-SA 4.0 25598,2,,25597,1/6/2021 8:05,,4,,"

I am wondering how to generate datasets when the environment is not as simple as a tic-tac-toe or a maze problem

There is no difference in concept, which is why tic-tac-toe and maze problems are used to teach.

As you have noted, the main difference between reinforcement learning (RL) and supervised learning is that RL does not use labeled datasets. If you are using SARSA then you would not expect to use any record of previous experience either because SARSA is designed to work on-policy and online - which means that data needs to be generated during training. Training data for SARSA is typically stored only temporarily before being used, or is used immediately (you might keep a log of it for analysis or to help document your thesis, but that log will not be used for further training by the agent). This is different to Q-learning and DQN, which could in theory make use of longer-term stored experience.

You have two main choices for acquiring data:

  • Use a real environment. In your case, set up 15 mobile users and 3 edge servers. Instrument the environment to collect state and reward data for the agent. Implement the agent as the real decision maker in this environment.

  • Simulate the environment. Write a simulation that models user behaviour and server loading. Instrument that to provide state and reward data, and integrate your learning agent with it. Typically the agent will call the environment's step function, passing the action choice as an argument and receiving reward and state data back (see the sketch just after this list).
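
Here is a minimal sketch of that agent-environment loop, using a tabular SARSA update for brevity (swap in a neural network for Deep SARSA). The environment dynamics, state, reward, number of actions and all hyperparameters are placeholders, not a model of your actual system:

    import random

    class EdgeOffloadEnvSketch:
        # Hypothetical simulated environment: reset() returns a state,
        # step(action) returns (next_state, reward, done).
        def reset(self):
            self.t = 0
            return self._observe()

        def step(self, action):
            self.t += 1
            reward = -random.random()      # placeholder for time/energy cost
            done = self.t >= 100           # placeholder episode length
            return self._observe(), reward, done

        def _observe(self):
            return (self.t,)               # placeholder state

    def epsilon_greedy(q, state, n_actions, eps=0.1):
        if random.random() < eps:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: q.get((state, a), 0.0))

    # On-policy SARSA: experience is generated and consumed as training runs.
    env, q, n_actions, alpha, gamma = EdgeOffloadEnvSketch(), {}, 4, 0.1, 0.99
    for episode in range(500):
        s = env.reset()
        a = epsilon_greedy(q, s, n_actions)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = epsilon_greedy(q, s2, n_actions)
            target = r + (0.0 if done else gamma * q.get((s2, a2), 0.0))
            q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
            s, a = s2, a2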

If you can simulate the environment, this is likely to be preferable to you since you will likely use less compute resources (than 3 servers and 15 mobile phones) and can run the training faster than real time. Deep reinforcement learning can use a large amount of experience to converge on near-optimal policies, and fast simulations can help because they generate experience faster than reality.

You can also do both approaches. Train an initial agent in simulation, then implement the real version once it reaches a good level of performance in simulation. You can even have the agent continue to learn and refine behaviour in production. Given that you are working with SARSA, this may be an important part of the intent of your project, that the agent continues to adapt to changes in user behaviour and server load over time. In fact this is a key advantage of SARSA over Q-learning, that it should be more reliable and safe to use in such a continuous learning scenario deployed to production.

and what the experience means.

The experience in reinforcement learning is the record of states, actions and rewards that the agent encounters during training.

",1847,,1847,,1/8/2021 7:39,1/8/2021 7:39,,,,3,,,,CC BY-SA 4.0 25599,1,,,1/6/2021 9:05,,1,21,"

I'm trying to replicate a paper from Google on view synthesis/lightfields from 2019: DeepView: View Synthesis with Learned Gradient Descent and this is the PDF.

Basically, the input to the neural network comes from a set of cameras whose number is variable, and the output is a stack of images whose number is also variable. For that they use both a Fully Convolutional Network and Learned Gradient Descent.

I don't know if I am understanding this correctly: (in each LGD iteration) They use the same network for all depth slices AND all views. Is this correct?

This is the LGD network; it's not very important to the question, but it helps you understand the setup. You can see at least 3 LGD iterations. Part b) is just the calculation they do in the "green gradient boxes" of part a).

This is the inside of the CNNs. On each LGD iteration they use basically the same architecture, but the weights are different per iteration.

For me the confusing part is that they represent each view as a different network, but they don't represent each depth slice as a different network. As you can see in the next image, they do say that they use the same parameters for all depth slices, and that the order of the views doesn't matter, so it must be that they're also reusing the parameters for all views, right? So, if I understand correctly, this is a matter of reusing the same model for all depths and all views. BTW, note that the maxpool-like operation is over the views.

I also have a question on the practicalities of the implementation. I'll be implementing this with normal 2D convolution layers, so if I want them to run independently of the views and depth slices, I guess I could concatenate views and depth slices in the "batch" dimension? I mean, before the maximum-over-k operation, and then reuse the output.
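
To illustrate what I mean, here is a small PyTorch-style sketch of folding views and depth slices into the batch dimension (all shapes and the single conv layer are placeholders, not the paper's architecture):

    import torch

    B, V, D, C, H, W = 1, 4, 8, 16, 32, 32                  # hypothetical sizes
    x = torch.randn(B, V, D, C, H, W)

    cnn = torch.nn.Conv2d(C, 16, kernel_size=3, padding=1)  # stand-in for the shared CNN

    # Shared weights across every (view, depth) slice: fold them into the batch axis.
    y = cnn(x.view(B * V * D, C, H, W))
    y = y.view(B, V, D, 16, H, W)

    # "Maxpool over views": reduce across the view dimension.
    pooled = y.max(dim=1).values                             # shape (B, D, 16, H, W)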

This is what they say:

Thanks

",43595,,43595,,1/6/2021 15:11,1/6/2021 15:11,"In the DeepView paper, do they use the same FCN for all depth slices AND all views?",,0,0,,,,CC BY-SA 4.0 25600,1,25621,,1/6/2021 9:09,,3,438,"

Consider an image that contains one can (or bottle, or any similar oval object), which has texts all over it. In the image below, I have many bottles, but you can assume that each image only contains one such object.

As we can see, in each can, the text can flow from left to right, and any OCR system may miss the text on the left and right sides of the can, as they are not aligned with the camera angle.

So, are there any solutions for this, like preprocessing the image in a certain way, so that we can read the text, or unwarping this round object into a flat one? (If there is any Python program that can solve this problem, could you please share it with me?)

",37980,,2444,,1/8/2021 0:40,1/8/2021 0:40,"In OCR, how should I deal with the warped text on the sides of oval objects?",,1,1,,,,CC BY-SA 4.0 25601,1,,,1/6/2021 9:16,,12,8216,"

Starting from my own understanding, and scoped to the purpose of image generation, I'm well aware of the major architectural differences:

  • A GAN's generator samples from a relatively low dimensional random variable and produces an image. Then the discriminator takes that image and predicts whether the image belongs to a target distribution or not. Once trained, I can generate a variety of images just by sampling the initial random variable and forwarding through the generator.

  • A VAE's encoder takes an image from a target distribution and compresses it into a low dimensional latent space. Then the decoder's job is to take that latent space representation and reproduce the original image. Once the network is trained, I can generate latent space representations of various images, and interpolate between these before forwarding through the decoder which produces new images.

What I'm more interested in are the consequences of these architectural differences. When would I choose one approach over the other, and why? (For example, if GANs typically produce better quality images, any ideas why that is so? Is it true in all cases or just some?)

",16871,,16871,,1/6/2021 12:30,1/6/2021 12:30,What are the fundamental differences between VAE and GAN for image generation?,,1,3,,,,CC BY-SA 4.0 25602,2,,25588,1/6/2021 10:01,,3,,"

The point is that in the expansive path you have two forms of information:

  1. the information from the contracting path, which includes all high-level features extracted from the original image.
  2. the information from the skip-connections, which copy a cropped version of the feature maps from the contracting path. Because, as we move forward through the expansive path, these come from progressively earlier stages of the contracting path, they are progressively richer in detail.

Intuitively you can think of it as this: high-level features help the network tell which areas to group together, while details help it tell where each group starts and ends at the pixel level.

The idea is to combine these two forms of information, i.e. the high-level features and the details, optimally. To do this, you need trainable layers that learn this optimal combination. This is where the convolution layers come into play.
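
As a concrete sketch of one decoder step (PyTorch-style, with made-up channel sizes): the upsampled high-level features and the copied skip-connection features are concatenated, and a couple of trainable convolutions learn how to merge them:

    import torch
    import torch.nn as nn

    up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)   # made-up channel sizes
    merge = nn.Sequential(
        nn.Conv2d(128, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
    )

    high_level = torch.randn(1, 128, 32, 32)   # from the previous decoder stage
    skip = torch.randn(1, 64, 64, 64)          # copied from the contracting path

    x = up(high_level)                         # (1, 64, 64, 64)
    x = torch.cat([skip, x], dim=1)            # (1, 128, 64, 64): details + semantics
    x = merge(x)                               # learned combination of the two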

",26652,,,,,1/6/2021 10:01,,,,3,,,,CC BY-SA 4.0 25603,2,,25506,1/6/2021 10:28,,0,,"

There is no general method to detect a change in the fitness landscape, since changes can be very local and can occur in just a small area of the fitness landscape. For this reason, nature-inspired optimization algorithms usually maintain a diversified population to cope with environmental changes. A common mechanism is to use several sub-populations and ensure that these sub-populations do not overlap. Also, there are some heuristics that can help the algorithms detect changes. For instance, if you are using multiple sub-populations, you can re-evaluate one element of each sub-population every few generations to find out whether an environmental change has occurred or not. There are also more comprehensive heuristics for change detection, like the one proposed in [R. Mukherjee, S. Debchoudhury, S. Das, Modified differential evolution with locality induced genetic operators for dynamic optimization]. In my opinion, use simple methods: for instance, keep a small but diverse set of sentinel population members and re-evaluate them every k generations to detect changes.
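
A minimal sketch of that simple sentinel-based approach (the objective, the number of sentinels and the period k are all made up):

    import random

    def fitness(x):
        # Placeholder objective; in a dynamic problem this function changes over time.
        return -sum(v * v for v in x)

    def change_detected(sentinels, cached_fitness, tol=1e-9):
        # Re-evaluate the sentinels and flag a change if any value moved.
        return any(abs(fitness(s) - f) > tol for s, f in zip(sentinels, cached_fitness))

    sentinels = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(5)]
    cached = [fitness(s) for s in sentinels]
    k = 10
    for gen in range(1, 201):
        # ... normal variation/selection steps of the algorithm go here ...
        if gen % k == 0 and change_detected(sentinels, cached):
            # react: increase diversity, re-initialise sub-populations, etc.
            cached = [fitness(s) for s in sentinels]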

",43208,,,,,1/6/2021 10:28,,,,1,,,,CC BY-SA 4.0 25604,2,,25505,1/6/2021 10:44,,0,,"

Using different population sizes at different stages of the optimization process can be beneficial. With large population sizes you can effectively explore the landscape to find promising areas; large populations help in finding global optima or high-fitness local optima. However, large populations require more fitness evaluations and waste computational resources. With a small population you can effectively exploit a previously found promising area and obtain high-accuracy solutions. For this reason, some works suggest using a large population at the start of the algorithm and gradually decreasing its size, like [Improving the search performance of SHADE using linear population size reduction]. Also, some dynamic optimization methods use dynamic population sizes. For instance, they create sub-populations when it is necessary to discover more optima or to cover the landscape, and they shrink their sub-populations when they detect a change or a new promising uncovered area in the landscape.
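
For illustration, a sketch of the linear population size reduction schedule used by such methods (the initial/minimum sizes and the evaluation budget are made-up numbers):

    def lpsr_population_size(nfe, max_nfe, n_init=100, n_min=4):
        # Population shrinks linearly from n_init to n_min as the budget of
        # fitness evaluations (nfe out of max_nfe) is consumed. Made-up defaults.
        return round(n_init + (n_min - n_init) * nfe / max_nfe)

    for nfe in (0, 25_000, 50_000, 75_000, 100_000):
        print(nfe, lpsr_population_size(nfe, max_nfe=100_000))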

",43208,,,,,1/6/2021 10:44,,,,1,,,,CC BY-SA 4.0 25605,2,,25427,1/6/2021 11:07,,1,,"

I'm not familiar with your game, so I can't tell you what a good heuristic would be in your specific case, but I can give you some advice on how to look for a good heuristic function.

As a rule of thumb, the heuristic function for a MiniMax algorithm is best kept simple and efficient, so you can get deeper into the tree. But it depends on how costly it is to compute the heuristic function compared to simulating moves in the game.

If the heuristic takes longer than simulating a game move, it might be worth simplifying it so it runs faster and you can look ahead further. This often leads to more emergent and advanced strategies that are hard to express mathematically. An extreme example of a simple heuristic would be the current player score minus the opponent's score. Since scores only change when someone lands on a bonus tile, many paths you take down the tree would have equal value, so you need to be able to look ahead many moves to find a non-zero heuristic and be able to prune parts of the tree. But because the heuristic is so fast to compute, you can do this and discover more non-obvious strategies, simply by brute force simulation. This leads to more emergent behavior, and would tell you more about different ways to play the game (if that is your goal).
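
To make the score-difference heuristic concrete, here is a tiny, purely illustrative sketch: a made-up "take 1 or 2 from a pile" game plugged into a plain depth-limited minimax (no pruning, and no relation to your actual game):

    class PileGame:
        # Toy game: players alternately take 1 or 2 items; taking the last item scores a point.
        def __init__(self, pile=7, scores=(0, 0), turn=0):
            self.pile, self.scores, self.turn = pile, scores, turn
        def is_terminal(self):
            return self.pile == 0
        def legal_moves(self):
            return [m for m in (1, 2) if m <= self.pile]
        def apply(self, m):
            s = list(self.scores)
            if self.pile - m == 0:
                s[self.turn] += 1
            return PileGame(self.pile - m, tuple(s), 1 - self.turn)
        def score(self, player):
            return self.scores[player]

    def heuristic(state, player):
        # Current player's score minus the opponent's score.
        return state.score(player) - state.score(1 - player)

    def minimax(state, depth, player, maximizing=True):
        if depth == 0 or state.is_terminal():
            return heuristic(state, player)
        values = [minimax(state.apply(m), depth - 1, player, not maximizing)
                  for m in state.legal_moves()]
        return max(values) if maximizing else min(values)

    print(minimax(PileGame(), depth=6, player=0))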

If simulating a game move takes longer than your current heuristic, it's probably not worth making the heuristic faster since the game simulation is the dominating factor in how deep you can go down the tree. This case is much more difficult because it means you have to find optimal strategies (and how to express them as a heuristic function) yourself. This is more of an art than a science - ask any chess grandmaster. I'd look up existing literature on the game and see if any existing strategies can be translated into a heuristic function. If there is no literature (e.g. because the game is new or unpopular), you could spend some time playing it yourself to discover what works. Alternatively (and perhaps more interesting), you could use a simple heuristic function with more emergent behavior, increase the time the MiniMax algorithm has to make moves, and play against your NPC opponent a few times to see what strategies it discovers. Or even have two NPCs with different heuristic functions play against each other. Then try to incorporate those into your final heuristic function. This is also a way to determine which heuristic function is better, if you have multiple candidates.

There are ways to optimize the heuristic function automatically using machine learning (specifically reinforcement learning), but it's probably not worth opening that can of worms in your case.

",40060,,,,,1/6/2021 11:07,,,,0,,,,CC BY-SA 4.0 25609,2,,25601,1/6/2021 12:25,,5,,"

GANs generally produce better photo-realistic images but can be difficult to work with. Conversely, VAEs are easier to train but don’t usually give the best results.

I recommend picking VAEs if you don’t have a lot of time to experiment with GANs and photorealism isn’t paramount.

There are exceptions such as Google’s VQ-VAE 2 which can compete with GANs for image quality and realism. There is also VAE-GAN and VQ-VAE-GAN.

As a note, GANs and VAEs are not specifically for images and can be used for other data types/structures.

",5763,,,,,1/6/2021 12:25,,,,1,,,,CC BY-SA 4.0 25610,2,,25486,1/6/2021 14:47,,1,,"

I couldn't understand your question completely; however, I think you are making a slight mistake. Let's look at the following code from Russell and do the pruning step by step:
Assume you are in D and have traversed both of its children; Alpha at D becomes 20. We then return to B, and Beta becomes 20 (note that Alpha is -Inf in D). We go to E, then L, and then back to E. 30 is greater than Beta, so the rest is pruned. Note that Alpha remains -Inf in E, though its value here isn't important. We go back to B, and the value of Beta remains unchanged at 20. If we go to F and return back to B, Beta remains unchanged and is still 20 (note that Alpha is still -Inf). We return to A; Alpha becomes 20, and Beta remains +Inf.
So, considering the code, the max node does not get the Alpha value from its children; rather, it decides on its own by setting Alpha = max(Alpha, returned value of its child) after visiting each child. The same is true for Beta.
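
For reference, here is a generic alpha-beta sketch in Python that makes this explicit: each node only tightens its own Alpha (max node) or Beta (min node) using the values returned by its children. The tiny tree at the bottom is made up just so the snippet runs:

    import math

    class Node:
        def __init__(self, value=None, children=()):
            self._value, self._children = value, list(children)
        def children(self):
            return self._children
        def value(self):
            return self._value

    def alphabeta(node, depth, alpha, beta, maximizing):
        if depth == 0 or not node.children():
            return node.value()
        if maximizing:
            best = -math.inf
            for child in node.children():
                best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
                alpha = max(alpha, best)   # the max node updates its own Alpha
                if alpha >= beta:
                    break                  # prune the remaining children
            return best
        best = math.inf
        for child in node.children():
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)         # the min node updates its own Beta
            if alpha >= beta:
                break
        return best

    # Made-up example tree with two min nodes under a max root.
    tree = Node(children=[Node(children=[Node(10), Node(20)]),
                          Node(children=[Node(30), Node(5)])])
    print(alphabeta(tree, depth=2, alpha=-math.inf, beta=math.inf, maximizing=True))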

",43208,,43208,,1/6/2021 14:57,1/6/2021 14:57,,,,0,,,,CC BY-SA 4.0 25614,1,,,1/7/2021 7:59,,4,110,"

How can I update the observation probability for a POMDP (or HMM), in order to have a more accurate prediction model?

The POMDP relies on observation probabilities that match an observation to a state. This poses an issue as the probabilities are not exactly known. However, the idea is to make them more accurate over time. The simplest idea would be to count the appeared observation as well as the states and use Naive Bayes estimators.

For example, $P(s' \mid a,s)$ is the probability that a subsequent state $s'$ is reached, given that the action $a$ and the previous state $s$ are known: In that simple case I can just count and then apply e.g. Naive Bayes estimators.

But, if I have an observation probability $P(z \mid s')$ (where $z$ is the observation) depending on a state, it's not as trivial to just count up the observations and the states, as I cannot say that a state really was reached (maybe I made an observation, but I was in a different state than intended). I can just make an observation and hope I was in a certain state, but I cannot say whether, e.g., I was in $s_1$ or maybe $s_2$. I think the update of the observation probability is only possible in hindsight.

So, what are good approaches to estimate my state?

",21157,,2444,,1/8/2021 0:09,1/8/2021 0:09,How to update the observation probabilities in a POMDP?,,0,2,,,,CC BY-SA 4.0 25615,1,,,1/7/2021 8:20,,3,1806,"

To my understanding, transfer learning helps to incorporate data from other related datasets and achieve the task with less labelled data (maybe in 100s of images per category).

Few-shot learning seems to do the same, with maybe 5-20 images per category. Is that the only difference?

In both cases, we initially train the neural network with a large dataset, then fine-tune it with our custom datasets.

So, how is few-shot learning different from transfer learning?

",43623,,2444,,1/7/2021 17:22,5/16/2021 9:09,How is few-shot learning different from transfer learning?,,1,2,,,,CC BY-SA 4.0 25616,1,,,1/7/2021 10:11,,2,334,"

I have sentences with some grammatical errors , with no punctuations and digits written in words... something like below:

As you can observe, a proper noun, winston, isn't capitalized in the Sample column. 'People' is spelled wrong and there is no punctuation in the Sample column. The date in the first row isn't in the right format. I have millions of rows like this and want to train a model to learn punctuation and corrections. Can a single BERT or T5 handle this task, or is the only option to train one model for each task? Thanks in advance.

",43626,,43626,,1/13/2021 9:28,4/28/2021 17:25,T5 or BERT for sentence correction/generation task?,,0,1,,,,CC BY-SA 4.0 25617,1,,,1/7/2021 12:34,,2,57,"

I have 2 small images. They are basically the same, but differ in rotation and size. I need to estimate the parameters of an affine transform that makes them similar. What network structure would be suitable for this task? For example, those based on convolutional networks did badly, because the pictures are too small.

",43631,,,,,2/7/2021 21:01,Can you give me a piece of advise of the network sructure that would be suitable for my task?,,2,2,,,,CC BY-SA 4.0 25618,1,25626,,1/7/2021 14:39,,1,338,"

In the original U-Net paper, it is written

The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function.

...

$$ E=\sum_{\mathbf{x} \in \Omega} w(\mathbf{x}) \log \left(p_{\ell(\mathbf{x})}(\mathbf{x})\right) \tag{1}\label{1} $$

where $w(\mathbf{x})$ is a weight map (I'm not interested in that part right now), and $p_{k}(\mathbf{x})$ is

$$ p_{k}(\mathbf{x})=\exp \left(a_{k}(\mathbf{x})\right) /\left(\sum_{k^{\prime}=1}^{K} \exp \left(a_{k^{\prime}}(\mathbf{x})\right)\right) $$

The pixel-wise softmax with $a_{k}(\mathbf{x})$ being the activation in feature channel $k$ at pixel position $\mathbf{x}$ and $K$ the number of classes. Then $\ell(\mathbf{x})$ from $p_{\ell(\mathbf{x})}$ is the true label of each pixel, i.e. if the pixel at position $\mathbf{x}$ is part of class $1$, then $p_{\ell(\mathbf{x})}$ is equal to $p_1(\mathbf{x})$.

As far as I understand, $-E$ should be the cross-entropy function. Right? I've already done the math for the binary case (ignoring $w(\mathbf{x})$) and it seemed to be equal.

",43632,,2444,,1/8/2021 17:19,1/8/2021 17:19,Have I understood the loss function from the original U-Net paper correctly?,,1,0,,,,CC BY-SA 4.0 25619,2,,25617,1/7/2021 15:50,,1,,"

Instead of NNs, you can use the RANSAC algorithm to calculate the homography matrix, but first you need to find feature points. However, if your images are blob-like, you may not get such successful results. Here are some presentations for better understanding: cs.umd notes and csail.mit notes

(Also, there might be better image processing tools.)

",41615,,,,,1/7/2021 15:50,,,,0,,,,CC BY-SA 4.0 25621,2,,25600,1/7/2021 18:25,,2,,"

There are many papers on this but the following is a good start:

You mentioned you do not want to do a panoramic view but that has more than one meaning. If I assume you mean you do not want to rotate the can while taking multiple photos, or you don't want to take multiple photos from different angles, you could try a pericentric lens. This would require some image processing to do the unwrapping. More resolution is needed as the wrapping is much more severe. The advantage though is that you will have a single image of the full cylindrical surface and won't miss any features or text.

",5763,,5763,,1/7/2021 21:19,1/7/2021 21:19,,,,0,,,,CC BY-SA 4.0 25622,1,,,1/8/2021 0:04,,2,150,"

I am using DDPG to solve an RL problem. The action space is given by the Cartesian product $[0,20]^4\times[0,6]^4$. The actor is implemented as a deep neural network with an output dimension equal to $8$ with tanh activation.

So, given a state s, an action is given by a = actor(s), where a contains real numbers in [-1,1]. Next, I map this action a into a valid action valid_a that belongs to the action space $[0,20]^4\times[0,6]^4$. Then, I use valid_a to calculate the reward.

My question is: how does the DDPG algorithm know about this mapping that I am doing? In what part of the DDPG algorithm should I specify this mapping? Should I provide a bijective mapping to guarantee that the DDPG algorithm can distinguish good actions from bad ones?

",37642,,,,,1/8/2021 10:50,How does DDPG algorithm know about my action mapping function?,,1,0,,,,CC BY-SA 4.0 25625,1,,,1/8/2021 6:17,,2,73,"

I am new to reinforcement learning. May I ask a simple (and maybe a bit silly) question here? I am trying to use the "one-step actor-critic" method to train a robot on a gridworld. Let's focus on the actor as there is nothing puzzling me for the critic.

I used a feedforward ANN with one hidden layer to parameterize the action preference function (i.e. the $h$ function). The ANN has one bias node in the input layer to connect to all the hidden nodes. Therefore, there are three sets of weights associated with the $h$ function -- the weights connecting the inputs to the hidden nodes (let's call it $W1$ matrix), the weights connecting the hidden nodes to the outputs (let's call it $W2$ matrix), and the weights connecting the bias node to the hidden nodes (let's call it $c$ vector).

I used the exponential soft-max as the policy function (i.e. the $\pi$ function). That is,

$$\pi(a|s,W1,W2,c) = \displaystyle\frac{e^{h(a,s,W1,W2,c)}}{\sum_be^{h(b,s,W1,W2,c)}}.$$

The inputs to the ANN are the state/feature vector, and the outputs of the ANN are the action preference values (i.e. the $h$ values). With these action preference values, the $\pi$ function can compute the probabilities for each action to be chosen.

It is easy to derive that

$$\nabla_{W1/W2/c} \log \pi(a|s,W1,W2,c) = \nabla_{W1/W2/c} h(a,s,W1,W2,c)-\sum_b \big[\nabla_{W1/W2/c}h(b,s,W1,W2,c)\pi(b|s,W1,W2,c)\big]$$

In the above, $/$ means "or".

My puzzle is, I found that $\nabla_{W2} h(\cdot,s,W1,W2,c) \equiv \sigma$, where $\sigma$ is the vector of the sigmoid activation values. That implies, $\nabla_{W2}$ is independent of actions!? Consequently, $\nabla_{W2} \log \pi(\cdot|s,W1,W2,c) \equiv 0$, which implies that $W2$ will not be updated at all...

Where did I go wrong?

(Actually, the above puzzle of mine extends to any policy gradient method as long as $\nabla \log \pi$ is involved and a feedforward ANN is used to approximate the action preferences.)

",43644,,2444,,1/9/2021 1:13,1/12/2021 3:19,$\nabla \log \pi$ with respect to some parameters constantly being zero,,0,0,,,,CC BY-SA 4.0 25626,2,,25618,1/8/2021 7:02,,1,,"

Yes, $E$ is the cross-entropy function and a direct generalization of the binary case.

For the binary case, the probability of belonging to class $1$ is given by a sigmoid function $\sigma(x)$ of the output $x$, and the probability of belonging to class $0$ is $1 - \sigma(x)$.

Therefore, the binary cross-entropy will give: $$ -\sum_i \left(l_i \log \sigma(x_i) + (1 - l_i) \log (1 - \sigma(x_i))\right) $$ where the sum is over all samples in the dataset. Because $l_i$ is a binary variable, one of the two terms is zero for each sample. Gradient descent forces the model to predict the true label with more confidence.

For the multiclass case, the output is a $K$-vector and the softmax function forces its elements to sum to one: $$ \sum_{k = 1}^{K} p_k = 1 $$ The true label is a one-hot encoded vector, with $1$ at the position of the true label and $0$ elsewhere. The generalization of the binary case is the categorical cross-entropy: $$ -\sum_i \sum_{k = 1}^{K} l_{ik} \log p_k(x_i) $$ Because the labels are one-hot, only the term of the true class survives for each sample, which is exactly the $\log p_{\ell(\mathbf{x})}(\mathbf{x})$ term in $E$ (up to the weight map $w(\mathbf{x})$). Minimizing this cross-entropy forces the classifier to predict the true label more confidently and, since the softmax outputs sum to one, it simultaneously pushes the probabilities of all other classes down.
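
A tiny numerical sketch of this pixel-wise softmax cross-entropy (made-up logits $a_k(\mathbf{x})$ for $K=3$ classes at 4 pixel positions; the weight map $w(\mathbf{x})$ is ignored):

    import numpy as np

    logits = np.array([[ 2.0, 0.5, -1.0],      # made-up activations a_k(x)
                       [ 0.1, 1.2,  0.3],
                       [-0.5, 0.0,  2.5],
                       [ 1.0, 1.0,  1.0]])
    labels = np.array([0, 1, 2, 0])            # true class l(x) of each pixel

    p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # pixel-wise softmax
    cross_entropy = -np.log(p[np.arange(len(labels)), labels]).sum() # only true-class terms
    print(cross_entropy)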

",38846,,,,,1/8/2021 7:02,,,,0,,,,CC BY-SA 4.0 25627,1,25636,,1/8/2021 9:23,,4,3451,"

From the tidbits I understand of neural networks (NNs), the loss function measures the difference between the predicted output and the expected output of the NN. I am following this tutorial; the losses are included at line #81 in the nlp.update() function.

I am getting losses in the range 300-100. How to interpret them? What should be the ideal output of this losses variable? I went through Spacy's documentation, but nothing much is written there about losses. Also, please let me know the links to relevant theories to understand this in general.

",43434,,5763,,1/9/2021 7:19,1/9/2021 7:19,How to understand 'losses' in Spacy's custom NER training engine?,,1,1,,,,CC BY-SA 4.0 25628,2,,25573,1/8/2021 9:49,,1,,"

Regarding your first point, it depends on what neural network you would like to use, the sensor temporal resolution, and the capabilities of the embedded system. You can figure out the number of operations required for a forward pass of your network, then when combined with the internal clock of the embedded system, you can calculate the approximate time it would take for one classification event in real time.

A good explanation is given here What is the computational complexity of the forward pass of a convolutional neural network?

If you have the computational complexity of your network and the number of CPU cycles per second, you can roughly approximate the time.
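
As a back-of-the-envelope sketch (the layer shape and the device throughput below are invented numbers, only to show the arithmetic):

    def conv2d_macs(h_out, w_out, c_out, k, c_in):
        # Multiply-accumulate count for one conv layer (stride/padding are
        # folded into the output size); a rough counting convention only.
        return h_out * w_out * c_out * k * k * c_in

    macs = conv2d_macs(h_out=112, w_out=112, c_out=64, k=3, c_in=32)   # hypothetical layer
    ops_per_second = 2e9          # assumed effective ops/s of the embedded CPU
    print(macs, "MACs ->", 2 * macs / ops_per_second, "seconds (very rough)")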

",43651,,43651,,1/8/2021 10:00,1/8/2021 10:00,,,,0,,,,CC BY-SA 4.0 25629,1,,,1/8/2021 10:00,,0,77,"

I'm using Kalman filter approaches and I've just implemented the extended Kalman filter (EKF) on my object's 2D trajectory. However, there is a host of alternative approaches that may fit better, like the Unscented Kalman Filter (UKF), particle filters, adaptive filtering, etc.

How can I choose the most suitable algorithm for my case? In addition, are there algorithms that can predict more than one step ahead?

",43650,,43650,,1/8/2021 11:48,1/8/2021 11:48,Which is the best algorithm to predict the trajectory of a vehicle using lat/lon data?,,0,5,,,,CC BY-SA 4.0 25630,1,,,1/8/2021 10:35,,3,270,"

Consider our parametric model $p_\theta$ for an underlying probabilistic distribution $p_{data}$.

Now, the likelihood of an observation $x$ is generally defined as $L(\theta|x) = p_{\theta}(x)$.

The purpose of the likelihood is to quantify how good the parameters are. How can the probability density at a given observation, $p_{\theta}(x)$, measure how good the parameters $\theta$ are?

Is there any relation between the goodness of parameters and the probability density value of an observation?

",18758,,18758,,12/20/2021 22:09,12/20/2021 22:09,How can a probability density value be used for the likelihood calculation?,,1,0,,,,CC BY-SA 4.0 25631,2,,25622,1/8/2021 10:50,,1,,"

What I would recommend doing is allowing your network to output any real number and then clipping the output. For instance, I was working with an agent whose action had to be an angle in $[0, 2\pi]$ and a value in $[0, 1]$. If the network outputted e.g. 10 in the first dimension, then this would just be clipped to $2\pi$.

This way the agent only learns about actions within the action space and the weights of the network would eventually be adjusted to only output actions within this action space, provided that the boundaries aren't the optimal actions.
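
As a concrete sketch for the action space in your question (NumPy; the affine rescaling is just one reasonable choice, and the clip handles any raw output outside the valid range):

    import numpy as np

    low = np.zeros(8)
    high = np.array([20.0] * 4 + [6.0] * 4)   # the [0,20]^4 x [0,6]^4 space from the question

    def to_valid_action(raw):
        # raw: actor output, nominally in [-1, 1]^8 (tanh)
        scaled = low + (raw + 1.0) * 0.5 * (high - low)
        return np.clip(scaled, low, high)

    # Example raw output; the last value is deliberately out of range to show clipping.
    print(to_valid_action(np.array([1.0, -1.0, 0.0, 0.5, 1.0, -0.5, 0.2, 2.0])))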

",36821,,,,,1/8/2021 10:50,,,,0,,,,CC BY-SA 4.0 25632,1,,,1/8/2021 11:07,,1,225,"

I'm studying machine learning and I came across a challenging question.

The answer is 2. But, based on my ML notes, all of them seem correct. Where am I going wrong?

",43653,,43653,,1/8/2021 13:25,6/12/2021 12:02,"If the training data are linearly separable, which of the following $L(w)$ has less optimum answer for $w$, when $y = w^Tx$?",,2,18,,,,CC BY-SA 4.0 25633,2,,25630,1/8/2021 11:11,,6,,"

The probability density is used to 'measure how good' the parameters are because it is a natural way of quantifying if these parameters are good for the observed data.

Also, as the notation often causes some confusion, $L(\theta | x)$ denotes the probability of all of your observed data, not just one value. Also the "$|$" may cause confusion as it looks like we are conditioning on $x$ but this is not the case - it may be better practice to use $L(\theta; x)$ which is the notation used when I was learning likelihood. Further, as you have written $L(\theta | x) = p_\theta(x)$ I would like to clarify that if we are being precise in our definitions then this is only correct if you have one observed data point, assuming that you meant $p_\theta(x)$ is the density of $X$.

In my example below I use $p_\mu(x)$ to denote the density of the normal distribution with mean parameter $\mu$, but the density of the likelihood is the product of all the densities (because we assumed iid data). It is crucial that you understand that in general the likelihood is the probability of your observed data and not just the density or the product of densities as you may not always have iid and so it will not always boil down to taking the product of some densities.

The idea of maximum likelihood is to maximise the (log-)likelihood for a given set of data. This means that we need to choose a probability distribution that is parameterised by some parameters $\boldsymbol{\theta}$ and then optimise the parameters such that the likelihood is maximised.

Assuming we don't remove any constants then this makes intuitive sense as maximising the likelihood would be maximising the probability that the data came from this distribution -- i.e. the data we observed is most likely to have come from the given distribution with the given parameters. This is important as it means we then have the most likely model of our data which allows us to use this distribution to make inference about our data.

As an example, imagine if I had some iid data $x_1, x_2, ..., x_n \sim \mathcal{N}(0, 1)$. Now I could try to fit a $\mathcal{N}(\mu, 1)$ to this data and optimise for $\mu$. If I chose e.g. $\mu = -1000000$ then the likelihood (again assuming we don't remove any constants) would be $\approx 0$. If I chose a value of $\mu = 0.1$ then the likelihood would be much higher because this parameter is closer to the true parameter value.

To see why it is higher, recall that the likelihood for iid data is given by $$\prod_{i=1}^n p_\mu(x_i)$$ and if we evaluated our likelihood at $\mu=-10000000$ then you're going to be taking the product of lots of numbers that are $\approx 0$ - if you think about the bell curve shape of a Normal distribution that is centred at $-10000000$ with variance 1 then the density at the true $x_i$ values (recalling they are simulated from a unit Normal) would be approx $0$ - whereas if we evaluated the $x_i$ at the density of a Normal distribution centred at $0.1$ then the density will be non-zero and so your likelihood will have a higher value.
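
If it helps, here is a small numerical sketch of that comparison (the seed, the sample size and the candidate values of $\mu$ are arbitrary):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = rng.normal(loc=0.0, scale=1.0, size=100)       # iid data from N(0, 1)

    def log_likelihood(mu):
        # Sum of log densities of N(mu, 1) at the observed points.
        return norm.logpdf(x, loc=mu, scale=1.0).sum()

    print(log_likelihood(0.1))        # close to the true mean -> relatively large
    print(log_likelihood(-1000000))   # far from the true mean -> hugely negative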

To summarise, the density value can be used to measure how good parameters are for a set of data as maximising wrt the parameters is analogous to maximising the probability that your data arose from said distribution.

As an aside, note that the definition of likelihood is the probability of observing your data under some assumed distribution. For discrete random variables this is fine, but for continuous distributions we have to be a more subtle. For any continuous random variable $X$ we have $\mathbb{P}(X=x) = 0$. However, for a very small $\delta$ we can say that

$$\mathbb{P}(x - \frac{\delta}{2} < X \leq x + \frac{\delta}{2}) = \int_{x - \frac{\delta}{2}}^{x + \frac{\delta}{2}} f_X(x) dx \approx \delta p_X(x) \; ;$$

you can think of this approximation by visualising an integral and recalling that an integral represents the area under the curve, and so for small $\delta$ this integral can be approximated by taking the area of a rectangle which is width $\times$ height, where the width is $\delta$ and the height is $f_X(x)$. This justifies our use of the density function for continuous random variables. Note that typically in maximum likelihood we omit any multiplicative constants as they don't depend on the parameters which is what happens with the $\delta$ from this justification.

",36821,,36821,,1/8/2021 12:53,1/8/2021 12:53,,,,1,,,,CC BY-SA 4.0 25634,1,,,1/8/2021 11:49,,2,106,"

In Barton and Sutton's book, Reinforcement Learning: An Introduction (2nd edition), an expression, on page 289 (equation 12.2), introduced the form of the $\lambda$-return defined as follows

$$G_t^{\lambda} = (1-\lambda)\sum_{n=1}^{\infty} \lambda^{n-1}G_{t:t+n} \label{12.2}\tag{12.2}$$

with the truncated return defined as

$$ G_{t:t+n} \doteq R_{t+1} +\gamma R_{t+2} + \ldots + \gamma^{n-1} R_{t+n} + \gamma^{n}\hat{v}(S_{t+n}, \mathbf{w}_{t+n-1}) \label{12.1}\tag{12.1}$$

However, slightly later in the text, page 290 (equation 12.4), the update algorithm for the offline $\lambda$-return algorithm is defined as

$$ \mathbf{w}_{t+1} \doteq \mathbf{w}_{t}+\alpha\left[G_{t}^{\lambda}-\hat{v}\left(S_{t}, \mathbf{w}_{t}\right)\right] \nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right), \quad t=0, \ldots, T-1 \label{12.4}\tag{12.4} $$

My question is: how do we bootstrap the truncated returns in the update algorithm?

The way the truncated return is currently defined can not plausibly be used, since we would not have access to $\mathbf{w}_{t+n-1}$, as we are in the process of finding $\mathbf{w}_{t+1}$. I suspect $\mathbf{w}_{t}$ is used for bootstrapping in all returns, but that would alter the definition of the truncated return which I just wanted to clarify.


And as a follow-up question: What weights are used for bootstrapping in the online $\lambda$-return algorithm described on page 298?

I assume it's either $\mathbf{w}_{t-1}^{h}$ or $\mathbf{w}_{h-1}^{h-1}$. It's briefly mentioned that the online $\lambda$-return algorithm performs slightly better than the offline one at the end of the episode, which leads me to believe the latter is used; otherwise the two algorithms would be identical.

Any insight into either question would be great.

",42514,,42514,,1/9/2021 12:43,1/9/2021 12:43,How does bootstrapping work with the offline $\lambda$-return algorithm?,,0,0,,,,CC BY-SA 4.0 25635,2,,21911,1/8/2021 13:41,,2,,"

Your bias formula is a bit incorrect. You should subtract a true value, not an estimator.

  1. $\mathrm{b}(\hat{\theta}) \stackrel{\text { def }}{=} \mathbb{E}[\hat{\theta}]-\theta$

  2. $\mathrm{b}\left(\max _{a} Q\right)=\mathbb{E}\left[\max _{a} Q\right]-\max _{a} q=\mathbb{E}\left[\max _{a} Q\right]-\max _{a} \mathbb{E}[Q]$

  3. $\mathbb{E}\left[\max _{a} Q\right] \geq \max _{a} \mathbb{E}[Q] \Rightarrow \mathrm{b}\left(\max _{a} Q\right) \geq 0$

",42983,,2444,,1/19/2021 11:22,1/19/2021 11:22,,,,0,,,,CC BY-SA 4.0 25636,2,,25627,1/8/2021 14:50,,2,,"

A critical goal of training a neural network is to minimize the loss. Loss is not explained for spaCy because it is a general concept for machine learning and deep learning. Loss is not specific to spaCy and although there are some finer details I don't believe that is your inquiry.

In general, to understand loss functions, I recommend the following resources:

If you like videos watch:

",5763,,5763,,1/8/2021 15:01,1/8/2021 15:01,,,,0,,,,CC BY-SA 4.0 25641,2,,17098,1/8/2021 19:27,,-1,,"

Maybe just use simple ConvNets (pre-trained, perhaps) and train them on images of the teacher at the blackboard. You could use a GAN to remove the teacher and complete the rest of the image (http://stanford.edu/class/ee367/Winter2018/fu_guan_yang_ee367_win18_report.pdf), but that would be too troublesome.

The best way would be to take real-time video chunks, use a convolutional network to detect the shapes you want, and return bounding boxes for the appropriate shapes (for localization) and their 'classification' - whether a straight line, curved line, or some other user-defined shape. You could also choose to use a YOLO (You Only Look Once) technique. You can check out: https://towardsdatascience.com/object-detection-using-deep-learning-approaches-an-end-to-end-theoretical-perspective-4ca27eee8a9a

You also won't have to deal with the pesky teacher with the above method (assuming they don't stand in one place and constantly block a part of the blackboard). These methods are near SOTA and would be far more effective than conventional algorithms. Not to mention that using Keras is a piece of cake, with a giant community and endless resources to help you in case you get stuck on some problem. Since it is easy to use, you can set up a prototype in almost no time.

Beginner's guide and introduction: https://machinelearningmastery.com/object-recognition-with-deep-learning/

A research paper from arXiv: https://arxiv.org/pdf/1807.05511

Training with YOLO: https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/

",36322,,36322,,1/8/2021 19:38,1/8/2021 19:38,,,,3,,,,CC BY-SA 4.0 25642,2,,25617,1/8/2021 19:42,,1,,"

Just use the imgaug library to apply a 'zoom' augmentation to the images, and a ConvNet (or even an MLP) should have no problem with this task. Zooming in on the image should be more than enough, as long as your zoom factor is large enough.

So apply "zoom" of the same factor on all images, make them bigger, and Convolutional Neural Networks would do fine. Also, If color is not an important feature, then you would be better off rescaling images b/w 0 and 1 and making them GrayScale

",36322,,,,,1/8/2021 19:42,,,,0,,,,CC BY-SA 4.0 25643,1,,,1/8/2021 19:48,,0,27,"

I would like some references of works that try to understand the functioning of any kind of RNN in natural language processing tasks. They can be any work that tries to explain the functioning of the model by studying the structure of the model itself. I have the feeling that it is very common for researchers to use models, but there is still little theory about how they work in solving natural language processing tasks.

",36175,,2444,,1/9/2021 1:37,1/9/2021 1:37,Is there a reference that describes Recurrent Neural Networks for NLP tasks?,,0,3,,,,CC BY-SA 4.0 25645,1,,,1/8/2021 20:21,,3,88,"

I am trying to understand the solution to part 4 of problem 3 from the midterm exam 6.867 Machine learning: Mid-term exam (October 15, 2003).

For reproducibility, here is problem 3.

We consider here linear and non-linear support vector machines (SVM) of the form: $$ \begin{equation} \begin{aligned} \min w_{1}^{2} / 2 & \text { subject to } y_{i}\left(w_{1} x_{i}+w_{0}\right)-1 \geq 0, \quad i=1, \ldots, n, \text { or } \\ \min \mathbf{w}^{T} \mathbf{w} / 2 & \text { subject to } y_{i}\left(\mathbf{w}^{T} \Phi_{i}+w_{0}\right)-1 \geq 0, \quad i=1, \ldots, n \end{aligned} \end{equation} $$ where $\Phi_{i}$ is a feature vector constructed from the corresponding real-valued input $x_{i}$. We wish to compare the simple linear SVM classifier $\left(w_{1} x+w_{0}\right)$ and the non-linear classifier $\left(\mathbf{w}^{T} \Phi+w_{0}\right)$, where $\Phi=\left[x, x^{2}\right]^{T}$.

Here is part 4 of the same problem.

In general, is the margin we would attain using scaled feature vectors $\Phi=\left[2 x, 2 x^{2}\right]^{T}$

  • greater
  • equal
  • smaller
  • any of the above

The correct answer is the first (greater). Why is that the case?

",43665,,43665,,1/9/2021 19:30,1/9/2021 19:30,"Why is the margin attained with $\Phi=\left[2 x, 2 x^{2}\right]^{T}$ greater than the margin attained with $\Phi=\left[x, x^{2}\right]^{T}$?",,0,3,,,,CC BY-SA 4.0 25646,2,,25632,1/8/2021 20:58,,1,,"

First things first: what is an optimal $w$? In this case, it is supposed to be not only the one that minimizes the empirical/sample loss, but also non-trivial, as we shall soon see. Now inspect the loss functions: we see a term $-y_iw^Tx_i$ coming up. What exactly is this term? It can be anything. The correct term would have been $-y_i(w^Tx_i + b)$, or at least I will assume this term, but the reasoning is exactly the same without it.

Now this term is well defined, since $w^Tx_i + b$ is proportional to the signed Euclidean distance of a point $x_i$ from the hyperplane $w^Tx + b=0$ (the proof for this can be found easily). So if the datapoint is supposed to have $y_i=-1$ but according to the classifier it gives $w^Tx_i + b = l_i > 0$, then one can see $-y_il_i > 0$, hence $\max (0, -y_i l_i) = -y_i l_i$, a positive loss, and the same applies to the case when $y_i = 1$ and $w^Tx_i + b = l_i < 0$.

We see another assumption: the datapoints are linearly separable. This means there exists a classifier (or a hyperplane) which can completely separate the 2 classes. Now, if one inspects the loss functions, $1, 3, 4$ all have one goal: to make $\max (0, -y_i l_i) = 0, \forall i$. A point to note is that $1$ does this by directly reducing $\sum_{i=1}^n \max (0, -y_i l_i)$, $3$ does this using the $0-1$ loss, i.e. if the instance is wrongly classified (or $-y_il_i > 0$) then a $\frac{1}{n}$ loss is incurred, else $0$, while $4$ incurs a loss of $\frac{1}{n}(w^Tx_i + b)^2$ for each wrong classification. It is very important to note that the solutions of all these 3 problems are the same $w^*,b^*$ which classifies the dataset correctly, which makes the problems equivalent. (This may not hold for some very similar-looking loss functions, so it is a point to take care about; problems are equivalent only if they result in the same solution.)

In $2$, first, $C \rightarrow \infty$ is a problematic statement: not only does it become mathematically unsound (at least for me), it would also mean $2$ is the same as $1,3,4$. Why? $C$ is a weighting factor: it decides how much priority the algorithm minimizing the loss will give to the term $C \sum_{i=1}^n \max(0,-y_il_i)$. If, say, $C=0$, then you are basically optimizing $||w||_2^2$, whose optimal value is at $w=0$; hence we get nothing useful, as optimizing $||w||_2^2$ has no connection with our objective of reducing misclassifications. But if $0.5||w||_2^2 + C \sum_{i=1}^n \max(0,-y_il_i)$ is used, the algorithm gives some weight to minimizing both $||w||_2^2$ and the misclassifications. If $C$ is very small, the algorithm will prefer to minimize $||w||_2^2$ rather than $C \sum_{i=1}^n \max(0,-y_il_i)$, thus giving a solution which may not be optimal for the dataset. But if $C$ is very large, the algorithm will mostly try to minimize $C \sum_{i=1}^n \max(0,-y_il_i)$, while still giving some weight to minimizing $||w||_2^2$, and hence may not find an optimal hyperplane. If $C \rightarrow \infty$, it is mathematically ill-defined (as per my knowledge), so I won't go into it.

But all of the aforementioned explanation (i.e. the tradeoff between optimizing two different objectives) is useful only when there are some additional constraints, like using $\max(0,1-y_il_i)$ instead of $\max(0,-y_il_i)$, where making $w$ bigger actually makes sense as it also increases $l_i$ (i.e. the same hyperplane denoted by a larger $w$, say $kw$, makes $l_i$ larger and hence $-l_iy_i$ smaller if the point is correctly classified, even though it is still the same hyperplane), and the $||w||_2^2$ term opposes such an increase. This is a very famous formulation of the loss, used in SVMs.

But the challenging part of this question is the 'linearly separable' assumption, together with the missing additional constraint (which is present in SVMs). So if $w^*,b^*$ incurs $0$ classification error, so will $kw^*, b^*$, where $k$ is any positive scaling factor. And since $C$ is very large, the second term in $2$ must be $0$; thus any $kw^*, b^*$ will work to make the second term $0$. But the first term now becomes $0.5k^{2}||w^*||_2^2$, where $w^*$ denotes the optimal hyperplane and hence is fixed. Thus the first term will choose $|k| \rightarrow 0$, or basically $k=0$, to minimize the first term, and the second term then simply becomes $C\sum_{i=1}^n \max(0,-y_ib)$, which can be made $0$ by choosing $b=0$. Thus, a $0$ loss when $kw^*=0,b^*=0$: a trivial solution, and definitely not optimal at all, as this holds for all linearly separable datasets.

I assumed an extra $b$ term to leverage linear separability; without it the reasoning becomes easier and you can follow the exact same line of argument, although the problem might not be linearly separable anymore. The solution may not seem very elegant, but this is a standard line of reasoning in optimization problems, where the optimization variable is changed from $w \rightarrow k$, with $w \in \mathbb{R^n}$ and $k \in \mathbb{R}$.

In short, for $2$ the solution produced will be $w^*=0, b^*=0$, which minimizes the loss but is clearly trivial and not optimal at all.

",,user9947,,user9947,1/13/2021 10:53,1/13/2021 10:53,,,,0,,,,CC BY-SA 4.0 25647,2,,25632,1/8/2021 21:51,,0,,"

Since the data is linearly separable, the linear model $y = w^Tx$ will be able to perfectly classify all the examples. That means that the loss functions $L_1(w), L_3(w)$ and $L_4(w)$ will have a value of 0 (since all examples are correctly classified). For the loss $L_2(w)$, the second term will be 0 if all examples are correctly classified. The first term of $L_2(w)$, \begin{equation} \frac{1}{2}||w||^2_2, \end{equation} will not be minimized to 0 (unless the optimal $w = \mathbf{0}$), so $L_2(w) > 0$. The optimizer will try to push $L_2(w)$ towards 0, so, because of the penalty on $w$, it will try to find the best tradeoff between shrinking $w$ and the loss due to misclassification, and the resulting $w$ may not be the best one for achieving optimal classification.

",20339,,20339,,1/8/2021 21:57,1/8/2021 21:57,,,,0,,,,CC BY-SA 4.0 25653,1,,,1/9/2021 8:48,,1,284,"

In a continuous action space (for instance, in PPO, TRPO, REINFORCE, etc.), during training, an action is sampled from the random distribution with $\mu$ and $\sigma$. This results in an inherent exploration. However, during testing, when we no longer need to explore but exploit, the action should be deterministic, i.e. just $\mu$, right?

",32517,,32517,,1/9/2021 22:14,1/9/2021 22:14,Are actions deterministic during testing in continuous action space PPO?,,0,6,,,,CC BY-SA 4.0 25656,2,,25596,1/9/2021 15:35,,0,,"

I have never used MATLAB for ML before, so it is difficult for me to understand all your code. My first association with your problem is class imbalance. Since you seem to have got a handle on that, the problem could be dying ReLU or bloated activations. To check if the ReLU is dying, you could look at the activations of the early layers of your network. If many values are zero, it is probably dying ReLU.

",43632,,,,,1/9/2021 15:35,,,,0,,,,CC BY-SA 4.0 25657,1,25667,,1/9/2021 16:22,,3,513,"

I would like to employ DQN to solve a constrained MDP problem. The problem has constraints on the action space. At different time steps until the end, the available actions are different. The possibilities are as below:

  • 0, 1, 2, 3, 4
  • 0, 2, 3, 4
  • 0, 3, 4
  • 0, 4

Does this mean I need to learn 4 different Q networks for these possibilities? Also, correct me if I am wrong, it looks like if I specify the action size is 3, then it automatically assumes the actions are 0, 1, 2, but, in my case, it should be 0, 3, 4. How shall I implement this?

",23707,,2444,,1/9/2021 20:08,1/9/2021 20:33,How to use DQN when the action space can be different at different time steps?,,1,0,,,,CC BY-SA 4.0 25665,1,,,1/9/2021 19:27,,2,45,"

In a Markov Decision Process, is it possible that there exists no "dominated action"?

I define a dominated action the following way: we say that $(s,a)$ is a dominated action, if $\forall \pi, a \notin \text{argmax}\ q^{\pi}(s,.)$, where $\pi$ are policies.

For now, I am only considering the cases where all q-values are distinct and therefore the max is always unique. I also only consider the case of deterministic policies (mappings from state space to action space).

We can consider MDP in which each state has at least 2 actions available to get rid of the corner cases where there is only one possible policy.

I am struggling to find a counter-example or a proof.

",43682,,43682,,1/9/2021 20:23,1/9/2021 20:23,"Does there necessarily exist ""dominated actions"" in a MDP?",,0,3,,,,CC BY-SA 4.0 25667,2,,25657,1/9/2021 20:25,,2,,"

There are two relevant neural network designs for DQN:

  • Model q function directly $Q(s,a): \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, so neural network has concatenated input of state and action, and outputs a single real value. This is arguably the more natural fit to Q learning, but can be inefficient.

  • Model all q values for given state $Q(s,\cdot): \mathcal{S} \rightarrow \mathbb{R}^{|\mathcal{A}|}$, so neural network takes input of current state and outputs all action values related to that state as a vector.

For the first architecture, you can decide which actions to evaluate by how you construct the minibatch. You pre-filter to the allowed actions for each state.

For the second architecture, you must post-filter the action values to those allowed by the state.
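
A minimal sketch of that post-filtering (NumPy; the Q-values are made up, and the allowed-action list corresponds to one of the cases in the question):

    import numpy as np

    q_values = np.array([0.3, 1.2, -0.4, 0.9, 0.1])   # made-up network output Q(s, .) for actions 0..4
    allowed = [0, 3, 4]                               # actions valid in the current state

    masked = np.full_like(q_values, -np.inf)
    masked[allowed] = q_values[allowed]
    greedy_action = int(np.argmax(masked))            # always one of the allowed actions
    print(greedy_action)                              # -> 3 here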

There are other possibilities for constructing variable-length inputs and outputs to neural networks - e.g. using RNNs. However, these are normally not worth the extra effort. A pre- or post- filter on the actions for a NN that can process the whole action space (including impossible actions) is all you usually need. Don't worry that the neural network may calculate some non-needed or nonsense values.

",1847,,1847,,1/9/2021 20:33,1/9/2021 20:33,,,,4,,,,CC BY-SA 4.0 25669,2,,25596,1/10/2021 0:13,,1,,"

Similar to other answers, I don't know Matlab that well but you could try the following steps to debug your problem.

Make sure you can overfit to a single instance

From your dataset, pull out a single image with a good amount of true positives in it. Duplicate that image B times (where B = batch size) and then try to train your network with only that small dataset. If you can't overfit to a single instance, then something is really wrong and you should validate all functional aspects of your network. If you can overfit, then it's probably more of an algorithmic or data imbalance issue.

Validate Functional Aspects of the Network

Make sure your input data and labels are correct.

Validate that your images are being correctly fed into the network. You can do this by printing out the images right before they go into the training function. For the labels, make sure to manually inspect a few labels to be sure that they correctly match up with the images.

Make sure that your loss function is correct.

Add a unit test or two to your loss function to validate that it is doing what it should be doing. Create a simple example that you can easily validate.

Validate the rest of the non-algorithmic functionality

Anything that isn't a design choice should be validated. Make sure that the weights are the correct sizes, each layer has the correct number of weights, the intermediate features have the correct shape, etc.

Data Imbalance

If you were able to overfit to a single image and validated the functional aspects of your network, then you may be facing a data imbalance issue. Look at your dataset and see what % of your instances are true positives vs. true negatives. If you have an extreme imbalance (like 10%/90%) or something like that, then build a dataset that is more balanced and see if you can fit your data. If you can fit the data with that more balanced dataset, then there are plenty of ways to fix your data imbalance issue. Google around for data imbalance and you should get a few good ideas. Some include focal loss, upsampling, etc.

Check your receptive field

The receptive field of a network is basically the area of the input that the network can look at when producing the output for a specific region. This is controlled by the size of the convolution filters, stride, etc. If the structures you are segmenting don't fit within the receptive field of the model, then you might be saturating the loss function. tl;dr: try playing around with kernel sizes.

",17408,,,,,1/10/2021 0:13,,,,0,,,,CC BY-SA 4.0 25670,1,,,1/10/2021 0:54,,1,201,"

I am currently trying to write a CNN from scratch, but I don't understand how to feed the information from a max-pooling layer to the next convolutional layer. Specifically, I don't know what to do with the 6 filtered and pooled images from the first convolutional and max-pooling layers. How do I feed those images into the next convolutional layer?

",43686,,2444,,1/11/2021 0:44,1/11/2021 0:44,How do you pass the image from one convolutional layer to another in a CNN?,,1,0,,,,CC BY-SA 4.0 25671,1,25680,,1/10/2021 2:27,,3,95,"

I am wondering what the parameter $y$ in the function $g(y,\mu,\sigma)=\frac{1}{(2\pi)^{1/2}\sigma}e^{-(y-\mu)^2/(2\sigma^2)}$ stands for in Section 6 (page 14) of the paper introducing the REINFORCE family of algorithms.

Drawing an analogy to Equation 4 of the same paper, I would guess that it refers to the outcome (i.e. sample) of sampling from a probability distribution parameterized by the parameters $\mu$ and $\sigma$. However, I am not sure whether that is correct or not.

",37982,,2444,,1/10/2021 15:42,1/10/2021 15:42,"What does the parameter $y$ stand for in function $g(y,\mu,\sigma)$ related to REINFORCE algorithm?",,2,1,,,,CC BY-SA 4.0 25672,2,,25671,1/10/2021 11:06,,0,,"

The paper states: "To simplify notation, we focus on one single unit and omit the usual unit index subscript throughout".

So they are simply dropping the unit index $i$ from the equation for simplicity: $g$ is a function of a given instance $y$ and the parameters $\mu$ and $\sigma$.

",43651,,,,,1/10/2021 11:06,,,,2,,,,CC BY-SA 4.0 25674,1,,,1/10/2021 11:44,,2,74,"

I have an image and a mask. I want the image to stay the same, but to be rotated, scaled, and positioned like the mask. What can I use?

",43631,,,,,1/10/2021 17:09,What algorithm would you advise me to use for my task?,,1,0,,,,CC BY-SA 4.0 25675,1,,,1/10/2021 12:33,,2,27,"

I found a similar post about this issue, but unfortunately I did not find a proper answer. Are there any references where DQN performs better than Double DQN, i.e. where Double DQN does not improve on DQN?

",36055,,,,,1/10/2021 12:33,Can DQN outperform DoubleDQN?,,0,1,,,,CC BY-SA 4.0 25676,1,25682,,1/10/2021 12:39,,2,492,"

I came up with an NLP-related problem where I have a list of words and a string. My goal is to find any word in the list of words that is related to the given string.

Here is an example.

Suppose a word from the list is healthy. If the string has any of the following words: healthy, healthier, healthiest, not healthy, more healthy, zero healthy, etc., it will be extracted from the string.

Also, I want to judge whether the extracted word/s is/are bearing positive/negative sentiment.

Let me further explain what I mean by using the previous example.

Our word was healthy. So, for instance, if the word found in the string was healthier, then we can say it is bearing positive sentiment with respect to the word healthy. If we find the word not healthy, it is negative with respect to the word healthy.

",37980,,2444,,1/11/2021 0:31,1/11/2021 0:31,"How can I find words in a string that are related to a given word, then associate a sentiment to that found word?",,1,0,,,,CC BY-SA 4.0 25678,1,,,1/10/2021 12:53,,5,945,"

The motivation for the introduction of double DQN (and double Q-learning) is that regular Q-learning (or DQN) can overestimate the Q value, but is there a brief explanation as to why it is overestimated?

",43246,,2444,,1/10/2021 17:55,1/10/2021 21:13,Why does regular Q-learning (and DQN) overestimate the Q values?,,1,0,,,,CC BY-SA 4.0 25679,2,,42,1/10/2021 13:29,,2,,"

Actually, the hierarchical learning explanation given by mindcrime is not that widely accepted anymore (this was also indicated by Ian Goodfellow), since there are neural networks with 150 layers or more, and such an explanation does not make sense for those networks. Instead, we can think of deep networks as untangling the knots of high-dimensional manifolds, i.e. we transform the input into a high-dimensional space, and this helps us to find a better representation of the data.

A geometric interpretation was explained as such in the book Deep Learning with Python by François Chollet:

...you can interpret a neural network as a very complex geometric transformation in a high-dimensional space, implemented via a long series of simple steps...

Imagine two sheets of colored paper: one red and one blue. Put one on top of the other. Now crumple them together into a small ball. That crumpled paper ball is your input data, and each sheet of paper is a class of data in a classification problem. What a neural network (or any other machine-learning model) is meant to do is figure out a transformation of the paper ball that would uncrumple it, so as to make the two classes cleanly separable again. With deep learning, this would be implemented as a series of simple transformations of the 3D space, such as those you could apply on the paper ball with your fingers, one movement at a time. Uncrumpling paper balls is what machine learning is about: finding neat representations for complex, highly folded data manifolds. At this point, you should have a pretty good intuition as to why deep learning excels at this: it takes the approach of incrementally decomposing a complicated geometric transformation into a long chain of elementary ones, which is pretty much the strategy a human would follow to uncrumple a paper ball. Each layer in a deep network applies a transformation that disentangles the data a little—and a deep stack of layers makes tractable an extremely complicated disentanglement process.

I suggest you to read this brilliant blog post to learn about the topological interpretation of deep learning.

Also, this toy interactive code may help you.

In the context of machine learning, the concept of a manifold can be illustrated as in the following figure.

In the first part, the data are 3-dimensional. However, we can find a transformation that produces the second image, which shows that the data are only artificially high-dimensional, i.e. they form a 2-dimensional manifold in 3-D space. This example may be thought of as a classification problem, where the colors represent classes, and we can find a trivial representation of the data for classification.

Another example is given by the following figures from the blog I mentioned. Here, the classification problem cannot be solved without a layer that has 3 or more hidden units, regardless of depth. So the notion of a high-dimensional transformation is important.

We can map this data to 3-D, and find a plane to separate them.

",41615,,2444,,1/10/2021 15:08,1/10/2021 15:08,,,,0,,,,CC BY-SA 4.0 25680,2,,25671,1/10/2021 13:47,,2,,"

If you take a look at the Wikipedia page related to the normal distribution, you will see the definition of the Gaussian density

$$ {\displaystyle f(x)={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x-\mu }{\sigma }}\right)^{2}}} \label{1}\tag{1} $$

and you will see that the $y$ in your formula corresponds to the $x$ in equation \ref{1}.

I've seen this notation in the context of computer vision and image processing, where the Gaussian kernel is used to blur images.

So, as pointed out by someone in a comment, $y$ should indeed be the point where you evaluate the density.

Maybe the confusing part is that, in the notation $g(y, \mu, \sigma)$, all arguments look as if they serve the same purpose, while $\mu$ and $\sigma$ are really the parameters that define the specific density, and $y$ is the input at which that density is evaluated.

After having read the relevant section of the paper, I now understand why you're confused. The author refers to $y$ as the output (not yet sure why: maybe it's the output of another unit that feeds this Gaussian unit?), but I think that this explanation still applies. The output of the Gaussian density $g$ is not $y$, but the density that corresponds to $y$. In fact, in appendix $B$ of the paper, the author says that $Y$ is the support of $g$ and $y$ is an element of $Y$.
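To make the distinction concrete, here is a small sketch (not from the paper) that samples a point $y$ and evaluates the density at it; note that $g$ returns the density value, not $y$ itself:

import math
import random

def g(y, mu, sigma):
    # Gaussian density evaluated at y, for parameters mu and sigma.
    return math.exp(-((y - mu) ** 2) / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

mu, sigma = 0.0, 1.0
y = random.gauss(mu, sigma)   # a point in the support Y, e.g. a sampled output
print(y, g(y, mu, sigma))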

",2444,,2444,,1/10/2021 14:59,1/10/2021 14:59,,,,7,,,,CC BY-SA 4.0 25681,2,,25670,1/10/2021 15:24,,1,,"

The application of 1 kernel (aka filter) to an input (with a 2d convolution) is a matrix (a 2d array), which is often known as a feature map (aka activation map). The application of $k$ kernels to the same input is a 3d array (sometimes called tensor, though this may not be exactly correct, or 3d volume) with depth $k$, i.e. you have $k$ concatenated feature maps. This 3d array is the input to the next convolutional layer, which needs to have 3d kernels of depth $k$: this is the main requirement in the case of 2d convolutions (in the case of 3d convolutions, the kernels can have a different depth than the input 3d volume, but you can ignore this for now!).

Another thing that you need to take care of is the padding that you can add around this 3d volume. This, along with the stride (i.e. the step with which you move the kernel), determines the height and width of the output of the convolutional layer.

So, to recapitulate, what you have to do is set the depth of the kernels in the convolutional layer $l+1$ to be equal to the number of kernels that you use in the convolutional layer $l$. So, let's say that the number of kernels is a hyper-parameter (note that this is usually the case!), and that the user sets the number of kernels $k=8$ in the convolutional layer $l$, and, for simplicity, assume that all kernels are $3 \times 3$, then, in layer $l$, you will perform $8$ convolutions, one for each kernel. These $8$ convolutions produce a 3d array of dimensions $w \times h \times 8$ (where $w$ and $h$ depend on the dimensions of the input to layer $l$, the stride, and padding). Consequently, in the convolutional layer $l+1$, the kernels need to be $3 \times 3 \times 8$.

If it's the pooling layer that produces the 3d array to be passed to the next convolutional layer, you should still set the depth of the kernels in the convolutional layer $l+1$ to the number of kernels in the convolutional layer $l$ (provided that, in the pooling layer, you only redimension the width and height of the 3d array, which is usually the case).
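Here is a minimal PyTorch sketch of the depth-matching rule described above (the specific numbers are just for illustration):

import torch

x = torch.randn(1, 3, 32, 32)   # e.g. one RGB input image

conv_l  = torch.nn.Conv2d(in_channels=3, out_channels=8, kernel_size=3, padding=1)
pool    = torch.nn.MaxPool2d(kernel_size=2)
# The kernels of the next conv layer must have depth 8 (in_channels=8),
# because layer l produced 8 feature maps; pooling does not change the depth.
conv_l1 = torch.nn.Conv2d(in_channels=8, out_channels=16, kernel_size=3, padding=1)

out = conv_l1(pool(conv_l(x)))
print(out.shape)   # torch.Size([1, 16, 16, 16])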

",2444,,2444,,1/10/2021 17:51,1/10/2021 17:51,,,,0,,,,CC BY-SA 4.0 25682,2,,25676,1/10/2021 16:39,,1,,"

There are many ways to solve this problem. One way is to apply stemming or lemmatization to reduce your words. Using NLTK's Porter stemmer for example on healthy, healthier, healthiest, not healthy, more healthy, and zero healthy gives:

healthi , healthier , healthiest , not healthi , more healthi , zero healthi

This can help make word comparisons easier.
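For instance, a quick sketch with NLTK's Porter stemmer (assuming NLTK is installed) that should reproduce the stems above by stemming each word of each phrase:

from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
phrases = ["healthy", "healthier", "healthiest", "not healthy", "more healthy", "zero healthy"]
for phrase in phrases:
    print(" ".join(stemmer.stem(word) for word in phrase.split()))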

Sentiment analysis on the phrases will provide positive, neutral, and negative scores. There are a lot of algorithms for doing this but a common one is Valence Aware Dictionary and sEntiment Reasoner (VADER). Here is a recent article with code using NLTK and the VADER lexicon:

The following article also does sentiment analysis using NLTK and includes stemming and lemmatization. Instead of VADER they use a Naive Bayes classifier on a labeled data set of tweets: How To Perform Sentiment Analysis in Python 3 Using the Natural Language Toolkit (NLTK) by Saumik Daityari.
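As a rough illustration of the VADER route (not taken from either article; it assumes NLTK and its vader_lexicon resource are available):

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")   # one-time download of the lexicon
analyzer = SentimentIntensityAnalyzer()
for phrase in ["healthier", "not healthy", "zero healthy"]:
    print(phrase, analyzer.polarity_scores(phrase))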

",5763,,5763,,1/10/2021 16:55,1/10/2021 16:55,,,,1,,,,CC BY-SA 4.0 25683,1,,,1/10/2021 16:43,,0,44,"

I am attempting to use time-series classification algorithms for fraud detection applications. I have come across several works in the literature that propose novel techniques for multivariate time-series classification; however, most of these approaches treat each feature as an individual signal.

Now, my preprocessing transforms a transactions dataset into a tensor: one dimension where each observation is an account, one dimension where each element is a transaction, and one dimension for the transaction attributes. The transactions dataset has a large number of features, many of which are one-hot encoded categorical variables. Therefore, I am not really sure that multivariate time-series classification algorithms, such as CNNs or LSTMs, will work in this case, since they will treat every one-hot encoded feature as a signal on its own.

What would be an alternative approach in this case? Would applying PCA on the data to capture the most significant features help instead of the ordinary features?

",43698,,43698,,1/10/2021 17:02,1/10/2021 17:02,Multivariate time-series classification with many variables,,0,3,,,,CC BY-SA 4.0 25684,2,,25674,1/10/2021 17:09,,1,,"

Assuming that the image is blank everywhere but where the face is drawn...

The first step is to scale the image to the mask. That doesn't require a detailed explanation here as it is too trivial a problem.

Second, rotate the image by 90 degrees three times and save each one.

Third, for the four versions of the image (the original and three rotations), do the image addition. The only one with information will be the one that is rotationally aligned with the mask.
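One way to read the "image addition" step (this is an interpretation, not necessarily the answer's exact intent) is to combine each rotated candidate with the mask elementwise and keep the candidate with the most overlap. A NumPy sketch:

import numpy as np

def best_rotation(image, mask):
    # image and mask are assumed to be 2D arrays of the same shape,
    # zero everywhere except where content is drawn.
    candidates = [np.rot90(image, k) for k in range(4)]
    overlaps = [float((cand * mask).sum()) for cand in candidates]
    return candidates[int(np.argmax(overlaps))]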

",5763,,,,,1/10/2021 17:09,,,,0,,,,CC BY-SA 4.0 25685,2,,25678,1/10/2021 17:44,,4,,"

The overestimation comes from the random initialisation of your Q-value estimates. Obviously these will not be perfect (if they were then we wouldn't need to learn the true Q-values!). In many value based reinforcement learning methods such as SARSA or Q-learning the algorithms involve a $\max$ operator in the construction of the target policy. The most obvious case is, as you mentioned, Q-learning. The learning update is $$Q(s, a) = Q(s, a) + \alpha \left[r(s, a) + \gamma \max_a Q(s', a) - Q(s, a) \right] \;.$$ The Q-function for the state-action tuple we are considering is shifted towards the max Q-function at the next state where the $\max$ is taken with respect to the actions.

Now, as mentioned our initial estimates of the Q-values are initialised randomly. This naturally leads to incorrect values. The consequence of this is that when we calculate $\max_aQ(s', a)$ we could be choosing values that are grossly overestimated.

As Q-learning (in the tabular case) is guaranteed to converge (under some mild assumptions), the main consequence of the overestimation bias is that it severely slows down convergence. This can, of course, be overcome with Double Q-learning.

The answer above is for the tabular Q-learning case. The idea is the same for deep Q-learning, except that deep Q-learning has no convergence guarantees (when using a NN as the function approximator), so the overestimation bias is more of a problem, as it can mean the parameters of the network get stuck in sub-optimal values.

Someone asked in the comments about always initialising the values to very low numbers; this would not really work.

Consider the following MDP taken from Sutton and Barto: We start in state A, from which we can either go right with reward 0 leading to a terminal state or go left with reward 0 to state B. From state B we can take, say, 100 different actions, all of which lead to a terminal state and have reward drawn from a Normal distribution with mean -0.1 and variance 1.

Now, clearly, the optimal action from state A is to go right. However, when we go left and take an action in state B, there is an (almost) 0.5 probability of getting a reward bigger than 0. Now, recall that the Q-value is shifted towards $r(s, a) + \gamma \max_a Q(s', a)$; because of the stochastic rewards when transitioning out of state B, and the fact that we will likely see some positive rewards, $\max_a Q(s', a)$ will be positive.

This means that when we take the left action, the Q-value $Q(A, \text{left})$ is shifted towards a positive value. So, when we are in state A, the estimated value of moving left will be higher than that of moving right (which will gradually be shifted towards the true value of 0), and so, when following the $\epsilon$-greedy policy, the greedy action will be to go left, when in fact this is sub-optimal.

Now, of course, we know that the true Q-values will eventually converge but if we do have, say, 100 actions then you can probably see that the time it will take for the Q-values to converge to the true value will potentially be a long time as we would have to keep choosing all the overestimated values until we had convergence.
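To see this numerically, here is a rough, self-contained tabular sketch of that MDP (all hyper-parameter values are illustrative assumptions, and action selection in state B is simplified to a uniform-random choice). Early on, Q(A, left) tends to drift positive because it bootstraps from the max over 100 noisy estimates, so "left" gets chosen far more often than the optimal policy would choose it:

import random

N_B_ACTIONS = 100
alpha, gamma, epsilon = 0.1, 1.0, 0.1

Q = {("A", "left"): 0.0, ("A", "right"): 0.0}
Q.update({("B", b): 0.0 for b in range(N_B_ACTIONS)})

left_count = 0
for episode in range(300):
    # State A: epsilon-greedy choice between "left" and "right" (ties broken randomly).
    candidates = ["left", "right"]
    random.shuffle(candidates)
    if random.random() < epsilon:
        action = random.choice(candidates)
    else:
        action = max(candidates, key=lambda a: Q[("A", a)])
    if action == "right":
        # Going right terminates immediately with reward 0.
        Q[("A", "right")] += alpha * (0.0 - Q[("A", "right")])
        continue
    left_count += 1
    # Going left gives reward 0 and bootstraps from max_a Q(B, a).
    max_q_b = max(Q[("B", b)] for b in range(N_B_ACTIONS))
    Q[("A", "left")] += alpha * (0.0 + gamma * max_q_b - Q[("A", "left")])
    # State B: pick an action, observe a noisy terminal reward with mean -0.1.
    b = random.randrange(N_B_ACTIONS)
    reward = random.gauss(-0.1, 1.0)
    Q[("B", b)] += alpha * (reward - Q[("B", b)])

print(f"'left' chosen {left_count} times out of 300 episodes")
print("Q(A, left) =", Q[("A", "left")], "| Q(A, right) =", Q[("A", "right")])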

",36821,,36821,,1/10/2021 21:13,1/10/2021 21:13,,,,7,,,,CC BY-SA 4.0 25688,1,,,1/10/2021 22:10,,0,159,"

I am working on an RL problem where the time when the agent obtains the reward for taking action $a$ in time step $t$ is stochastic. In fact, there is no immediate reward for taking action $a$ in time step $t$, and, for example, the agent may obtain the reward in time step $t+k$ (where $k>1$). I was wondering if this kind of reward function is categorized as a non-Markovian reward function, and which RL method works better (to approximate/find the optimum policy) in this environment?

P.S. This differs from sparse-reward problems. In my problem, there is a non-zero reward associated with every action taken. However, the agent does not receive any reward immediately. In fact, once the agent takes an action $a$ at time step $t$, the agent has no control over when it will receive the reward associated with that action. The time when the reward is received is stochastic.

",43709,,2444,,1/12/2021 10:04,1/12/2021 10:04,Is my reward function non-Markovian?,,0,5,,,,CC BY-SA 4.0 25691,2,,42,1/10/2021 22:56,,0,,"

One aspect that I'd like to add to the previous answers is the so-called Curse of dimensionality. This concept refers to the problem that many algorithms have a time complexity that grows exponentially with the dimension of the data.

As a simple example, let us consider a set $\{0,1\}^{D}$ that has only two values per dimension. For example, $\{0,1\}^{2} = \{(0,0),(0,1),(1,0),(1,1)\}$ and $(0,1,0) \in \{0,1\}^{3}$. Now imagine that you are given a function $f: \{0,1\}^{D} \rightarrow \{TRUE, FALSE\}$ that outputs TRUE for exactly one particular input. The goal is to determine that input.

In the example, if nothing else is known about f, the best thing one can do is to try the inputs one after another. However, $\{0,1\}^{D}$ has $2^D$ elements. So the number of inputs one has to try out will in general be roughly $2^D$ as well.
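For instance, a brute-force search like the following (with a hypothetical $f$ that is TRUE only for the all-ones input) has to go through roughly $2^D$ candidates in the worst case:

from itertools import product

def find_true_input(f, D):
    # Worst case: tries all 2**D elements of {0,1}^D.
    for x in product((0, 1), repeat=D):
        if f(x):
            return x

D = 20
target = tuple([1] * D)
print(find_true_input(lambda x: x == target, D))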

However, there exist examples suffering from the curse of dimensionality that can be solved with deep learning, i.e. using neural networks with many hidden layers. One example of great practical importance is given by high-dimensional partial differential equations, see e.g. this report:

http://www.sam.math.ethz.ch/sam_reports/reports_final/reports2017/2017-44_fp.pdf

or this example for heat equations:

https://arxiv.org/abs/1901.10854

I also found this review on using deep learning to overcome the curse of dimensionality:

https://cbmm.mit.edu/sites/default/files/publications/02_761-774_00966_Bpast.No_.66-6_28.12.18_K1.pdf

",43710,,,,,1/10/2021 22:56,,,,1,,,,CC BY-SA 4.0 25692,2,,77,1/10/2021 23:03,,0,,"

Clojure, a dialect of Lisp (implemented for the Java Virtual Machine), was used to implement Clojush, a PushGP system, i.e. a genetic programming (which is a sub-class of evolutionary algorithms) system based on the use of the Push programming language, which is a stack-based programming language. The lead developer of Clojush, Lee Spector, and other people still do research on these topics. So, yes, Lisp is still being used in artificial intelligence!

However, it's also true that Python and C/C++ (to implement the low-level stuff) are probably the two most used programming languages in AI nowadays, especially for deep learning.

",2444,,,,,1/10/2021 23:03,,,,0,,,,CC BY-SA 4.0 25702,1,,,1/11/2021 3:25,,2,40,"

I'm trying to use the Binary Flower Pollination Algorithm (BFPA) for feature selection. In the BFPA, the sigmoid function is used to compute a binary vector that represents whether a feature is selected or not. Here are the relevant equations from the paper (page 4).

$$ S\left(x_{i}^{j}(t)\right)=\frac{1}{1+e^{-x_{i}^{j}(t)}} \tag{4}\label{4} $$

\begin{equation} x_{i}^{j}(t)=\left\{\begin{array}{ll} 1 & \text { if } S\left(x_{i}^{j}(t)\right)>\sigma \\ 0 & \text { otherwise } \end{array}\right. \tag{5}\label{5} \end{equation}

In my case, I noticed that my algorithm sometimes returns a zero vector (i.e. all elements are zeros, such as $[0,0,0,0,0,0,0,0,0]$), which means that no feature is selected (a feature is selected when its entry is $1$), and this makes the fitness function return an error.
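Here is a minimal sketch (with hypothetical values) of how this can happen: if every component of the solution vector is small or negative enough that its sigmoid stays below $\sigma$, the whole binary vector ends up as zeros.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.3, -0.7, -1.1, -3.0, -0.2])   # hypothetical solution vector
sigma = 0.7                                     # hypothetical threshold
binary = (sigmoid(x) > sigma).astype(int)
print(binary)   # [0 0 0 0 0] -> no feature selected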

Is it expected that the sigmoid-based binarization can return a result like that?

",43279,,2444,,1/12/2021 0:32,1/12/2021 0:32,"In the Binary Flower Pollination Algorithm (using the sigmoid function), is it possible that no feature is selected?",,0,8,,,,CC BY-SA 4.0 25712,1,,,1/11/2021 13:31,,5,241,"

There are proofs for the universal approximation theorem with just 1 hidden layer.

The proof goes like this:

  1. Create a "bump" function using 2 neurons.

  2. Create (infinitely) many of these step functions with different angles in order to create a tower-like shape.

  3. Decrease the step/radius to a very small value in order to approximate a cylinder. This is the step I'm not convinced of.

  4. Using these cylinders, one can approximate any shape. (At this point, it's basically just a packing problem like this.)

In this video, minute 42, the lecturer says

In the limit that's going to be a perfect cylinder. If the cylinder is small enough. It's gonna be a perfect cylinder. Right ? I have control over the radius.

Here are the slides.

Here is a pdf version from another university, so you do not have to watch the video.

Why am I not convinced?

I created a program to plot this, and even if I decrease the radius by orders of magnitude it still has the same shape.

Let's start with a simple tower of radius 0.1:

Now let's decrease the radius to 0.01:

Now, you might think that it gets close to a cylinder, but it only looks like it is approximating a perfect cylinder because of the zoomed-out effect.

Let's zoom in:

Let's decrease the radius to 0.0000001.

Still not a perfect cylinder. In fact, the "quality" of the cylinder is the same.

Python code to reproduce (requires NumPy and matplotlib): https://pastebin.com/CMXFXvNj.

So my questions are:

Q1 Is it true that we can get a perfect cylinder solely by decreasing the radius of the tower to 0?

Q2 If this is true, why is there no difference when I plot it with different radii (0.1 vs. 1e-7)?

Both towers have the same shape.

Clarification: what do I mean by "same shape"? Let's say we calculate the volume of an actual cylinder ($V_c$) with the same radius and height as our tower and divide it by the volume of the tower ($V_t$).

$V_c$ = volume of the cylinder

$V_t$ = volume of the tower

$\text{ratio}(r) = V_c / V_t$

What these documents/lectures claim is that the ratio of these two volumes depends on the radius, but in my view it is just constant.

So what they are saying is that $\lim_{r \to 0} \text{ratio}(r) = 1$, but my experiments show that $\lim_{r \to 0} \text{ratio}(r) = \text{const}$, independent of the radius.

Q3 Preface

An objection I got multiple times (once from Dutta and once from D.W.) is that just decreasing the radius and plotting it isn't mathematically rigorous.

So let's assume that, in the limit $r = 0$, it really is a perfect cylinder.

One possible explanation for this would be that the limit is a special case, and one can't gradually approximate towards it.

But if that is true this would imply that there is no use for it since it's impossible to have a radius of exactly zero. It would only be useful if we could get gradually closer to a perfect cylinder by decreasing the radius.

Q3 So why should we even care about this, then?

Further Clarifications

The original universal approximation theorem proof for single-hidden-layer neural networks was done by G. Cybenko. Then, I think, people tried to make some visual explanations for it. I am NOT questioning the paper! But I am questioning the visual explanation given in the linked lecture/PDF (made by other people).

",43729,,43729,,1/21/2021 12:42,5/20/2022 19:48,"Is it really possible to create the ""Perfect Cylinder"" used in Universal Approximation Theorem for 1-hidden layer Neural Network?",,2,16,,,,CC BY-SA 4.0 25713,1,,,1/11/2021 15:10,,2,59,"

I am just getting into medical image segmentation and have been able to understand the state-of-the-art architectures, like Double UNet, UNet++, and Multiresunet.

What I haven't understood yet: Why are these approaches better for medical segmentation than, for example, HRNet-OCR, which currently tops the rankings of the Cityscapes dataset, and vice versa?

",43632,,2444,,1/12/2021 0:18,1/12/2021 0:18,How and why do state-of-the-art models in medical segmentation differ from general segmentation models?,,0,1,,,,CC BY-SA 4.0 25714,1,,,1/11/2021 16:03,,2,376,"

I'm training a VAE to reconstruct some input (channels picked up by some MIMO BS for context) and I ran an experiment on the training set to see how the performance improves with the latent space dimension.

My VAE structure is as follows: input: 2048 -> 1024 -> 512 -> latent space dimension -> 512 -> 1024 -> output: 2048

Here is what I get in terms of relative error when the latent space dimension goes from 2 to 100:

Everything works as expected at the beginning, but the error starts rising at around 50, and I have no idea why. With a large latent space dimension, the output is orders of magnitude smaller than the input, which explains the relative error of value 1.

Here is the same figure when I run the exact same experiment but with a normal autoencoder this time.

This time the results are consistent.

What's wrong with my VAE?

",43733,,,,,12/16/2022 16:03,VAE giving near zero output when latent space dimension is large,,1,3,,,,CC BY-SA 4.0 25715,1,,,1/11/2021 16:30,,2,207,"

I am new to few-shot learning, and I wanted to get a hands-on understanding of it, using Reptile algorithm, applied to my custom dataset.

My custom dataset has 30 categories, with 5 images per category, so this would be a 30-way 5-shot problem.

Given a new image, I wish to be able to classify it into one of 30 categories. I changed train_shots = 5, classes = 30 in the linked example, and got the training output as

batch 0: train=0.050000 test=0.050000
batch 1: train=0.050000 test=0.050000

Should the custom dataset be used as a validation set, with mini-ImageNet as the training dataset, so that knowledge is transferred? Or can I use only my custom dataset, with only $30 \times 5 = 150$ images, for training?

",43623,,2444,,1/12/2021 0:16,1/12/2021 0:16,"In few-shot classification, should I use my custom dataset as the validation dataset and mini-ImageNet as the training dataset?",,0,0,,,,CC BY-SA 4.0 25717,1,,,1/11/2021 18:32,,3,313,"

In Monte Carlo Dropout (MCD), I know that I should enable dropout during training and testing, then get multiple predictions for the same input $x$ by performing multiple forward passes with $x$, then, for example, average these predictions.

Let's suppose I want to fine-tune a pre-trained model and get MCD uncertainty estimates: how should I add the dropout layers?

  • on the fully-connected layers;
  • after every convolutional layer.

I've read some papers and implementations where one applies dropout at fully-connected layers only, using a pre-trained model. However, when using a custom model, usually one adds dropout after every convolutional layer. This work builds two configurations:

  • dropout on the fully-connected layers;
  • dropout after resnet blocks.

The first configuration performs better, but I'm unsure if this is an actual uncertainty estimation from resnet. The results show that there is a correlation between high uncertainty predictions and incorrect predictions. So, would this be a good way of estimating uncertainty? My shot is "yes", because even though there are no nodes being sampled from the backbone, the sampling in the fully-connected layer forces a smooth variation in the backbone, generating a low-variance ensemble. But, I'm quite a beginner on MCD so, any help would be appreciated.

",42100,,42100,,1/15/2021 19:52,1/15/2021 19:52,How can I use Monte Carlo Dropout in a pre-trained CNN model?,,0,0,,,,CC BY-SA 4.0 25725,1,,,1/12/2021 5:51,,3,59,"

I am writing about the role of machine learning scientists in developing a solution. Is there a term for the humans who do learning? Can we call a "team of machine learning scientists with their computers working on some ML problem" an intelligent agent? Is "cognizer" the right term? I know that "learner" is reserved for an ML algorithm. I just want a shorter term for their role in cognition, learning.

",43741,,2444,,1/12/2021 9:46,1/12/2021 11:09,"Is a team of ML scientists an ""intelligent agent""?",,1,1,,,,CC BY-SA 4.0 25726,1,25848,,1/12/2021 8:15,,1,398,"

In this research paper, we have the following claim

the smoothness assumption that underlies many kernel methods such as Support Vector Machines (SVMs) does not hold for deep neural networks trained through backpropagation

Does smoothness here refer to no sharp rise/fall in gradients?

",35616,,2444,,1/12/2021 18:35,1/19/2021 16:12,What is the smoothness assumption in SVMs?,,1,0,,,,CC BY-SA 4.0 25727,2,,25725,1/12/2021 8:50,,3,,"

Is there a term for the humans who do [machine] learning?

Typically you will see "AI researchers" for people studying machine intelligence in general, or "data scientists" for people working with statistics or studying specific solutions in machine learning. Both those terms are used quite flexibly, and generally understood to be scientists/engineers that work on machine intelligence problems. There are also other specialisms that might apply, such as "computer vision researcher".

If you are writing only about researchers who are focused mostly on learning systems - such as new types of learning model, or the best way to train a model for some group of tasks - then "machine learning researchers" or "ML researchers" would be fine. If they are specific people, you may want to ask them directly what title they prefer, in case you accidentally misrepresent their role and work.

Can we call a "team of machine learning scientists with their computers working on some ML problem" an intelligent agent?

No. That might be confused with something that the team is working on or creating. An "intelligent agent" can be the product of specific AI research. For instance, it is reasonable to consider AlphaZero to be an "intelligent agent" (not everyone would call it that, it depends on context, but not relevant to your question).

Is "cognizer" a right term?

No. That is an obscure term meaning something like "a being that perceives or knows". No-one will naturally associate that word with a team of researchers and their computers.

I just want a shorter term for their role in cognition, learning.

I think the most usual way to do this is to describe the team, and name them. For example:

  • "The MIT computer science researchers are using the X1000 system to investigate learning of abstract spaces. The MIT team said yesterday . . ."

  • "Professor Hilary Smith is working with a team of graduates on the Learny McLearnface framework. Smith's team have discovered that . . ."

This is the normal way that you see a research team introduced and referred in science journalism in e.g. New Scientist or Scientific American.

If you need to describe some abstract approach about a team working in general, then you can use a similar approach, naming your teams according to how you need to differentiate between them.

",1847,,1847,,1/12/2021 11:09,1/12/2021 11:09,,,,2,,,,CC BY-SA 4.0 25730,1,,,1/12/2021 12:22,,1,50,"

I want to pre-train a model (composed of two popular modules, A and B, both of which are large blocks), then fine-tune it on downstream tasks.

What if, for the weight initialization before pre-training, module A is initialized from some checkpoint, while B is initialized randomly? Can I still call the process pre-training? Or must all modules in the model be initialized randomly for it to be called pre-training?

If parts of the model's weights are 'contaminated' by checkpoints, can it only be called fine-tuning?

",43349,,2444,,1/12/2021 19:57,1/12/2021 19:57,What is the definition of pre-training?,,0,0,,,,CC BY-SA 4.0 25732,1,,,1/12/2021 14:54,,2,64,"

I am currently working on implementing a Multi-Agent System for Smart Grids.

There's a lot of literature for that and some things confuse me. I have read that there is FIPA, which aimed to create a unified Agent Communication Language. So multiple Agents are talking to each other and FIPA specifies how the messages should be sent and processed. However, it is pretty old.

In newer papers, where multi-agent reinforcement learning (MARL) algorithms are proposed, FIPA, or generally any ACL, isn't mentioned. I believe that is because, in MARL, communication is done by observing the states of the other agents, rather than by communicating explicitly. Also, in MARL, the decision-making is not based on negotiation as in FIPA, but on the learned policy.

I am now super confused about whether I got it right. Is FIPA still a thing I should worry about when I design my multi-agent system? Is there any other way to handle communication in MARL other than sharing states?

Any help would be really appreciated, thank you very much :)

",43752,,,,,7/1/2021 11:19,What place do Agent Communications Language have in Multi-Agent Systems nowadays?,,0,2,,,,CC BY-SA 4.0 25733,1,,,1/12/2021 15:17,,1,110,"

My variational autoencoder seems to work for MNIST, but fails on slightly "harder" data.
By "fails" I mean there are at least two apparent problems:

  1. Very poor reconstruction, for example sample reconstructions from the last epoch on validation set without any regularization at all.
    The last reported losses from console are val_loss=9.57e-5, train_loss=9.83e-5 which I thought would imply exact reconstructions.
  2. validation loss is low (which does not seem to reflect the reconstruction), and always lower than training loss which is very suspicious.

For MNIST everything looks fine (with less layers!).

I will give as much information as I can, since I am not sure what I should provide to help anyone help me.


Firstly, here is the full code
You will notice that the loss calculation and logging are very simple and straightforward, and I can't seem to find what's wrong.

import torch
from torch import nn
import torch.nn.functional as F
from typing import List, Optional, Any
from pytorch_lightning.core.lightning import LightningModule
from Testing.Research.config.ConfigProvider import ConfigProvider
from pytorch_lightning import Trainer, seed_everything
from torch import optim
import os
from pytorch_lightning.loggers import TensorBoardLogger
# import tfmpl
import matplotlib.pyplot as plt
import matplotlib
from Testing.Research.data_modules.MyDataModule import MyDataModule
from Testing.Research.data_modules.MNISTDataModule import MNISTDataModule
from Testing.Research.data_modules.CaseDataModule import CaseDataModule
import torchvision
from Testing.Research.config.paths import tb_logs_folder
from Testing.Research.config.paths import vae_checkpoints_path
from pytorch_lightning.callbacks.model_checkpoint import ModelCheckpoint


class VAEFC(LightningModule):
    # see https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73
    # for possible upgrades, see https://arxiv.org/pdf/1602.02282.pdf
    # https://stats.stackexchange.com/questions/332179/how-to-weight-kld-loss-vs-reconstruction-loss-in-variational
    # -auto-encoder
    def __init__(self, encoder_layer_sizes: List, decoder_layer_sizes: List, config):
        super(VAEFC, self).__init__()
        self._config = config
        self.logger: Optional[TensorBoardLogger] = None
        self.save_hyperparameters()

        assert len(encoder_layer_sizes) >= 3, "must have at least 3 layers (2 hidden)"
        # encoder layers
        self._encoder_layers = nn.ModuleList()
        for i in range(1, len(encoder_layer_sizes) - 1):
            enc_layer = nn.Linear(encoder_layer_sizes[i - 1], encoder_layer_sizes[i])
            self._encoder_layers.append(enc_layer)

        # predict mean and covariance vectors
        self._mean_layer = nn.Linear(encoder_layer_sizes[
                                         len(encoder_layer_sizes) - 2],
                                     encoder_layer_sizes[len(encoder_layer_sizes) - 1])
        self._logvar_layer = nn.Linear(encoder_layer_sizes[
                                           len(encoder_layer_sizes) - 2],
                                       encoder_layer_sizes[len(encoder_layer_sizes) - 1])

        # decoder layers
        self._decoder_layers = nn.ModuleList()
        for i in range(1, len(decoder_layer_sizes)):
            dec_layer = nn.Linear(decoder_layer_sizes[i - 1], decoder_layer_sizes[i])
            self._decoder_layers.append(dec_layer)

        self._recon_function = nn.MSELoss(reduction='mean')
        self._last_val_batch = {}

    def _encode(self, x):
        for i in range(len(self._encoder_layers)):
            layer = self._encoder_layers[i]
            x = F.relu(layer(x))

        mean_output = self._mean_layer(x)
        logvar_output = self._logvar_layer(x)
        return mean_output, logvar_output

    def _reparametrize(self, mu, logvar):
        if not self.training:
            return mu
        std = logvar.mul(0.5).exp_()
        if std.is_cuda:
            eps = torch.FloatTensor(std.size()).cuda().normal_()
        else:
            eps = torch.FloatTensor(std.size()).normal_()
        reparameterized = eps.mul(std).add_(mu)
        return reparameterized

    def _decode(self, z):
        for i in range(len(self._decoder_layers) - 1):
            layer = self._decoder_layers[i]
            z = F.relu((layer(z)))

        decoded = self._decoder_layers[len(self._decoder_layers) - 1](z)
        # decoded = F.sigmoid(self._decoder_layers[len(self._decoder_layers)-1](z))
        return decoded

    def _loss_function(self, recon_x, x, mu, logvar, reconstruction_function):
        """
        recon_x: generating images
        x: origin images
        mu: latent mean
        logvar: latent log variance
        """
        binary_cross_entropy = reconstruction_function(recon_x, x)  # mse loss TODO see if mse or cross entropy
        # loss = 0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
        kld_element = mu.pow(2).add_(logvar.exp()).mul_(-1).add_(1).add_(logvar)
        kld = torch.sum(kld_element).mul_(-0.5)
        # KL divergence Kullback–Leibler divergence, regularization term for VAE
        # It is a measure of how different two probability distributions are different from each other.
        # We are trying to force the distributions closer while keeping the reconstruction loss low.
        # see https://towardsdatascience.com/understanding-variational-autoencoders-vaes-f70510919f73

        # read on weighting the regularization term here:
        # https://stats.stackexchange.com/questions/332179/how-to-weight-kld-loss-vs-reconstruction-loss-in-variational
        # -auto-encoder
        return binary_cross_entropy + kld * self._config.regularization_factor

    def _parse_batch_by_dataset(self, batch, batch_index):
        if self._config.dataset == "toy":
            (orig_batch, noisy_batch), label_batch = batch
            # TODO put in the noise here and not in the dataset?
        elif self._config.dataset == "mnist":
            orig_batch, label_batch = batch
            orig_batch = orig_batch.reshape(-1, 28 * 28)
            noisy_batch = orig_batch
        elif self._config.dataset == "case":
            orig_batch, label_batch = batch

            orig_batch = orig_batch.float().reshape(
                    -1,
                    len(self._config.case.feature_list) * self._config.case.frames_per_pd_sample
            )
            noisy_batch = orig_batch
        else:
            raise ValueError("invalid dataset")
        noisy_batch = noisy_batch.view(noisy_batch.size(0), -1)

        return orig_batch, noisy_batch, label_batch

    def training_step(self, batch, batch_idx):
        orig_batch, noisy_batch, label_batch = self._parse_batch_by_dataset(batch, batch_idx)

        recon_batch, mu, logvar = self.forward(noisy_batch)

        loss = self._loss_function(
                recon_batch,
                orig_batch, mu, logvar,
                reconstruction_function=self._recon_function
        )
        # self.logger.experiment.add_scalars("losses", {"train_loss": loss})
        tb = self.logger.experiment
        tb.add_scalars("losses", {"train_loss": loss}, global_step=self.current_epoch)
        # self.logger.experiment.add_scalar("train_loss", loss, self.current_epoch)
        if batch_idx == len(self.train_dataloader()) - 2:
            # https://pytorch.org/docs/stable/_modules/torch/utils/tensorboard/writer.html#SummaryWriter.add_embedding
            # noisy_batch = noisy_batch.detach()
            # recon_batch = recon_batch.detach()
            # last_batch_plt = matplotlib.figure.Figure()  # read https://github.com/wookayin/tensorflow-plot
            # ax = last_batch_plt.add_subplot(1, 1, 1)
            # ax.scatter(orig_batch[:, 0], orig_batch[:, 1], label="original")
            # ax.scatter(noisy_batch[:, 0], noisy_batch[:, 1], label="noisy")
            # ax.scatter(recon_batch[:, 0], recon_batch[:, 1], label="reconstructed")
            # ax.legend(loc="upper left")
            # self.logger.experiment.add_figure(f"original last batch, epoch {self.current_epoch}", last_batch_plt)
            # tb.add_embedding(orig_batch, global_step=self.current_epoch, metadata=label_batch)
            pass
        self.logger.experiment.flush()
        self.log("train_loss", loss, prog_bar=True, on_step=False, on_epoch=True)
        return loss

    def _plot_batches(self, orig_batch, noisy_batch, label_batch, batch_idx, recon_batch, mu, logvar):
        # orig_batch_view = orig_batch.reshape(-1, self._config.case.frames_per_pd_sample,
        # len(self._config.case.feature_list))
        #
        # plt.figure()
        # plt.plot(orig_batch_view[11, :, 0].detach().cpu().numpy(), label="feature 0")
        # plt.legend(loc="upper left")
        # plt.show()

        tb = self.logger.experiment
        if self._config.dataset == "mnist":
            orig_batch -= orig_batch.min()
            orig_batch /= orig_batch.max()
            recon_batch -= recon_batch.min()
            recon_batch /= recon_batch.max()

            orig_grid = torchvision.utils.make_grid(orig_batch.view(-1, 1, 28, 28))
            val_recon_grid = torchvision.utils.make_grid(recon_batch.view(-1, 1, 28, 28))

            tb.add_image("original_val", orig_grid, global_step=self.current_epoch)
            tb.add_image("reconstruction_val", val_recon_grid, global_step=self.current_epoch)

            label_img = orig_batch.view(-1, 1, 28, 28)
            pass
        elif self._config.dataset == "case":
            orig_batch_view = orig_batch.reshape(-1, self._config.case.frames_per_pd_sample,
                                                 len(self._config.case.feature_list)).transpose(1, 2)
            recon_batch_view = recon_batch.reshape(-1, self._config.case.frames_per_pd_sample,
                                                   len(self._config.case.feature_list)).transpose(1, 2)

            # plt.figure()
            # plt.plot(orig_batch_view[11, 0, :].detach().cpu().numpy())
            # plt.show()
            # pass

            n_samples = orig_batch_view.shape[0]
            n_plots = min(n_samples, 4)
            first_sample_idx = 0

            # TODO either plotting or data problem
            fig, axs = plt.subplots(n_plots, 1)
            for sample_idx in range(n_plots):
                for feature_idx, (orig_feature, recon_feature) in enumerate(
                        zip(orig_batch_view[sample_idx + first_sample_idx, :, :],
                            recon_batch_view[sample_idx + first_sample_idx, :, :])):
                    i = feature_idx
                    if i > 0: continue  # or scale issues don't allow informative plotting

                    # plt.figure()
                    # plt.plot(orig_feature.detach().cpu().numpy(), label=f'orig{i}, sample{sample_idx}')
                    # plt.legend(loc='upper left')
                    # pass

                    axs[sample_idx].plot(orig_feature.detach().cpu().numpy(), label=f'orig{i}, sample{sample_idx}')
                    axs[sample_idx].plot(recon_feature.detach().cpu().numpy(), label=f'recon{i}, sample{sample_idx}')
                    # sample{sample_idx}')
                    axs[sample_idx].legend(loc='upper left')
                pass
            # plt.show()

            tb.add_figure("recon_vs_orig", fig, global_step=self.current_epoch, close=True)

    def validation_step(self, batch, batch_idx):
        orig_batch, noisy_batch, label_batch = self._parse_batch_by_dataset(batch, batch_idx)

        recon_batch, mu, logvar = self.forward(noisy_batch)

        loss = self._loss_function(
                recon_batch,
                orig_batch, mu, logvar,
                reconstruction_function=self._recon_function
        )

        tb = self.logger.experiment
        # can probably speed up training by waiting for epoch end for data copy from gpu
        # see https://sagivtech.com/2017/09/19/optimizing-pytorch-training-code/
        tb.add_scalars("losses", {"val_loss": loss}, global_step=self.current_epoch)

        label_img = None
        if len(orig_batch) > 2:
            self._last_val_batch = {
                "orig_batch": orig_batch,
                "noisy_batch": noisy_batch,
                "label_batch": label_batch,
                "batch_idx": batch_idx,
                "recon_batch": recon_batch,
                "mu": mu,
                "logvar": logvar
            }
        # self._plot_batches(orig_batch, noisy_batch, label_batch, batch_idx, recon_batch, mu, logvar)

        outputs = {"val_loss":  loss, "recon_batch": recon_batch, "label_batch": label_batch,
                   "label_img": label_img}
        self.log("val_loss", loss, prog_bar=True, on_step=False, on_epoch=True)
        return outputs

    def validation_epoch_end(self, outputs: List[Any]) -> None:
        first_batch_dict = outputs[-1]

        self._plot_batches(
                self._last_val_batch["orig_batch"],
                self._last_val_batch["noisy_batch"],
                self._last_val_batch["label_batch"],
                self._last_val_batch["batch_idx"],
                self._last_val_batch["recon_batch"],
                self._last_val_batch["mu"],
                self._last_val_batch["logvar"]
        )
        self.log(name="VAEFC_val_loss_epoch_end", value={"val_loss": first_batch_dict["val_loss"]})

    def test_step(self, batch, batch_idx):
        orig_batch, noisy_batch, label_batch = self._parse_batch_by_dataset(batch, batch_idx)

        recon_batch, mu, logvar = self.forward(noisy_batch)

        loss = self._loss_function(
                recon_batch,
                orig_batch, mu, logvar,
                reconstruction_function=self._recon_function
        )

        tb = self.logger.experiment
        tb.add_scalars("losses", {"test_loss": loss}, global_step=self.global_step)

        return {"test_loss": loss, "mus": mu, "labels": label_batch, "images": orig_batch}

    def test_epoch_end(self, outputs: List):
        tb = self.logger.experiment

        avg_loss = torch.stack([x['test_loss'] for x in outputs]).mean()
        self.log(name="test_epoch_end", value={"test_loss_avg": avg_loss})

        if self._config.dataset == "mnist":
            tb.add_embedding(
                    mat=torch.cat([o["mus"] for o in outputs]),
                    metadata=torch.cat([o["labels"] for o in outputs]).detach().cpu().numpy(),
                    label_img=torch.cat([o["images"] for o in outputs]).view(-1, 1, 28, 28),
                    global_step=self.global_step,
            )

    def configure_optimizers(self):
        optimizer = optim.Adam(self.parameters(), lr=self._config.learning_rate)
        return optimizer

    def forward(self, x):
        mu, logvar = self._encode(x)
        z = self._reparametrize(mu, logvar)
        decoded = self._decode(z)
        return decoded, mu, logvar


def train_vae(config, datamodule, latent_dim, dec_layer_sizes, enc_layer_sizes):
    model = VAEFC(config=config, encoder_layer_sizes=enc_layer_sizes, decoder_layer_sizes=dec_layer_sizes)

    logger = TensorBoardLogger(save_dir=tb_logs_folder, name='VAEFC', default_hp_metric=False)
    logger.hparams = config

    checkpoint_callback = ModelCheckpoint(dirpath=vae_checkpoints_path)
    trainer = Trainer(deterministic=config.is_deterministic,
                      # auto_lr_find=config.auto_lr_find,
                      # log_gpu_memory='all',
                      # min_epochs=99999,
                      max_epochs=config.num_epochs,
                      default_root_dir=vae_checkpoints_path,
                      logger=logger,
                      callbacks=[checkpoint_callback],
                      gpus=1
                      )
    # trainer.tune(model)
    trainer.fit(model, datamodule=datamodule)
    best_model_path = checkpoint_callback.best_model_path
    print("done training vae with lightning")
    print(f"best model path = {best_model_path}")
    return trainer


def run_trained_vae(trainer):
    # https://pytorch-lightning.readthedocs.io/en/latest/test_set.html
    # (1) load the best checkpoint automatically (lightning tracks this for you)
    trainer.test()

    # (2) don't load a checkpoint, instead use the model with the latest weights
    # trainer.test(ckpt_path=None)

    # (3) test using a specific checkpoint
    # trainer.test(ckpt_path='/path/to/my_checkpoint.ckpt')

    # (4) test with an explicit model (will use this model and not load a checkpoint)
    # trainer.test(model)


Parameters

I am getting very similar results for any combination of parameters I am (manually) using. Maybe I didn't try something.

num_epochs: 40
batch_size: 32
learning_rate: 0.0001
auto_lr_find: False

noise_factor: 0.1
regularization_factor: 0.0

train_size: 0.8
val_size: 0.1
num_workers: 1

dataset: "case" # toy, mnnist, case
mnist:
  enc_layer_sizes: [784, 512,]
  dec_layer_sizes: [512, 784]
  latent_dim: 25
  n_classes: 10
  classifier_layers: [20, 10]
toy:
  enc_layer_sizes: [2, 200, 200, 200]
  dec_layer_sizes: [200, 200, 200, 2]
  latent_dim: 8
  centers_radius: 4.0
  n_clusters: 10
  cluster_size: 5000
case:
  #enc_layer_sizes: [ 1800, 600, 300, 100 ]
  #dec_layer_sizes: [ 100, 300, 600, 1800 ]
  #frames_per_pd_sample: 600

  enc_layer_sizes: [ 10, 600, 300, 300 ]
  dec_layer_sizes: [ 600, 300, 300, 10 ]
  frames_per_pd_sample: 10

  latent_dim: 300
  n_classes: 10
  classifier_layers: [ 20, 10 ] # unused right now.

  feature_list:
    #- V_0_0 # 0, X
    #- V_0_1 # 0, Y
    #- V_0_2 # 0, Z
    - pads_0
  enc_kernel_sizes: [] # for conv
  end_strides: []
  dec_kernel_sizes: []
  dec_strides: []

is_deterministic: False

real_data_pd_dir: "D:/pressure_pd"
case_dir: "real_case_20_min"
case_file: "pressure_data_0.pkl"


Data

For Mnist everything works fine.

When changing to my specific data, results are as above.

The data is a time series of several features. To simplify things even more, I am feeding just a single feature, sliced into equal-length chunks and fed into the input layer as a vector.
The fact that the data is a time series could maybe help modeling in the future, but for now I want to just treat it as chunks of data, which I believe I am doing.

code:

from torch.utils.data import Dataset
import matplotlib.pyplot as plt
import torch
from Testing.Research.config.ConfigProvider import ConfigProvider
import os
import pickle
import pandas as pd
from typing import Tuple
import numpy as np


class CaseDataset(Dataset):
    def __init__(self, path):
        super(CaseDataset, self).__init__()
        self._path = path

        self._config = ConfigProvider.get_config()
        self.frames_per_pd_sample = self._config.case.frames_per_pd_sample
        self._load_case_from_pkl()
        self.__len = len(self._full) // self.frames_per_pd_sample  # discard last non full batch

    def _load_case_from_pkl(self):
        assert os.path.isfile(self._path)
        with open(self._path, "rb") as f:
            p = pickle.load(f)

        self._full: pd.DataFrame = p["full"]
        self._subsampled: pd.DataFrame = p["subsampled"]
        self._misc: pd.DataFrame = p["misc"]

        feature_list = self._config.case.feature_list
        self._features_df = self._full[feature_list].copy()

        # normalize from -1 to 1
        features_to_normalize = self._features_df.columns
        self._features_df[features_to_normalize] = \
            self._features_df[features_to_normalize].apply(lambda x: (((x - x.min()) / (x.max() - x.min())) * 2) - 1)

        pass

    def __len__(self):
        # number of samples in the dataset
        return self.__len

    def __getitem__(self, index: int) -> Tuple[np.array, np.array]:
        data_item = self._features_df.iloc[index * self.frames_per_pd_sample: (index + 1) * self.frames_per_pd_sample, :].values
        label = 0.0
        # plt.figure()
        # plt.plot(data_item[:, 0], label="feature 0")
        # plt.legend(loc="upper left")
        # plt.show()
        return data_item, label

The number of time steps per sample (frames_per_pd_sample) does not seem to affect convergence.

Train test val split

is done like so:

import os
from pytorch_lightning import LightningDataModule
import torchvision.datasets as datasets
from torchvision.transforms import transforms
import torch
from torch.utils.data import DataLoader
from torch.utils.data import Subset
from Testing.Research.config.paths import mnist_data_download_folder
from Testing.Research.datasets.real_cases.CaseDataset import CaseDataset
from typing import Optional


class CaseDataModule(LightningDataModule):
    def __init__(self, config, path):
        super().__init__()
        self._config = config
        self._path = path

        self._train_dataset: Optional[Subset] = None
        self._val_dataset: Optional[Subset] = None
        self._test_dataset: Optional[Subset] = None

    def prepare_data(self):
        pass

    def setup(self, stage):
        # transform
        transform = transforms.Compose([transforms.ToTensor()])
        full_dataset = CaseDataset(self._path)

        train_size = int(self._config.train_size * len(full_dataset))
        val_size = int(self._config.val_size * len(full_dataset))
        test_size = len(full_dataset) - train_size - val_size
        train, val, test = torch.utils.data.random_split(full_dataset, [train_size, val_size, test_size])

        # assign to use in dataloaders
        self._full_dataset = full_dataset
        self._train_dataset = train
        self._val_dataset = val
        self._test_dataset = test

    def train_dataloader(self):
        return DataLoader(self._train_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)

    def val_dataloader(self):
        return DataLoader(self._val_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)

    def test_dataloader(self):
        return DataLoader(self._test_dataset, batch_size=self._config.batch_size, num_workers=self._config.num_workers)

Questions

  1. I believe having the validation loss consistently lower than the train loss shows something is very wrong here, but I can't put my finger on what, or come up with how to verify this.
  2. How can I just make the model auto-encode the data correctly? Basically, I would want it to learn the identity function, and for the loss to reflect that.
  3. The loss does not seem to reflect the reconstruction quality. I think this is probably the most fundamental issue.

My thoughts

  1. Try a convolutional net instead of FC? Maybe it would be able to learn features better.
  2. Out of ideas :(

I will provide any missing information.

",21645,,21645,,1/12/2021 15:32,1/12/2021 15:32,variational auto encoder loss goes down but does not reconstruct input. out of debugging ideas,,0,0,,,,CC BY-SA 4.0 25734,2,,25561,1/12/2021 15:32,,1,,"

I've asked this on rl-list and got lots of interesting references to research papers.

So far the most promising one I've seen is this:

Nicolas Anastassacos, Stephen Hailes and Mirco Musolesi. Partner Selection for the Emergence of Cooperation in Multi-Agent Systems using Reinforcement Learning. In AAAI 2020. New York City, NY, USA. February 2020.

",25904,,,,,1/12/2021 15:32,,,,1,,,,CC BY-SA 4.0 25735,1,,,1/12/2021 18:41,,1,34,"

I am trying to reproduce the paper Synthetic Petri Dish: A novel surrogate model for Rapid Architecture Search. In the paper, the authors try to reduce the architecture of an MLP model trained on MNIST (2 layers, 100 neurons) by initializing a motif network from it, that is, 2 layers with 1 neuron each, and extracting the sigmoid function. I have been searching a lot, but I have not found an answer to how one can extract an 'architectural motif' from a trained neural network.

",43764,,2444,,1/13/2021 17:03,1/13/2021 17:03,"How can an ""architectural motif"" be extracted from a trained MLP?",,0,3,,,,CC BY-SA 4.0 25736,1,,,1/12/2021 18:55,,2,56,"

This might be more of a question about nested function classes:

Consider $k$-class node classification in a graph with $n$ nodes and $d$-dimensional feature vectors.

I want to compare

Architecture I: the GCN model of Kipf & Welling with two graph convolutional layers: $$ \mathbf{Y}=\operatorname{softmax}\left(\mathbf{A} \xi\left(\mathbf{A X W}_{1}\right) \mathbf{W}_{2}\right) $$

where

  • $\mathbf{X}$ is $n \times d$ and $\mathbf{Y}$ is $n \times k$,
  • $\mathbf{A}$ is a fixed $n \times n$ graph diffusion matrix,
  • $\mathbf{W}_{1}, \mathbf{W}_{2}$ are learnable weight matrices of size $d \times d^{\prime}$ and $d^{\prime} \times 2,$ respectively, shared across all nodes, and
  • $\xi$ is a nonlinearity.

Architecture II: a single-layer graph neural network of the form: $$ \mathbf{Y}=\operatorname{softmax}\left(\mathbf{A}^{2} \mathbf{X W}\right) $$ where $\mathbf{W}$ is a learnable weight matrix of size $d \times 2$.


Now I'm wondering

  • Can $\xi, d^{\prime}$ be chosen in a way that both architectures have the same expressive power (i.e. can represent the same class of functions)?

  • Can $\xi, d^{\prime}$ be chosen in a way that Architecture II is more expressive?

  • What would be the advantage in training complexity of Architecture II when applied to large-scale graphs?

",43765,,2444,,12/19/2021 15:01,12/19/2021 15:01,"Given a 2-layer GCN, can we choose the dimensions of the 2nd weight matrix, such that this architecture has the same capacity as a 1-layer GCN?",,0,1,,,,CC BY-SA 4.0 25739,1,,,1/12/2021 22:29,,6,654,"

In general, what are the advantages of RL with actor-critic methods over actor-only (or policy-based) methods?

I am not asking for a comparison with the Q-learning family of methods, but rather about methods that learn the game with only the actor.

I think it's effective to use only actors, especially for sparse rewards. Is that correct?

Please, let me know if you have any specific use cases that use only actors.

",43246,,2444,,1/13/2021 10:29,1/13/2021 16:14,What are the advantages of RL with actor-critic methods over actor-only methods?,,1,1,,,,CC BY-SA 4.0 25740,2,,25739,1/12/2021 22:59,,5,,"

In general, what are the advantages of RL with actor-critic methods over actor-only (or policy-based) methods?

One practical benefit is that critics can use TD learning to bootstrap, allowing them to learn online on each step taken, plus learn in continuing problems. Pure actor algorithms like REINFORCE, cross-entropy method, and non-RL policy-only learners, such as genetic algorithms, require episodic problems. The smallest unit those can learn from is an entire episode. That is because without a critic providing value estimates, the only way to estimate return is to sample an actual return from the end of an episode.

A TD-based critic may also have lower variance, which can aid in fast learning and stability, although this is not always a benefit. TD-based critics also have bias, which can cause instability. See Why do temporal difference (TD) methods have lower variance than Monte Carlo methods? on Cross Validated for more details on this.

In practice, RL algorithm choice is a hyperparameter. As well as impacting difficulty of implementation, CPU and other resource costs, it can impact how well learning occurs depending on the problem being attempted. Usually, the only way you can tell a method is better for your problem is to try all the valid ones and measure their performance.

I think it's effective to use only actors, especially for sparse rewards. Is that correct?

The sparsity of reward is not a major factor here. The hard credit assignment problem that this brings into play means that the agent has to either assign a value, or pick an action, in states which have no direct feedback. All else being equal, both the value function and optimal action choice are hard to resolve when they depend on a large number of future policy decisions and state transitions which may vary.

Which approach is best will depend on whether it is easier for a statistics-based learner to approximate the mapping from state to expected future reward, or to the action. The two functions can have different degrees of complexity.

For an example where a policy function (for an actor) is simpler than a value function (for a critic or value-based method), you could consider a simple chase environment where a wolf tries to catch a rabbit on a simple continuous plane surface. The agent is the wolf, and gets a reward of +1 for catching the rabbit (I won't bother with other details, there are plenty of variations you could make).

In the example environment, a simple strategy for the wolf is to turn to face the rabbit and move forwards. This is easy to map from current locations and facings of the wolf and the rabbit - for ultimate ease you can express the state as the rabbit's position and velocity in polar coordinates from the wolf's perspective. Compare that to the value function - it has to predict the time it will take to reach the rabbit given a current action choice. This is a far harder function to express based on the state, and as a result may also be harder to learn.

",1847,,1847,,1/13/2021 16:14,1/13/2021 16:14,,,,3,,,,CC BY-SA 4.0 25741,1,,,1/13/2021 1:18,,2,140,"

Batch norm is a normalizing layer that is shown to help deep networks learn faster and with higher generalization accuracy. It normalizes the activations of the previous layer to a mean $\beta$ and variance $\gamma^2$ to prevent things like activations from exploding or shifting during the learning process.

More specifically: $$\hat{x} = \displaystyle \frac{x - \mu_t}{\sqrt{\sigma_t^2 + \epsilon}}\label{1}\tag{1}$$ $$ BatchNorm_{\mu_t, \sigma_t}(x) = \gamma \hat{x} + \beta \label{2}\tag{2}$$

where

  • $x$ is the input to the layer
  • $\mu_t, \sigma_t$ is the sample mean and standard deviation at time step $t$
  • $\epsilon$ is a small constant, and
  • $\gamma$ and $\beta$ are learnable parameters so that the output is not necessarily standardized to mean $0$ and variance $1$, but possibly to another mean and variance that may be better for the neural network.
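As a small numeric sketch of equations (1) and (2) for a single unit over a mini-batch (the values of eps, gamma and beta below are only illustrative; gamma and beta would normally be learned):

import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])                 # activations of one unit over the batch
eps, gamma, beta = 1e-5, 1.5, 0.2

x_hat = (x - x.mean()) / np.sqrt(x.var() + eps)    # step (1): standardize to mean 0, variance 1
out = gamma * x_hat + beta                         # step (2): rescale to mean beta, variance gamma^2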

My question is, why does BatchNorm first standardize the input $x$ to $\hat{x}$ before applying the learnable parameters $\gamma$ and $\beta$? Isn't this redundant? The parameters $\gamma$ and $\beta$ could learn to standardize the input themselves right?

In fact, as training progresses, $\mu_t$ and $\sigma_t$ are updated to new values $\mu_{t+1}$ and $\sigma_{t+1}$, so the learned parameters at that time step, $\gamma_t$ and $\beta_t$, no longer apply to time step $t+1$, since that involves a different standardization process with a different mean and variance. So, by adding this standardization step, it may even hurt the convergence of the layer during learning, since it is adding the gradient of $BatchNorm_{\mu_{t+1}, \sigma_{t+1}}(x)$ to $BatchNorm_{\mu_t, \sigma_t}(x)$, which are two different functions, right?

Why not just simply make it like this?

$$BatchNorm(x) = \gamma x + \beta \label{3}\tag{3}$$

This would simplify the calculation of the gradients, which would make learning faster to compute.

BatchNorm is one of the most successful developments of deep learning, so I know my intuition on these things is wrong -- I'm just curious as to what I am missing.

",42699,,2444,,1/13/2021 10:56,12/30/2022 3:06,"Why does batch norm standardize with sample mean/variance, when it also learns parameters to scale the mean/variance?",,1,1,,,,CC BY-SA 4.0 25742,2,,10419,1/13/2021 3:51,,1,,"

Not sure about which function to call in GPFlow, but the principle is simple. If your $Y$ data was scaled:

$$Y_{scaled} = {(Y - mean(Y)) \over {sd(Y)}}$$

So now, if you have any prediction (from the scaled model) $\hat{Y}_{scaled}$, with standard error $\sigma_{\hat {Y}_{scaled}}$, then it is a normal distribution ${\cal N}(\hat {Y}_{scaled}, \sigma^2_{\hat {Y}_{scaled}})$. Then you can very easily rescale it back to the original scale ${\cal N}(\hat {Y}, \sigma^2_{\hat {Y}})$, where:

$$\hat {Y} = \hat {Y}_{scaled}\cdot sd(Y) + mean(Y)$$ $$\sigma_{\hat {Y}} = \sigma_{\hat {Y}_{scaled}} \cdot sd(Y)$$
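A minimal numeric sketch of this rescaling (plain NumPy, not GPflow-specific; the predictive mean/variance values are just placeholders):

import numpy as np

train_Y = np.array([2.0, 3.5, 5.0, 4.2])        # original (unscaled) training targets
y_mean, y_std = train_Y.mean(), train_Y.std()   # the statistics used to scale Y before training

mu_scaled, var_scaled = 0.3, 0.04               # illustrative predictive mean/variance in the scaled space
mu_orig = mu_scaled * y_std + y_mean            # predictive mean on the original scale
sd_orig = np.sqrt(var_scaled) * y_std           # predictive standard deviation on the original scale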

",30285,,,,,1/13/2021 3:51,,,,0,,,,CC BY-SA 4.0 25743,2,,22900,1/13/2021 4:01,,7,,"

Rather than the survey by Liu et al. recommended above, I'd suggest you read the following survey paper for an overview of MORL (disclaimer - I was a co-author on this, but I genuinely think it is a much more useful introduction to this area)

Roijers, D. M., Vamplew, P., Whiteson, S., & Dazeley, R. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48, 67-113.

Liu et al's survey, in my opinion, doesn't do much more than list and briefly describe the MORL algorithms which existed at that point. There's no deeper analysis of the field. The original version of their paper was also retracted due to blatant plagiarism of several other authors, including myself as can be confirmed here.

Our survey provides arguments for the need for multiobjective methods by describing 3 scenarios where agents using single-objective RL may be unable to provide a satisfactory solution that matches the needs of the user. Briefly, these are

  1. the unknown weights scenario where the required trade-off between the objectives isn't known in advance, and so to be effective the agent must learn multiple policies corresponding to different trade-offs and then at run-time select the one which matches the current preferences (e.g. this can arise when the objectives correspond to different costs which vary in relative price over time);

  2. the decision support scenario where scalarization of a reward vector is not viable (for example, in the case of subjective preferences, which defy explicit quantification), so the agent needs to learn a set of policies, and then present these to a user who will select their preferred option, and

  3. the known weights scenario where the desired trade-off between objectives is known, but its nature is such that the returns are non-additive (i.e. if the user's utility function is non-linear), and therefore standard single-objective methods based on the Bellman equation can't be directly applied.

We propose a taxonomy of MORL problems in terms of the number of policies they require (single or multi-policy), the form of utility/scalarization function supported (linear or non-linear), and whether deterministic or stochastic policies are allowed, and relate this to the nature of the set of solutions which the MO algorithm needs to output. This taxonomy is then used to categorize existing MO planning and MORL methods.

One final important contribution is identifying the distinction between maximising Expected Scalarised Return (ESR) or Scalarised Expected Return (SER). The former is appropriate in cases where we are concerned about the results within each individual episode (for example, when treating a patient - that patient will only care about their own individual experience), while SER is appropriate if we care about the average return over multiple episodes. This has turned out to be a much more important issue than I anticipated at the time of the survey, and Diederik Roijers and his colleagues have examined it more closely since then (e.g., Multi-objective Reinforcement Learning for the Expected Utility of the Return)

",36575,,32410,,9/8/2021 14:04,9/8/2021 14:04,,,,1,,,,CC BY-SA 4.0 25747,1,,,1/13/2021 11:17,,2,527,"

Context:

My team and I are working on a RL problem for a specific application. We have data collected from user interactions (states, actions, rewards, etc.).

It is too costly for us to emulate agents. We decided therefore to concentrate on Offline RL techniques. For this, we are currently using the RL-Coach library by Intel, which offers support for Batch/Offline RL. More specifically, to evaluate policies in offline settings, we train a DDQN-BCQ model and evaluate the learned policies using Offline Policy Estimators (OPEs).

Problem:

In an Online RL setting, the decision of when to stop the training of an agent generally depends on the goal one wants to achieve (as described in this post: https://stats.stackexchange.com/questions/322933/q-learning-when-to-stop-training). If the goal is to train until convergence (of rewards) but no longer, then you could for example stop when the standard deviation of your rewards over the last n steps drops under some threshold. If the goal is to compare the performance of two algorithms, then you should simply compare the two using the same number of training steps.

However, in the Offline RL setting, I believe the conditions to stop training are not so clear. As stated above, no environment is directly available to evaluate our agents, and the evaluation of the quality of the learned policy relies almost solely on OPEs, which are not always accurate.

For me, I believe that there are two different options that would make sense. I am unsure if both those options are actually equivalent though.

  1. The first option would be to stop training when the Q-values have converged/reached a plateau (i.e. when the Q-value network loss has converged) -- if they ever do, as we don't really have any guarantee of this happening with artificial neural networks. If the Q-values do reach a plateau, this would mean that our agent has reached some local optimum (or in the best case, the global optimum).
  2. The second option would be to only look at the OPEs reward estimation, and stop when they reach a plateau. However, different OPEs do not necessarily reach a plateau at the same time, as it can be seen in the figure below. In the Batch-RL tutorial of RL-Coach, it seems that they would simply select the agent at the epoch where the different OPEs give the highest policy value estimation, without checking that the loss of the network had converged or not (but this is only a tutorial, so I suppose we can't rely too much on it).

Questions:

  • What would be the best criteria for choosing when to stop the training of an agent in an Offline-RL setting?
  • Also, the performance of an agent often heavily depends on the seed used for training. To evaluate the general performance, I believe you have to run multiple training runs with different seeds? However, in the end, you still want only a single agent to deploy. Should you simply select the one having the highest OPE values among all the runs?

P.S. I am not sure if this question should be split into two different posts, so please let me know if this is the case and I will edit the post!

",43780,,43780,,1/18/2021 14:40,9/22/2021 10:52,Offline/Batch Reinforcement Learning: when to stop training and what agent to select,,1,1,,,,CC BY-SA 4.0 25748,2,,22416,1/13/2021 12:00,,2,,"

I agree with Tomasz that the approach you are describing falls within the field of MORL. For a solid introduction MORL I would recommend the survey by Roijers, D. M., Vamplew, P., Whiteson, S., & Dazeley, R. (2013). A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research, 48, 67-113.

https://www.jair.org/index.php/jair/article/view/10836 (disclaimer: I'm an author in this, but I genuinely believe it will be useful to you).

Our survey provides arguments for the need for multiobjective methods by describing three scenarios where agents using single-objective RL may be unable to provide a satisfactory solution which matches the needs of the user. Briefly, these are (a) the unknown weights scenario where the required trade-off between the objectives isn't known in advance, and so to be effective the agent must learn multiple policies corresponding to different trade-offs and then at run-time select the one which matches the current preferences (e.g. this can arise when the objectives correspond to different costs which vary in relative price over time); (b) the decision support scenario where scalarization of a reward vector is not viable (for example, in the case of subjective preferences which defy explicit quantification), so the agent needs to learn a set of policies, and then present these to a user who will select their preferred option, and (c) the known weights scenario where the desired trade-off between objectives is known but its nature is such that the returns are non-additive (i.e. if the user's utility function is non-linear) and therefore standard single-objective methods based on the Bellman equation can't be directly applied.

We propose a taxonomy of MORL problems in terms of the number of policies they require (single or multi-policy), the form of utility/scalarization function supported (linear or non-linear), and whether deterministic or stochastic policies are allowed, and relate this to the nature of the set of solutions which the MO algorithm needs to output. This taxonomy is then used to categorise existing MO planning and MORL methods.

One final important contribution is identifying the distinction between maximising Expected Scalarised Return (ESR) or Scalarised Expected Return (SER). The former is appropriate in cases where we are concerned about the results within each individual episode (for example, when treating a patient - that patient will only care about their own individual experience), while SER is appropriate if we care about the average return over multiple episodes. This has turned out to be a much more important issue than I anticipated at the time of the survey, and Diederik Roijers and his colleagues have examined it more closely since then (eg http://roijers.info/pub/esr_paper.pdf)

",36575,,2444,,1/13/2021 13:19,1/13/2021 13:19,,,,1,,,,CC BY-SA 4.0 25750,1,,,1/13/2021 13:34,,1,278,"

Let's say I have three texts:

  1. "make a heading that says hello word"
  2. "make a heading of hello world"
  3. "create heading consist of hello world"

How can I use AI to fetch the group of words that the heading refers to, i.e. "hello world" in this case? Which AI frameworks or libraries can do that?

In all the examples, the heading points to "hello world" (which I am referring to as a group of words). So, basically, I want the words that will be part of the heading; in other words, there is a relationship between them. Another example I can give is "I am watching Breaking Bad": there is a relationship between "watching" and "Breaking Bad", and I want to extract what you are watching.

What's the best approach? Do I have to train a model for that, or are there other techniques that can get it done?

",43784,,43784,,1/13/2021 16:43,11/12/2022 22:07,How to extract parameters from a text using AI/NLP,,2,6,,,,CC BY-SA 4.0 25752,1,25753,,1/13/2021 16:36,,2,90,"

I am attempting to fully understand the explicit derivation and computation of the Hessian and how it is used in MAML. I came across this blog: https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html.

Specifically, could someone help to clarify this for me: is this term in the red box literally interpreted as the gradient at $\theta_{k-1}$ multiplied by the $\theta_k$?

",43791,,2444,,1/14/2021 16:36,1/14/2021 16:36,What is $ \nabla_{\theta_{k-1}} \theta_{k}$ in the context of MAML?,,1,1,,,,CC BY-SA 4.0 25753,2,,25752,1/13/2021 20:36,,0,,"

$\nabla_{\theta_{k-1}} \theta_k$ is the gradient (more precisely, the Jacobian) of $\theta_k$ with respect to $\theta_{k-1}$; it follows from the chain rule, as noted in the side comment in the image. $\nabla_{\theta} \mathcal L(\theta_k)$ is also not a Hessian but a gradient vector.
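As a concrete illustration (assuming the inner update is a single plain gradient-descent step, as in vanilla MAML), if $\theta_k = \theta_{k-1} - \alpha \nabla_{\theta_{k-1}} \mathcal L(\theta_{k-1})$, then $$\nabla_{\theta_{k-1}} \theta_k = I - \alpha \nabla^2_{\theta_{k-1}} \mathcal L(\theta_{k-1}),$$ so the Hessian only enters through this Jacobian factor of the chain rule, while the outer term $\nabla_{\theta} \mathcal L(\theta_k)$ remains an ordinary gradient vector.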

",20339,,,,,1/13/2021 20:36,,,,0,,,,CC BY-SA 4.0 25754,1,,,1/13/2021 21:10,,1,47,"

A monotonically increasing function is a function whose output never decreases as $x$ gets bigger. So, if plotted, it will never go down, although the output might stay constant.

Logically this seems like an easier function to learn when compared to something that can, when plotted, go up or down.

Wikipedia has some example diagrams on monotonic functions.

If I were to say that it is easier for a neural network to learn a monotonic function compared to a non-monotonic function, would the statement be correct? If so, is there any reason for it other than 'it only goes one way'?

",32265,,,,,1/13/2021 21:10,Are monotonically increasing functions easier to learn?,,0,1,,,,CC BY-SA 4.0 25759,1,25766,,1/14/2021 4:13,,3,61,"

During the course of training a DQN agent, all visited states are stored in a replay buffer. Therefore would it be practically possible for a CNN, given a reasonable amount of data, to predict the next RL state (in the form of an image) for a given action?

For example, take the Atari game shown below. Here the agent can take 2 major actions: go left and go right. Would a CNN be able to generate the image of the bar going right/left for the respective actions? My practical knowledge of CNNs is quite limited, and therefore I'm trying to gauge the abilities of CNNs before I take up a project.

",31755,,,,,1/14/2021 11:33,Can a convolutional network predict states for a RL Agent,,1,0,,,,CC BY-SA 4.0 25760,2,,16348,1/14/2021 5:21,,3,,"

This is actually a highly technical term, which has been kind of misused and overgeneralized in many places.

What does 'convergence' mean in a literal sense? It simply means that a sequence of terms indexed by $\mathbb{N}$ ($X_1, X_2, X_3, \ldots$) tends to a certain fixed value, say $X$, as the index $N \rightarrow \infty$, but may not achieve the fixed value. (There are a few technical details associated with this definition, but I won't go into them, as they require some analysis.)

When it comes to ML we are looking at probabilistic or stochastic models. When we talk about convergence in ML, we generally mean 4 types of convergence:

  • Convergence in Probability: This means that, as $N \rightarrow \infty$, the likelihood of $X_N$ (a sequence of random variables) being very close to $X$ also increases, i.e. $P(\omega:[|X_N(\omega)-X(\omega)| >\epsilon]) \rightarrow 0$ as $N\rightarrow \infty$. This type of convergence is mostly used in Statistical Learning Theory.

  • Almost Sure Convergence: This means that, as $N \rightarrow \infty$, the probability of $X_N$ (a sequence of random variables) being very close to $X$ is $1$ (NOTE: here there is no likelihood of being close to $X$; we straight up say it must be close to $X$), i.e. $P(\omega:[|X_N(\omega)-X(\omega)| >\epsilon]) = 0 $ as $N\rightarrow \infty$. This is a stronger version of the previous convergence, and this is the type of convergence I have seen being used in RL.

  • Convergence in Distribution: This means that the distribution of a sequence of random variables tends to a certain distribution, i.e. $$\lim_{N\to \infty} F_{X_N} = F_X.$$

  • Convergence in $r$'th moment: This means that a sequence of random variables will converge to a certain value in the $r$-th moment as the sequence goes to infinity or, simply put, $$\lim_{N\to \infty} \mathbb E[|X_N - \mu|^r] = 0,$$ where $\mu$ is the value to which the random variables converge in the $r$-th moment.

A simple useful reference for all the aforementioned modes of convergence.

As a side note, this is meant as an informal reference, there is a lot of mathematical analysis involved in getting conditions for when these hold for a sequence of random variables.

In the context of ML, one can think of $L(w_N, y_N, x_N)$ in place of $X_N$, where $w_N, x_N, y_N$ are the quantities at the $N$-th step, and one can check whether it satisfies any of the aforementioned modes of convergence using some sufficient conditions. Note that, when we do convex optimization, we are talking about almost sure convergence (if the method used works), while, for SGD, due to stochasticity, one might formulate it in the convergence-in-probability setting.

As a concrete example, the PAC learning paradigm uses the convergence in probability framework (without going into details the idea of PAC learning is that with increasing size of dataset your confidence about your classifier increases which can be interpreted as some sort of convergence in probability with the actual loss as the random variable, check the PAC learning framework here), while the Q-learning convergence (proof as suggested in the comments) is an almost sure convergence under some assumptions (probably proved by Bertsekas and Tsitsiklis), CLT is an example of convergence in distribution.

",,user9947,,user9947,2/13/2021 10:14,2/13/2021 10:14,,,,2,,,1/23/2022 10:58,CC BY-SA 4.0 25761,1,,,1/14/2021 5:36,,2,55,"

I have a combined network consisting of two parts: one is for images and the other is for numerical data. Each sample is matched with a numerical case by an ID. For this combined network, a learning rate of 0.01 was found to work best via hyperparameter tuning.

However, when I trained them as separate tasks (binary classification), a learning rate of 0.001 for images and 0.01 for numerical data were best. In terms of the AUC metric, the combined network (0.818) performs roughly at the average of the image and numerical networks (0.799 and 0.821, respectively). So I thought maybe the combined network's learning rate is too high for the image part and that a lower learning rate should be applied to that part, but I don't know if that is possible.

If anyone has any idea, please let me know.

",31870,,,,,1/14/2021 11:55,Is it possible to train one part of the network with a particular learning rate and the other part with a different one?,,1,2,,,,CC BY-SA 4.0 25762,2,,15624,1/14/2021 6:01,,0,,"

The following is my intuition about the behavior of the algorithm considering your question, based on my knowledge and experience with single BMU SOMs. I didn't verify it experimentally.

At early stages of training: It should disturb the SOM topology preserving properties, as you're assigning the same pattern to different (and probably distant) locations in the lattice, making the distribution multimodal. This is not terrible on its own, as different positions could encode different relationships to neighbors, overcoming the limitations of a single 2D/3D neighborhood. But if your SOM is not large enough, the multiple modes may collapse, resulting in unimodal distributions covering large portions of the map, wasting space. It all depends on your training schedule.

At later stages of training: If you started with a single BMU at early stages, it wouldn't provide any meaningful impact, as a SOM, by definition, clusters similar patterns at close positions of its lattice. In other words, the neurons closest to your input are already inside the BMU update radius. If you trained with multiple BMUs since the beginning, you would keep finding / activating the multiple maxima of your multimodal distributions for each pattern. The dimensionality reduction would become even more nonlinear than it already is with single BMUs. This may or may not be a problem, depending on your application.

",144,,,,,1/14/2021 6:01,,,,0,,,,CC BY-SA 4.0 25766,2,,25759,1/14/2021 11:33,,2,,"

It sounds like what you're suggesting is similar to what is done in methods that use a planner. These methods look to learn the dynamics of the MDP to use for planning during training; that is, they want to be able to learn the transition probabilities $p(s'| s, a)$.

In this paper that I read recently, they note that learning to predict environment dynamics is difficult when the state/action space is high dimensional, as is the case with images; so, whilst it may be possible in theory, it would be difficult to do, and if you were predicting many steps into the future the error would compound.

A way around this, as is done in the referenced paper, is to use predict environment dynamics in a latent space. This means that they use a latent variable to predict the next state using e.g. Variational Autoencoders.

",36821,,,,,1/14/2021 11:33,,,,0,,,,CC BY-SA 4.0 25767,2,,25761,1/14/2021 11:55,,1,,"

Create two different optimizers, splitting the sub-networks' parameters between them and giving each a different learning rate. You will have to call optimizer1.step() and optimizer2.step() after a single backward() call.
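A minimal sketch of this idea (the architecture and learning rates below are placeholders, not the asker's actual model):

import torch
import torch.nn as nn

# Toy combined network: an "image" branch and a "numerical" branch feeding a shared head.
class Combined(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_branch = nn.Linear(64, 8)    # stand-in for the image sub-network
        self.numeric_branch = nn.Linear(10, 8)  # stand-in for the numerical sub-network
        self.head = nn.Linear(16, 1)

    def forward(self, x_img, x_num):
        h = torch.cat([self.image_branch(x_img), self.numeric_branch(x_num)], dim=1)
        return self.head(h)

model = Combined()
criterion = nn.BCEWithLogitsLoss()

# one optimizer per sub-network, each with its own learning rate
opt_img = torch.optim.Adam(list(model.image_branch.parameters()) + list(model.head.parameters()), lr=1e-3)
opt_num = torch.optim.Adam(model.numeric_branch.parameters(), lr=1e-2)

x_img, x_num, y = torch.randn(4, 64), torch.randn(4, 10), torch.randint(0, 2, (4, 1)).float()
loss = criterion(model(x_img, x_num), y)
opt_img.zero_grad()
opt_num.zero_grad()
loss.backward()   # a single backward() call fills the gradients of both branches
opt_img.step()
opt_num.step()

Alternatively, a single optimizer with two parameter groups, each group carrying its own lr, achieves the same effect.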

",21738,,,,,1/14/2021 11:55,,,,0,,,,CC BY-SA 4.0 25768,1,25774,,1/14/2021 13:51,,0,159,"

I have an idea to adapt the YOLO algorithm to my application. The original YOLO algorithm is for image classification; it has 24 convolutional layers with 1000 output classes. Is it possible to replace the basic network of YOLO with AlexNet, ResNet, or a custom network structure designed by myself? Note that my application has an input shape of 500 * 10000 * 1 and only 4 classes for classification.

",43822,,,,,1/14/2021 19:52,Is it possible to modify or replace the basic network of YOLO?,,1,0,,,,CC BY-SA 4.0 25769,1,,,1/14/2021 14:39,,1,84,"

Suppose I have a set of messages A,B,C,D and I want to produce the best message for a website user at a given time.

For training, I plan to show random users a random single message [A/B/C/D] and fill these columns (I'm simplifying the data for illustration):

  • converted before
  • funnel state (e.g awareness, search, decision)
  • number of page views
  • message shown [A-D]
  • Time to convert (this will be updated later if there is a conversion)

I want to predict the best message to show to a specific user in order to maximise the chance of conversion (i.e. minimum time to convert).

I'm not sure how to represent this for training and inference. It's not a simple prediction, like predicting one of the given data points.

One option is to predict the time to buy for each of the messages, but (1) it's not efficient, and (2) it will prefer messages that are shown closer to purchase time, regardless of whether they fit the user's current stage.

",43826,,,,,10/16/2021 3:09,How to predict the best from a set of messages - best practice,,1,0,,,,CC BY-SA 4.0 25770,1,,,1/14/2021 14:54,,1,82,"

Consider a POMDP with a finite number of environment states, $|\mathcal{S}| = N$, but the number of belief states is uncountably infinite. The belief state space is the convex hull of an $N$ simplex. Each turn this space is sampled with a flat probability distribution. As you are sampling from an uncountably infinite set of belief states, the probability of a belief state recurring in a finite number of samples is zero.

Now, let's suppose that there are a finite number of episodes, and each episode ends after one time step. At the only time step of the episode, the agent receives a belief state $b(s)$ over some fixed set of contexts $s$, selects a single action, and receives a single reward, before the episode ends.

I understand that the belief state value function, $V(b)$, is piecewise linear and convex, with a single hyperplane for each action (see e.g. [1]).

My question is: given that I only observe the belief states and the sampled rewards, how do I identify the value function, given that a belief state $b(s)$ has an infinitesimally small probability of occurring again?

The expected reward for a given belief state $b$ is just a linear function $\alpha \cdot b$, where $\alpha$ is the vector of the rewards for each state of the environment. But I cannot simply learn a linear model here because $\alpha \cdot b$ gives me the expected reward for a given belief state, but I may never start with this belief state again and so cannot simply calculate the sample mean expected reward.

",43828,,2444,,1/24/2021 14:09,1/24/2021 14:09,How do I learn the value function for a POMDP with a single-step horizon (bandit)?,,0,0,,,,CC BY-SA 4.0 25771,1,,,1/14/2021 18:22,,3,128,"

I'm new to deep learning. I wanted to know: do we use pre-processing in deep learning, or is it only used in (traditional) machine learning? I searched for it and its methods on the internet, but I didn't find a suitable answer.

",30164,,2444,,1/14/2021 18:32,1/14/2021 19:46,Is pre-processing used in deep learning?,,2,0,,,,CC BY-SA 4.0 25772,2,,25771,1/14/2021 18:41,,1,,"

Yes, sure, data pre-processing is also done in deep learning. For example, we often normalize (or scale) the inputs to neural networks. If the inputs are images, we often resize them so that they all have the same dimensions. Of course, the pre-processing step that you apply depends on your data, neural network, and task.

Here or here are two examples of implementations that perform a pre-processing step (normalization in the second case). You can find more explanations and examples here and probably here too.
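For instance, a typical image pre-processing pipeline might look like the following sketch (the sizes and normalization statistics below are just common ImageNet-style values, not something your task necessarily requires):

from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                    # make all images the same size
    transforms.ToTensor(),                            # convert to a float tensor in [0, 1]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # shift/scale each channel to roughly
                         std=[0.229, 0.224, 0.225]),  # zero mean and unit variance
])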

",2444,,2444,,1/14/2021 18:56,1/14/2021 18:56,,,,0,,,,CC BY-SA 4.0 25773,2,,25771,1/14/2021 19:46,,0,,"

Adding to nbro's solution, the less you have to normalize/balance/preprocess/augment, the better, e.g. because then you know for sure that the accuracy is the achievement of the model rather than of the data preparation. For example, suppose you can achieve the same accuracy using two approaches (e.g. with an image dataset):

  1. for each image, subtract global mean, divide by global standard deviation
  2. the same as 1), plus random flips, random crops, color jitter, etc.,

then 1), if you can achieve a comparable accuracy, is the better solution, as the model is more general. The same applies to the balancing of the data: if you can train a good model without it, it's an additional strength.

",21738,,,,,1/14/2021 19:46,,,,3,,,,CC BY-SA 4.0 25774,2,,25768,1/14/2021 19:52,,0,,"

No, the original (or any) YOLO is for object detection. You can easily replace the feature extractor (DarkNet53, if I'm not mistaken) with any other, as long as you maintain the correct number of weights in the detection layer.

",21738,,,,,1/14/2021 19:52,,,,2,,,,CC BY-SA 4.0 25775,1,,,1/14/2021 21:23,,3,1260,"

Say I have a machine learning model trained on a laptop and I then want to embed/deploy the model on a microcontroller. How can I do this?

I know that TensorFlow Lite Micro generates a C header to be added to the project and then embedded, but every example I've read shows how this is done with neural networks, which seems legit, as TensorFlow is primarily used for deep learning.

But how can I do the same with any type of model, like the ones in scikit-learn? So, I'm not necessarily interested in doing this with TensorFlow Lite Micro; I'm interested in the general approach to solving this problem.

",43239,,43239,,1/18/2021 12:37,5/23/2021 13:47,How to embed/deploy an arbitrary machine learning model on microcontrollers?,,4,1,,,,CC BY-SA 4.0 25776,2,,9662,1/14/2021 21:52,,0,,"

One very popular RL algorithm that is capable of predicting multiple action outputs concurrently is Proximal Policy Optimization (PPO). In that algorithm, one or more, say $n$, tuples of outputs, $(\mu, \sigma)$, can be predicted at once (using $2n$ output nodes), where each tuple is used to parameterize a Gaussian distribution from which a respective action value is sampled. The sampled action values are then applied in the simulation/game. By slightly modifying this procedure, this can, of course, also be applied to discrete action spaces equally well.
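A minimal sketch of such a Gaussian policy head in PyTorch (the feature size and the choice of parameterizing $\sigma$ via a log-standard-deviation are my assumptions, not a specific library's implementation):

import torch
import torch.nn as nn

n_actions = 3
policy_head = nn.Linear(128, 2 * n_actions)     # 2*n outputs: n means and n log-standard-deviations
features = torch.randn(1, 128)                  # output of some shared feature extractor

mu, log_sigma = policy_head(features).chunk(2, dim=-1)
dist = torch.distributions.Normal(mu, log_sigma.exp())
action = dist.sample()                          # one sampled value per action dimension
log_prob = dist.log_prob(action).sum(-1)        # used later in the PPO objective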

To get you started, multiple high-quality implementations of PPO are available for rapid prototyping, e.g. OpenAI baselines or stable-baselines.

",37982,,,,,1/14/2021 21:52,,,,0,,,,CC BY-SA 4.0 25777,1,,,1/14/2021 22:59,,1,12,"

I have a data set of 3D images with some bounding box annotations. The images are too large to train something like 3D YOLO (it would run out of memory), so I instead created slices of the 3D images with corresponding 2D bounding boxes and trained a 2D object detector. During inference, I assemble the 2D detections into 3D detections. I constructed some simple heuristics to do that, which work fine, but I am wondering whether there aren't any established methods of doing this.

I would appreciate it if you could point me in the right direction.

",43712,,,,,1/14/2021 22:59,Aggregating 2D object detections into 3D object detections,,0,0,,,,CC BY-SA 4.0 25781,1,,,1/15/2021 13:22,,1,84,"

I just started working on a DRL project from scratch. The state of each episode can be expressed as a state set $S=(S^A, S^B, S^C, S^D)$. Each subset is a feature set of a constituent component of the environment, say, $S^A=(a_1, a_2, a_3)$. To model components, I decided to create four pythonic classes with attributes as features. For example, class A is like:

class A:
    def __init__(self, a1, a2, a3): 
        self.a1 = a1
        self.a2 = a2
        self.a3 = a3

Each class has some methods that help in the interaction with other components (classes) and is used in the environment's step function to generate actions.

I am going to create one instance of class A, 10 instances of class B, 20 instances of class C, and a random number of between 1-10 instances of class D at the beginning of each episode. So, my observation includes 33-42 states of entities.

As far as I know, the observation space is usually encoded in n-dimensional arrays as it is in OpenAI Gym. Is it possible, feasible, or considered good practice to store instances as a sub-state of the whole observation space? In my case, it would be like storing 33-42 instances in an array (list) of 33-42 elements.

Thanks for your time and suggestions!

",43578,,43578,,1/18/2021 16:55,1/18/2021 16:55,Is object-based representation of the observation space feasible?,,0,0,,,,CC BY-SA 4.0 25782,1,,,1/15/2021 15:10,,0,70,"

I am a (soon-to-become, to be honest) theoretical physicist. I want to learn a bit about AI. As you know, in physics we develop theories based on as few and as simple basic equations as possible, which should explain as many of the experimental results and observations as possible. I feel that this is kind of not how AI solves problems.

My understanding is that AI can be understood as a very generalized and abstract statistics software package handling input data in a general way to find the "best fit" to some form of problem. Is that correct? I know it isn't. But is it vaguely correct?

I give you an example. In weather prediction there is a technique called MOS (model output statistics). It collects output from numerical weather prediction models (simulation software) as well as observational data and finds statistical relations between them to correct the model output for errors. For example, it might be that the intensity of precipitation in London is on average underestimated by the model by 10 %, so MOS will correct for that. Over time, it improves itself, because it collects more and more data. Is this already a form of AI?

",43857,,2444,,1/16/2021 17:02,1/16/2021 17:02,Can AI be understood as a generalized statistics tool?,,1,0,,1/16/2021 16:51,,CC BY-SA 4.0 25785,2,,25782,1/15/2021 19:36,,1,,"

My understanding is that AI can be understood as a very generalized and abstract statistics software package handling input data in a general way to find the "best fit" to some form of problem. Is that correct? I know it isn't. But is it vaguely correct?

No. It's not correct, in my opinion, not even vaguely and in many ways.

  • AI is not (necessarily) abstract (although there are examples of theoretical frameworks of AI, such as AIXI, which may be of interest to you, given that you are about to become a theoretical physicist)
  • AI does not necessarily generalize statistics or statistical concepts
  • AI is not just machine learning or, more precisely, supervised learning. You're very likely referring to supervised learning when you say "best fit", but there are other forms of learning (such as reinforcement learning) and there are other aspects of AI apart from learning, such as perception, control, etc. Some people sometimes claim that machine learning is glorified statistics, but, although the two are similar and related, I don't think that's exactly correct.

To know what AI is, you should read the book Artificial Intelligence: A Modern Approach. If you want to know what machine learning is, then there are many books that you can read, such as Machine Learning (1997) by Tom M. Mitchell. If you are interested in the relationship between machine learning and statistics, you may be interested in this post. You should also read this answer, which describes what AI is or may refer to. This answer gives a nice brief description of the difference between AI and ML.

",2444,,2444,,1/15/2021 19:57,1/15/2021 19:57,,,,0,,,,CC BY-SA 4.0 25786,2,,3419,1/16/2021 3:36,,0,,"

For the definition and calculation of perplexity, please refer to this answer.

Google proposed a human evaluation metric called Sensibleness and Specificity Average (SSA) which combines two fundamental aspects of a humanlike chatbot: making sense and being specific. And they conducted some experiments and found that perplexity aligns very well with the SSA.

Here is the explanation in the paper:

Perplexity measures how well the model predicts the test set data; in other words, how accurately it anticipates what people will say next.
Our results indicate most of the variance in the human metrics can be explained by the test perplexity.

Their experiments showed a very strong correlation between SSA and perplexity (the lower the perplexity, the higher the SSA).

References:

  1. Towards a Human-like Open-Domain Chatbot
",5351,,,,,1/16/2021 3:36,,,,0,,,,CC BY-SA 4.0 25788,2,,2922,1/16/2021 7:13,,1,,"

It seems that Automated Knowledge Base Construction has fallen out of favor.

As Matt Gardner noted in NLP Highlights in 2019:

Um, but I know that Google, for instance, canceled their knowledge base construction project because there wasn’t high enough precision to actually be useful in their product.

The canceled project Knowledge Vault is an Automated Knowledge Base Construction(AKBC) project launched in August 2014.

There are three methods to integrate knowledge into neural networks: 1) pre-trained models, like BERT and ELECTRA; 2) retrieval-augmented generative models; 3) fleshing out the triples into natural text, as in KELM.

In the 2020 paper REALM: Integrating Retrieval into Language Representation Models, they utilized a retriever rather than a knowledge base to enrich neural networks. And the best systems in the NeurIPS 2020 EfficientQA competition all relied on retrieval.

Knowledge bases that are actively being maintained receive a lot of annotation and curation, as stated in that podcast. If curation and annotation are not sufficient, the knowledge base may not be usable in AI applications.

",5351,,5351,,5/22/2021 7:11,5/22/2021 7:11,,,,0,,,,CC BY-SA 4.0 25789,2,,9358,1/16/2021 7:49,,1,,"

Human evaluation is the gold standard as stated in this podcast by Asli Celikyilmaz, even if you only test a very small part of the generated text.

If you need an automated method, BLEURT by Google would be helpful. It's a flexible, semantic-level metric/model trained in a multi-stage way: 1) masked language model pre-training, like BERT; 2) pre-training on synthetic sentence pairs; 3) fine-tuning on public human ratings (the WMT metrics shared task); 4) fine-tuning on application-specific human ratings. And it works well even in the presence of domain drift.

",5351,,,,,1/16/2021 7:49,,,,0,,,,CC BY-SA 4.0 25790,1,,,1/16/2021 8:02,,1,34,"

I wrote two programs that simulated 10000 episodes in gym environment CartPole-v0.

The first program takes random actions at every step of each episode. The average reward over 10000 episodes is 22.1582.

The second program uses a random policy in each episode. For each episode, I initialize a 4 by 2 matrix $M$ with random numbers from a uniform distribution on $[0,1)$ that maps state observations to action values, and then choose the action with the higher value at each step. The average reward over 10000 episodes is 46.8291.
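For reference, here is a minimal sketch of the second program as described above (assuming the classic gym API, where step returns obs, reward, done, info):

import gym
import numpy as np

env = gym.make("CartPole-v0")
returns = []
for _ in range(10000):
    M = np.random.rand(4, 2)              # one random linear policy per episode
    obs, total, done = env.reset(), 0.0, False
    while not done:
        action = int(np.argmax(obs @ M))  # choose the action with the higher value
        obs, reward, done, _ = env.step(action)
        total += reward
    returns.append(total)
print(np.mean(returns))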

The linear mapping given by $M$ covers only a portion of the search space, so it seems the way actions are selected in the first program is more "random" than in the second program. How can we explain the huge discrepancy in the average rewards obtained by the two methods?

",43579,,,,,1/16/2021 8:02,Difference in average rewards between taking random actions and following random policies,,0,4,,,,CC BY-SA 4.0 25793,1,,,1/16/2021 8:52,,2,52,"

I noticed that there have been many studies in recent years on how to train/update neural networks faster with equal or better performance. I found the following methods (leaving aside the chip arms race):

  1. using few-shot learning, for instance, pre-training, etc.
  2. using the minimum viable dataset, for instance, using (guided) progressive sampling.
  3. model compression, for instance, efficient transformers
  4. data echoing, i.e. letting the data pass multiple times through the graph (or GPU)

Is there a systematic structure to this topic, and how can we update or train a model faster without a loss of capacity?

",5351,,5351,,1/23/2021 9:59,1/23/2021 9:59,How to train/update neural networks faster without a decrease in performance?,,0,6,,,,CC BY-SA 4.0 25795,1,,,1/16/2021 10:48,,1,445,"

I've seen plenty of examples of people doing Sigmoid + MSE backpropagation implementations, yet I do not seem to understand how to implement backpropagation as stated in the title in the case of multi-class classifications.

What confuses me mainly are the matrix-vector shapes and their multiplications, and their implementations in code.

",43270,,,,,1/22/2021 0:06,How to compute the gradient of the cross-entropy loss function with respect to the parameters with softmax activation function?,,1,3,,,,CC BY-SA 4.0 25796,1,,,1/16/2021 11:46,,1,558,"

What happens if I don't apply an activation function to some layers in a neural network? How will it affect the model?

Take for instance the following code snippet:

def model(x):
    a = Conv2D(64, (3, 3))(x)                         
    x = Conv2D(64, (3, 3), activation = 'relu')(x)
    b = Conv2D(128, (3, 3))(x)
    x = Conv2D(128, (3, 3), activation = 'relu')(b)
    return x, a, b
",43870,,2444,,1/16/2021 22:03,1/16/2021 22:12,What happens if there is no activation function in some layers of a neural network?,,1,0,0,,,CC BY-SA 4.0 25797,1,,,1/16/2021 15:33,,0,487,"

I am trying to train an LSTM using CTC loss, but the loss does not decrease when I train it. I have created a minimal example of my issue by creating training data where the network simply has to copy the current input element at each time step. Moreover, I have made the length of the label the same as the length of the input sequence, with no adjacent elements in the label sequence the same, so that both CTC loss and categorical cross-entropy loss can be used. I found that, when using categorical cross-entropy loss, the model very quickly converges, whereas, when using CTC loss, it gets nowhere.

I have uploaded my minimal example to Colab. Does anyone know why CTC loss is not working in this case?

",38209,,2444,,1/16/2021 21:33,10/9/2022 23:08,Why won't my model train with CTC loss?,,1,4,,,,CC BY-SA 4.0 25799,2,,5990,1/16/2021 19:10,,4,,"

Why is AIMA dense?

Artificial intelligence is a broad field: that's why Artificial Intelligence: A Modern Approach (AIMA) may look a bit dense to newcomers, given that it covers many different aspects of AI, such as search, machine learning, and natural language processing.


AI is not just ML!

The first book in this answer is a good book, but it focuses on computational intelligence approaches, which are often considered part of AI too. The other books that you mention in your post also focus on subfields of AI, such as machine learning or image processing, so they do not cover all aspects of AI.


Alternatives to AIMA

Here are some alternative textbooks (title, author(s), year, topics, and my comments):

  • Artificial intelligence, by Patrick Winston, 1992 (3rd edition). Topics: search, rule-based systems, machine learning, evolutionary algorithms, etc. Comments: I occasionally consulted this book; Patrick Winston was a professor at MIT and also director of the AI Lab at MIT; you can also find his free course on Artificial Intelligence (which I highly recommend) here.
  • Artificial Intelligence: A New Synthesis, by Nils J. Nilsson, 1998. Topics: neural networks, evolutionary algorithms, search, knowledge representation and reasoning, planning, Bayesian networks, etc. Comments: Nilsson also wrote other important books related to AI and the philosophy of AI, such as The Quest for Artificial Intelligence: A History of Ideas and Achievements (2009), among other important contributions to the AI field, such as the robot Shakey and STRIPS.
  • Artificial Intelligence: A Guide to Intelligent Systems, by Michael Negnevitsky, 2005 (2nd edition). Topics: expert systems, fuzzy systems, artificial neural networks, evolutionary computation, hybrid systems, etc.
  • Artificial Intelligence: Structures and Strategies for Complex Problem Solving, by George Lugar, 2009 (6th edition). Topics: history of AI, search, knowledge representation, expert systems, rule-based systems, machine learning, evolutionary algorithms, automated reasoning, natural language understanding, etc. Comments: this book is mentioned by Ben Goertzel in his paper Artificial General Intelligence: Concept, State of the Art, and Future Prospects as a "popular AI textbook".
  • Artificial Intelligence: Foundations of Computational Agents, by David L. Poole and Alan K. Mackworth, 2017 (2nd edition). Topics: machine learning, search, planning, reasoning, and knowledge-based systems, etc. Comments: I occasionally consulted this book.
",2444,,2444,,1/23/2022 17:57,1/23/2022 17:57,,,,0,,,,CC BY-SA 4.0 25802,2,,25796,1/16/2021 22:12,,2,,"

If you do not specify an activation for a layer you are effectively creating a linear transformation through that layer. From the documentation:

activation: Activation function to use. If you don't specify anything, no activation is applied (see keras.activations).
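As a small sketch of why this matters (plain NumPy, not Keras): two stacked layers with no nonlinearity in between collapse into a single affine map, so they add no representational power over one layer.

import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
x = rng.normal(size=4)

two_layers = W2 @ (W1 @ x + b1) + b2       # two layers, no activation in between
W, b = W2 @ W1, W2 @ b1 + b2               # the same map collapsed into one affine layer
one_layer = W @ x + b

print(np.allclose(two_layers, one_layer))  # True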

",30426,,,,,1/16/2021 22:12,,,,0,,,,CC BY-SA 4.0 25804,1,25806,,1/16/2021 23:52,,2,340,"

Why is neural networks being a deterministic mapping not always considered a good thing?

So I'm excluding models like VAEs since those aren't entirely deterministic. I keep thinking about this and my conclusion is that often times neural networks are used to model things in reality, which often time do have some stochasticity and since neural networks are deterministic if they are not trained on enough examples of the possible variance inputs in relation to outputs can have they cannot generalize well. Are there other reasons this is not a good thing?

",30885,,2444,,1/17/2021 0:23,1/17/2021 11:51,Why is neural networks being a deterministic mapping not always considered a good thing?,,1,0,,,,CC BY-SA 4.0 25805,2,,25775,1/17/2021 0:30,,6,,"

There are a few possible approaches to deploying a ML model to a microcontroller.

The main limiting factor to deployment on microcontrollers is that ML models are usually a representation of a set of parameters that are intended to be used as input to a prediction algorithm alongside a new data point. Most such models assume the presence of an accompanying library that implements the algorithm in question. However, a microcontroller may use an exotic chip architecture, or have very severe or unusual resource constraints, that prevent these standard libraries from being deployed easily.

Presumably you will already have some way to get input into your microcontroller and to program it in order to call some function that you can write. If not, you will need to first figure out how to do that, and the right methods depend on your microcontroller. A common approach is to write assembly code or code in a very limited subset of C or another language. An alternative is to find a distribution of an interpreter for another language (e.g. Java, Python) that has been compiled to work on your chip. Either way, you will need some way to program the chip.

Presuming you can program the chip, you have two fundamental challenges in deploying the model:

  1. Most models are trained with relatively wide floating point numbers for their parameters. For example, 32-bit or 64-bit floating point numbers are commonly used. On a standard computing environment, the CPU or GPU will be equipped to perform operations on wide datatypes efficiently. On a microcontroller, you may be limited to 8-bit or 16-bit integers. To work with your model parameters in an environment like this, you will need to either make the parameters smaller (usually by rounding them to fit in a much smaller numeric format, a process called "quantization"), or find or write software that can simulate the operations you want (probably addition and multiplication) on a large datatype that is represented as a collection of smaller datatypes. The first approach may make the model perform poorly. The second may make model prediction very slow.

  2. You need an implementation of the algorithm. Some algorithms like linear regression, linear discriminant analysis, or even decision trees are extremely easy to implement prediction for, and may require only addition, multiplication, and/or comparison. You might be able to write these yourself in a simple subset of C, or even in assembly (for example, prediction with linear regression should be just a simple loop). Other algorithms, like deep neural networks, may contain more complex operations, and may contain many such operations performed in complicated sequences. For these, you generally will need to find an distribution of a library that implements the algorithms, or compile one yourself. Compiling one yourself will require setting up a build toolchain for your specific microcontroller, and can be quite involved.
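As a sketch of how simple the prediction step can be, here is linear-regression prediction written with nothing but multiplications and additions; the same loop is straightforward to port to a restricted C subset or assembly (the parameter values below are just placeholders exported from a model trained elsewhere, e.g. with scikit-learn):

# linear regression prediction: y = bias + sum_i w_i * x_i
def predict(weights, bias, x):
    total = bias
    for w, xi in zip(weights, x):
        total += w * xi
    return total

weights, bias = [0.4, -1.2, 0.07], 0.5   # illustrative parameters from a trained model
print(predict(weights, bias, [1.0, 0.3, 12.0]))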

",16909,,,,,1/17/2021 0:30,,,,0,,,,CC BY-SA 4.0 25806,2,,25804,1/17/2021 1:15,,1,,"

Your intuition is right. The main reason why a deterministic function can be undesirable (or even dangerous, as I will explain below with an example) is that we may not have enough data to learn the correct function, so we may end up learning the incorrect one. Right now, no other reason, from a theoretical point of view, comes to my mind, but below I will mention a few applications/cases where a deterministic function may not be desirable.

If we had all data pairs $\{(x_i, y_i)\}$, where $x_i \in \mathcal{X}$ and $y_i = \mathcal{Y}$ are, respectively, an input and output from the unknown function that you want to learn $f: \mathcal{X} \rightarrow \mathcal{Y}$, i.e. $f(x_i) = y_i$, then you could reconstruct $f$: whenever $x_i$ is given, you just need to return $y_i$.

Of course, in reality, we almost never have a large enough (training) dataset to approximate out desired (but usually unknown) function. If we learn only one (deterministic) function, then, in principle, you can catastrophically fail, i.e. your approximation of $f$, denoted as $f_\theta$ (where $\theta$ are the parameters of the neural network or any other model), can produce outputs that are completely wrong.

Let me try to give you a simple example. Let's say that $f$ is defined as follows

$$f: \mathbb{N} \rightarrow \{0, 1\}$$

You are given a training labelled dataset $$D = \{(4, 1), (11, 0), (8, 1), (31, 0), (16, 1), (7, 0) \}.$$

Apparently, our unknown function is defined as

\begin{align} h_1(x)= \begin{cases} 1, &x \text{ mod } 2 \equiv 0\\ 0, &\text{otherwise} \end{cases}\tag{1}\label{1} \end{align} Given that $D$ is small, your neural network, $f_\theta$, can easily overfit $D$, i.e. learn to output $1$ when $x$ is even and $0$ otherwise.

However, what if $f$ is not that function in equation \ref{1} and we collected just a dataset that doesn't represent $f$ well enough? If you look at $D$ more carefully, you will see that another possible hypothesis for $f$ is the following

\begin{align} h_2(x)= \begin{cases} 1, &x \text{ mod } 4 \equiv 0\\ 0, &\text{otherwise} \end{cases}\tag{2}\label{2} \end{align} However, given that your neural network can only compute one of these functions at a time, it could compute the wrong one. Let's say that $f_\theta \approx h_1$, then it should produce $1$ when $x = 6$ (an even number). If the correct unknown function was $h_2$, i.e. $f = h_2$, then $f_\theta(6) = 1$ would be wrong (because $6$ is not a multiple of $4$).

Of course, this is just a toy example. However, there are many other cases where this can happen, which may not be desirable, such as healthcare, medicine or self-driving cars, where the wrong prediction can lead to catastrophic consequences, such as the death of a person.

If we maintain a probability distribution over the possible functions that are consistent with the observed data so far, we can (partially) avoid this issue. So, continuing with the example above, this probability distribution over functions should be highly uncertain about $x = 6$, whether it produces $0$ or $1$, because it has never seen the label for $x=6$, so a medical doctor or the human driver could intervene in the case of (high) uncertainty.

For this reason, in the last decade, people have started to incorporate uncertainty estimation in neural networks. Neural networks that model uncertainty (to some degree) are often called Bayesian neural networks (BNNs), and there are different approaches (such as variational BNNs, MC dropout or other Monte Carlo-based approaches). If you are interested in this topic, the paper Weight Uncertainty in Neural Network (2015) is a good start, especially if you are already familiar with VAEs. Given that this is a very new research area, the current solutions are still not very satisfactory. For example, you can find examples in the literature that report that MC dropout can produce very bad estimates of uncertainty (I also observed this in my master's thesis), i.e. they can be highly certain when they should be highly uncertain.

",2444,,2444,,1/17/2021 11:51,1/17/2021 11:51,,,,0,,,,CC BY-SA 4.0 25808,1,25809,,1/17/2021 5:05,,2,126,"

I don't understand how the formula in the red circle is derived. The screenshot is taken from this paper

",43880,,2444,,1/19/2021 21:12,1/19/2021 21:12,How to prove the formula of eligibility traces operator in reinforcement learning?,,1,1,,,,CC BY-SA 4.0 25809,2,,25808,1/17/2021 11:48,,1,,"

I will refer to $\mathcal T^{\pi}$ as $\mathcal T$ and $P^{\pi}$ as $P$ for notational simplicity. \begin{align} (\mathcal{T})^{n+1} Q &= \mathcal{T}(\mathcal{T}(\dots(\mathcal{T}(Q))))\\ &= r + \gamma P(r + \gamma P(\dots(r + \gamma P Q)))\\ &= r + \sum_{i=1}^{n} \gamma^i P^i r + \gamma^{n+1} P^{n+1} Q \end{align}

\begin{align} \mathcal{T}_{\lambda}Q &= (1-\lambda) \sum_{n=0}^{\infty} \lambda^n (\mathcal{T})^{n+1} Q\\ &=(1-\lambda)\{\lambda^0 (\mathcal T)^1Q + \lambda^1 (\mathcal T)^2Q + \lambda^2 (\mathcal T)^3Q + \ldots \} \end{align}

When you plug in the expression for $(\mathcal T)^{n+1} Q$ inside this sum and rearrange (exchanging the order of summation in the middle term, which absorbs its factor $(1-\lambda)$), you get 3 sums \begin{equation} \mathcal{T}_{\lambda}Q = (1-\lambda) \sum_{n=0}^{\infty} \lambda^n r + \sum_{n=1}^{\infty} \lambda^n \gamma^n P^n r + (1-\lambda)\sum_{n=0}^{\infty} \lambda^n \gamma^{n+1} P^{n+1} Q \end{equation}

  1. sum: \begin{equation} (1-\lambda) \sum_{n=0}^{\infty} \lambda^n r = r \end{equation}
  2. sum: \begin{equation} \sum_{n=1}^{k} \lambda^n \gamma^n P^n r = (1-\lambda\gamma P)^{-1}(1 - \lambda^k \gamma^k P^k)\lambda\gamma P r \end{equation} As $k \rightarrow \infty$, and since $\lambda\gamma < 1$ and $P$ is a stochastic matrix, this is in the limit equal to \begin{equation} \sum_{n=1}^{\infty} \lambda^n \gamma^n P^n r = (1-\lambda\gamma P)^{-1}\lambda\gamma P r \end{equation}
  3. sum: \begin{equation} (1-\lambda)\sum_{n=0}^{\infty} \lambda^n \gamma^{n+1} P^{n+1} Q = (1 - \gamma\lambda P)^{-1}(1-\lambda)\gamma P Q \end{equation} If you combine all 3 you get \begin{align} \mathcal{T}_{\lambda}Q &= r + (1-\lambda\gamma P)^{-1}\lambda\gamma P r + (1 - \gamma\lambda P)^{-1}(1-\lambda)\gamma P Q\\ &= r+ (1-\lambda\gamma P)^{-1}(\lambda \gamma P r + \gamma PQ - \lambda\gamma PQ)\\ &= r+ (1-\lambda\gamma P)^{-1}(\lambda \gamma P r + (\mathcal T)Q - r - \lambda\gamma PQ)\\ &= (1-\lambda\gamma P)^{-1}(r - \lambda \gamma P r + \lambda \gamma P r + (\mathcal T)Q - r - \lambda\gamma PQ)\\ &= (1-\lambda\gamma P)^{-1}( (\mathcal T)Q - \lambda\gamma PQ + Q - Q)\\ &= Q + (1-\lambda\gamma P)^{-1}( (\mathcal T)Q - Q) \end{align}
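
As a sanity check, the final identity can also be verified numerically. The following sketch (my own check, using numpy, a random row-stochastic matrix $P$ and random $r$, $Q$) compares a truncated version of the series definition with the closed form:

import numpy as np

rng = np.random.default_rng(0)
n, gamma, lam = 5, 0.9, 0.7

P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)      # make P row-stochastic
r, Q = rng.random(n), rng.random(n)

T = lambda q: r + gamma * P @ q        # Bellman operator: T Q = r + gamma P Q

# Series definition: (1 - lam) * sum_{k>=0} lam^k (T)^{k+1} Q, truncated.
series, q = np.zeros(n), Q.copy()
for k in range(2000):
    q = T(q)                           # q = (T)^{k+1} Q
    series += (1 - lam) * lam**k * q

closed_form = Q + np.linalg.inv(np.eye(n) - lam * gamma * P) @ (T(Q) - Q)
print(np.allclose(series, closed_form))  # True (up to truncation error)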
",20339,,,,,1/17/2021 11:48,,,,1,,,,CC BY-SA 4.0 25811,2,,25775,1/17/2021 12:53,,2,,"

If the library running the model can be compiled for your microcontroller, then you can run your model on that microcontroller.

If you train using one library and deploy using another library, you can possibly convert your model to that library's format via ONNX.
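
For example, assuming the tf2onnx package and its convert.from_keras helper are available (this sketch is my own, not something from the question), exporting a trained Keras model to ONNX can look roughly like this:

import tensorflow as tf
import tf2onnx  # assumed to be installed: pip install tf2onnx

# A trained (here: freshly built, untrained) Keras model to export.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the Keras model to an ONNX graph and write it to disk,
# so it can be loaded by a different runtime or library.
onnx_model, _ = tf2onnx.convert.from_keras(model, output_path="model.onnx")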

Some library links on Edge Computing in ML:

To speed things up, specific hardware is used: GPUs, of course, but also Tensor Processing Units.

One can manually translate a trained NN model to C code (paper), but there are also compilers:

XLA and Glow support x86-64 and ARM64. The NNCG approach generates C code and thus should be more readily portable to general MCUs (paper).

Further links: edge-ai, Stack Overflow question

",43882,,43882,,1/18/2021 16:28,1/18/2021 16:28,,,,2,,,,CC BY-SA 4.0 25812,2,,20405,1/17/2021 13:34,,3,,"

The question in this video is

Are you real?

What does this question really mean? Is the guy asking whether the apparent female (I don't know if she is a cyborg or not because I have not yet watched the TV series) is a human? So, is "real" a synonym for "human"? If that's the case, then the first implication (in the form of a question) of the statement

If you can't tell, does it matter?

in relation to AI is

  1. Can we create an AGI that is sufficiently similar to a human that we can't tell whether it's a human or not (by just normally interacting with it)?

Of course, it's not clear what we mean by "normally interacting". As far as I remember, this issue is also raised in the film A.I. Artificial Intelligence, where the AI (the kid played by Haley Joel Osment) looks sufficiently real to the other kid, so he behaves as if he were a human kid, but then the human kid understands that he's a machine, and starts to behave differently (I hope I'm remembering the film correctly).

So, the second question that we could ask is

  2. Once we understand that it's not a human (for example, because it's made of other substances), would we humans start to behave differently and start treating the AGI differently?

As opposed to the first question, which is still an open problem, this second question can probably be answered by looking at our relationships with other humans (or entities, such as other animals). Often, we have an idea of a person. Once we discover something new about that person, which maybe we dislike, we may start to treat that person differently. I think this would very likely also happen in our eventual relationship with a sufficiently advanced AGI too, as depicted in the mentioned film.

Now, let me try to address the other question

What are the implications of this statement in relation to experience, in relation to the self?

I think that you're asking whether a sufficiently advanced AGI could be considered conscious or not. Of course, this is a very hard question to answer, because we still don't have a clear definition of consciousness or we don't yet agree on a standard definition, so I don't really have a definitive answer to this question. However, if consciousness is just a byproduct of perception and the ability to understand the world and its (physical) rules, then an AGI could be conscious (in a similar way that humans are also conscious). However, consciousness may not actually be necessary to correctly act in the world. In any case, the AI probably needs to know that it has a body and that it needs to protect it for its survival, if that's its main goal.

",2444,,2444,,1/17/2021 13:44,1/17/2021 13:44,,,,1,,,,CC BY-SA 4.0 25814,1,,,1/17/2021 15:15,,3,158,"

Is there any current research on Gödel machines? It seems that the last article by Jürgen Schmidhuber on this topic was published in 2012: http://people.idsia.ch/~juergen/goedelmachine.html

",41482,,2444,,1/17/2021 15:33,10/9/2022 21:08,Current research on Gödel machines,,1,0,,,,CC BY-SA 4.0 25817,2,,25797,1/17/2021 15:40,,0,,"

The problem was that the dimensions of the logit_length argument to tf.nn.ctc_loss were incorrect.

It was this:

tf.repeat(tf.shape(y_pred)[-1], tf.shape(y_pred)[0])

But it should have been

tf.repeat(tf.shape(y_pred)[-2], tf.shape(y_pred)[0])
",38209,,,,,1/17/2021 15:40,,,,0,,,,CC BY-SA 4.0 25818,2,,25814,1/17/2021 15:48,,0,,"

This paper Can Machines Design? An Artificial General Intelligence Approach (2018, presented at AGI-18 and published in the related proceedings here), which proposes the design Gödel machine, may be useful to you.

After a quick search, I have not found other relevant papers, so I suppose that the research on GMs is currently not very active.

",2444,,2444,,1/17/2021 16:11,1/17/2021 16:11,,,,0,,,,CC BY-SA 4.0 25819,1,26371,,1/17/2021 17:31,,1,91,"

The validation accuracy of my 1D CNN is stuck on 0.5 and that's because I'm always getting the same prediction out of a balanced data set. At the same time my training accuracy keeps increasing and the loss decreasing as intended.

Strangely, if I do model.evaluate() on my training set (which has an accuracy close to 1 in the last epoch), the accuracy will also be 0.5. How can the accuracy here differ so much from the training accuracy of the last epoch? I've also tried a batch size of 1 for both training and evaluating, and the problem persists.

Well, I've been searching for different solutions for quite some time but still no luck. Possible problems I've already looked into:

  1. My data set is properly balanced and shuffled;
  2. My labels are correct;
  3. Tried adding fully connected layers;
  4. Tried adding/removing dropout from the fully connected layers;
  5. Tried the same architecture, but with the last layer with 1 neuron and sigmoid activation;
  6. Tried changing the learning rates (went down to 0.0001 but still the same problem).

Here's my code:

import pathlib
import numpy as np
import ipynb.fs.defs.preprocessDataset as preprocessDataset
import pickle
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras import Input
from tensorflow.keras.layers import Conv1D, BatchNormalization, Activation, MaxPooling1D, Flatten, Dropout, Dense
from tensorflow.keras.optimizers import SGD

main_folder = pathlib.Path.cwd().parent
datasetsFolder=f'{main_folder}\\datasets'
trainDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Train',datasetsFolder)
testDataset = preprocessDataset.loadDataset('DatasetTime_Sg12p5_Ov75_Test',datasetsFolder)

X_train,Y_train,Names_train=trainDataset[0],trainDataset[1],trainDataset[2]
X_test,Y_test,Names_test=testDataset[0],testDataset[1],testDataset[2]

model = Sequential()

model.add(Input(shape=X_train.shape[1:]))

model.add(Conv1D(16, 61, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(32, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(2, strides=2, padding="valid"))

model.add(Conv1D(64, 3, strides=1, padding="same"))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Flatten())
model.add(Dropout(0.5))

model.add(Dense(200))
model.add(Activation('relu'))

model.add(Dense(2))
model.add(Activation('softmax'))

opt = SGD(learning_rate=0.01)

model.compile(loss='binary_crossentropy',optimizer=opt,metrics=['accuracy'])

model.summary()

model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))

model.evaluate(X_train,Y_train)

Here's model.fit():

model.fit(X_train,Y_train,epochs=10,shuffle=False,validation_data=(X_test, Y_test))

Epoch 1/10
914/914 [==============================] - 277s 300ms/step - loss: 0.6405 - accuracy: 0.6543 - val_loss: 7.9835 - val_accuracy: 0.5000
Epoch 2/10
914/914 [==============================] - 270s 295ms/step - loss: 0.3997 - accuracy: 0.8204 - val_loss: 19.8981 - val_accuracy: 0.5000
Epoch 3/10
914/914 [==============================] - 273s 298ms/step - loss: 0.2976 - accuracy: 0.8730 - val_loss: 1.9558 - val_accuracy: 0.5002
Epoch 4/10
914/914 [==============================] - 278s 304ms/step - loss: 0.2897 - accuracy: 0.8776 - val_loss: 20.2678 - val_accuracy: 0.5000
Epoch 5/10
914/914 [==============================] - 277s 303ms/step - loss: 0.2459 - accuracy: 0.8991 - val_loss: 5.4945 - val_accuracy: 0.5000
Epoch 6/10
914/914 [==============================] - 268s 294ms/step - loss: 0.2008 - accuracy: 0.9181 - val_loss: 32.4579 - val_accuracy: 0.5000
Epoch 7/10
914/914 [==============================] - 271s 297ms/step - loss: 0.1695 - accuracy: 0.9317 - val_loss: 14.9538 - val_accuracy: 0.5000
Epoch 8/10
914/914 [==============================] - 276s 302ms/step - loss: 0.1423 - accuracy: 0.9452 - val_loss: 1.4420 - val_accuracy: 0.4988
Epoch 9/10
914/914 [==============================] - 266s 291ms/step - loss: 0.1261 - accuracy: 0.9497 - val_loss: 4.3830 - val_accuracy: 0.5005
Epoch 10/10
914/914 [==============================] - 272s 297ms/step - loss: 0.1142 - accuracy: 0.9548 - val_loss: 1.6054 - val_accuracy: 0.5009

Here's model.evaluate():

model.evaluate(X_train,Y_train)

914/914 [==============================] - 35s 37ms/step - loss: 1.7588 - accuracy: 0.5009

Here's model.summary():

Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv1d (Conv1D)              (None, 4096, 16)          992       
_________________________________________________________________
batch_normalization (BatchNo (None, 4096, 16)          64        
_________________________________________________________________
activation (Activation)      (None, 4096, 16)          0         
_________________________________________________________________
max_pooling1d (MaxPooling1D) (None, 2048, 16)          0         
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 2048, 32)          1568      
_________________________________________________________________
batch_normalization_1 (Batch (None, 2048, 32)          128       
_________________________________________________________________
activation_1 (Activation)    (None, 2048, 32)          0         
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 1024, 32)          0         
_________________________________________________________________
conv1d_2 (Conv1D)            (None, 1024, 64)          6208      
_________________________________________________________________
batch_normalization_2 (Batch (None, 1024, 64)          256       
_________________________________________________________________
activation_2 (Activation)    (None, 1024, 64)          0         
_________________________________________________________________
max_pooling1d_2 (MaxPooling1 (None, 512, 64)           0         
_________________________________________________________________
conv1d_3 (Conv1D)            (None, 512, 64)           12352     
_________________________________________________________________
batch_normalization_3 (Batch (None, 512, 64)           256       
_________________________________________________________________
activation_3 (Activation)    (None, 512, 64)           0         
_________________________________________________________________
max_pooling1d_3 (MaxPooling1 (None, 256, 64)           0         
_________________________________________________________________
conv1d_4 (Conv1D)            (None, 256, 64)           12352     
_________________________________________________________________
batch_normalization_4 (Batch (None, 256, 64)           256       
_________________________________________________________________
activation_4 (Activation)    (None, 256, 64)           0         
_________________________________________________________________
flatten (Flatten)            (None, 16384)             0         
_________________________________________________________________
dropout (Dropout)            (None, 16384)             0         
_________________________________________________________________
dense (Dense)                (None, 200)               3277000   
_________________________________________________________________
activation_5 (Activation)    (None, 200)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 402       
_________________________________________________________________
activation_6 (Activation)    (None, 2)                 0         
=================================================================
Total params: 3,311,834
Trainable params: 3,311,354
Non-trainable params: 480
_________________________________________________________________
",43887,,43887,,1/17/2021 18:46,2/13/2021 17:11,Keras 1D CNN always predicts the same result even if accuracy is high on training set,<1d-convolution>,1,0,,,,CC BY-SA 4.0 25820,1,25825,,1/17/2021 18:05,,0,326,"

I just finished Andrew Ng's deep learning specialization, but RL was not covered, so I don't know the basics of RL. So, I have been having trouble understanding the cost function in deep Q-learning. Like other cost functions in machine learning, you usually have $\hat{y}$ (the network prediction) and $y$ (the target, or what the network is being optimized for).

I've read through a few online articles on deep Q-learning. So far, there has been no mention of setting up a target state ($y$) for the agent to produce. There has been mention of calculating a temporal-difference, however, which is where I am confused.

When calculating the cost function, are you taking the input state ($\hat{y}$) and a target state ($y$) into consideration to determine the temporal-difference?

Otherwise, I'm not sure how the cost function could determine a reward based on the input alone (the state of the environment the agent is in).

",20271,,2444,,1/18/2021 12:18,1/18/2021 15:47,"When calculating the cost in deep Q-learning, do we use both the input and target states?",,1,0,,,,CC BY-SA 4.0 25822,1,26021,,1/18/2021 8:51,,1,311,"

I am working on a pix2pix GAN model that was inspired by the code in this Github repository. The original code is working and I have already customized most of the code for my needs. However, there is one part I am unable to understand.

The pix2pix GAN is a conditional GAN network that takes an image as a condition and outputs a modified image - such as blurry to clear, facades to buildings, or filling in a cut-out part of an image. The combined model thus takes a conditional image as input; the discriminator output is compared with a dummy matrix named valid or fake, containing 0s or 1s according to validity (0 for generated samples, 1 for real samples). The generator loss is based on similarity with the real sample plus the discriminator output. The following code corresponds to what I described:

self.combined = Model(inputs=[img_A, img_B], outputs=[valid, fake_A])
self.combined.compile(loss=['mse', 'mae'],
                      loss_weights=[1, 100],
                      optimizer=optimizer)

The losses are thus set as MSE for the discriminator output and MAE for the generator. That seems to be OK, but I cannot understand why the implementation uses 1 and 100 for the weights of the discriminator and generator losses, respectively, which seems to imply that the discriminator loss counts 100 times less than the generator loss. I couldn't find the reason in the original article. Are my understandings of the GAN incorrect?

Disclaimer: I have posted this question on Stats SE, but have no luck with answers. Maybe it is more suitable for AI.

",40205,,2444,,1/19/2021 11:38,1/27/2021 19:00,"In this implementation of pix2pix, why are the weights for the discriminator and generator losses set to 1 and 100 respectively?",,1,1,,,,CC BY-SA 4.0 25825,2,,25820,1/18/2021 12:08,,3,,"

I will first explain briefly to you the difference between supervised learning and reinforcement learning to make sure that you don't have any misunderstandings. In supervised learning you are provided with some data $\{(\textbf{x}_i, y_i)\}_{i=1}^n$ where $\textbf{x}_i$ are the features for data point $i$ and $y_i$ is its true label. Now, the aim of supervised learning is to learn a function $f$ that can accurately predict the label of a data point given its features. In deep learning this function is a neural network, $f_\theta(\cdot)$. To optimise the parameters we obtain the model's prediction for the label $y$, denoted typically by $\hat{y} = f_\theta(x)$, and we look to optimise the parameters $\theta$ of the function by minimising the loss $\mathcal{L}(\hat{y}, y)$ (note that here the loss is a function $\mathcal{L}: \mathcal{Y} \times \mathcal{Y} \rightarrow \mathbb{R}$).

In reinforcement learning things are quite different. We are not provided with any data. Instead we have a Markov Decision Process (MDP) and an environment that we can interact with. The state space is defined by the MDP and we can get samples, mainly tuples of the form $(s, a, r, s')$, that we can use to teach our agent how to find an optimal policy - typically an optimal policy is one that takes an action given the current state that will maximise the sum of the future rewards of the episode. So in reinforcement learning you can probably see that things are quite different from supervised learning, mainly in the way that we 'obtain' our data.

Of course these explanations are gross oversimplifications of the learning process in both paradigms and should only be used as a brief example to try to emphasise the difference between the two learning paradigms.

Now, there are two ways you can parameterise your Q-function using a neural network:

  1. The network takes as input the current state and outputs Q-values for each potential action; i.e. the output is a vector in $\mathbb{R}^{|\mathcal{A}|}$;
  2. The network takes as input the state and action and outputs a real number which is the Q-value for the state, action tuple you pass as input to the network.

The second way is usually reserved for instances where you have a huge action space but only ever would consider a few feasible actions at each state - this saves computational complexity.

Now, to answer your question:

In case 1) you are assumed to have access to a transition tuple $(s, a, r, s')$. The temporal-difference target, which we will use as the target, is $\hat{y} = r + \gamma\max_{a'} Q(s', a')$, where $\gamma$ is the discount factor. Now, as the output of the network is a vector in $\mathbb{R}^{|\mathcal{A}|}$, what we do is make a forward pass of the network for the current state $s$, i.e. we get $x = Q(s, \cdot)$, and then change the element of $x$ that corresponds to the action $a$ from the transition tuple to $\hat{y}$, so we now have our augmented vector $\tilde{x}$, which serves as our target.

To make that step a bit clearer, suppose we have a 2-dimensional action space and the action $a$ that was taken is the first one; then we would change the first element of $x$ to be $\hat{y}$.

We then train the network using the Mean Squared Error Loss between $x$ and $\tilde{x}$ - note that no gradient information is retained when we do the forward pass to get $\tilde{x}$; in fact we usually don't use a current version of the $Q$ network, we use an 'old' version of the network called the target network (I imagine there's probably a question about this network on the site already so I won't explain it in detail).
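
A minimal sketch of the target construction for case 1) could look as follows (this is my own illustration, assuming Keras-style q_net and target_net objects with a predict method; the names are hypothetical):

import numpy as np

def dqn_target(q_net, target_net, s, a, r, s_next, gamma=0.99):
    # s, s_next are numpy arrays (states); a is an integer action index.
    # Copy the current Q-values for state s and overwrite only the entry of
    # the action a that was actually taken with the TD target.
    x = q_net.predict(s[None])[0]                              # Q(s, .)
    td_target = r + gamma * np.max(target_net.predict(s_next[None])[0])
    x_tilde = x.copy()
    x_tilde[a] = td_target
    return x_tilde

# Training step: MSE between the network output Q(s, .) and the target vector, e.g.
# q_net.train_on_batch(s[None], dqn_target(q_net, target_net, s, a, r, s_next)[None])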

In case 2) the idea is much simpler, as the outputs of the network are scalars, so you can just train your network using the MSE between the scalar value of $Q(s, a)$ and $\hat{y}$ as defined above. The caveat here is that, to calculate $\max_{a'} Q(s', a')$, you have to make $|\mathcal{A}|$ forward passes of the network, one for each possible $(s', a')$ tuple (for fixed $s'$), which is why method 1) is usually preferred.

Now, whilst we may use techniques from supervised learning to optimise the Q-function, you can hopefully see the differences between supervised and reinforcement learning.

In short, the temporal difference is the target that you train your network towards being able to predict, and the state is (typically) the input into your neural network. For a more in depth discussion of RL being framed as supervised learning please see this answer.

",36821,,36821,,1/18/2021 15:47,1/18/2021 15:47,,,,0,,,,CC BY-SA 4.0 25826,2,,25775,1/18/2021 12:40,,4,,"

I found:

These seem to partly fit my needs. But I am surprised that I cannot find something more general that either converts Python to C or to an object file with ML support (to be used in C projects). Indeed, once trained, ML algorithms are "just" a bunch of additions/multiplications/comparisons.

It would help to be able to convert a scikit-learn pipeline, for instance, as ML projects are rarely composed of only a single ML model taking raw data. But it seems that EdgeAI/TinyML is mainly focused on compiling deep learning models when it comes to deploying ML models on "bare-metal" microcontrollers.

Thanks for your answers btw, it helps.

",43239,,43239,,1/27/2021 20:39,1/27/2021 20:39,,,,5,,,,CC BY-SA 4.0 25827,1,,,1/18/2021 13:32,,2,91,"

I have been training some kind of agent to reach a target using a Q-learning based approach, and I have tried two different types of rewards:

  1. Long-term reward: $\mathrm{reward} = - \mathrm{distance}(\mathrm{agent,target})(t+1)$

  2. Short-term reward: $\mathrm{reward} = \mathrm{distance}(\mathrm{agent,target})(t) - \mathrm{distance}(\mathrm{agent,target})(t+1)$

In the first case, I am rewarding the current progress. In the second case, I am rewarding direct progression, but this may lead to less progression in the future. My question is, what kind of reward does Q-learning need?

I understand that the $\gamma$ factor should incorporate long term rewards, so it makes more sense to reward direct progression. However, using long-term rewards gave better results for my scenario...

",5344,,2444,,1/18/2021 14:07,1/18/2021 14:07,Is better to reward short- or long-term progress in Q-learning?,,0,1,,,,CC BY-SA 4.0 25828,1,,,1/18/2021 15:52,,1,15,"

I have a question about the paper Show, Attend and Tell: Neural Image Caption Generation with Visual Attention by Xu et al. The basic mechanism of stochastic hard attention is that each pixel of the input image has a corresponding parameter $\alpha_i$, which describes the probability that this pixel will be chosen for further processing.

But I don't see an explanation of how to train or define this parameter in the paper. Can someone explain how to train this $\alpha_i$ for each pixel?

",43912,,2444,,1/19/2021 12:06,1/19/2021 12:06,How are the parameters $\alpha_i$ of hard attention trained?,,0,0,,,,CC BY-SA 4.0 25829,1,,,1/18/2021 15:58,,1,110,"

I want to train some AI algorithm to be able to evaluate the maturity of a fruit (say, measured in the number of days before it is rotten) based on an image of the fruit. My first instinct is to go with a convolutional neural network (CNN), since those have proven very efficient for recognizing images. However, I am not sure what the output layer should look like in this case.

I could separate the data into a bunch of classes (1 day left, 2 days left, 3 days left, etc.) and use one output node for each of these classes, as in a usual classification task, but in doing so I completely lose the continuous nature of the output, which makes me think it might not be the optimal way to proceed.

Another option would be to just have a unique output node, whose activation would correspond to the continuous value to predict, here the number of days left (normalized appropriately to lie between 0 and 1). This would have the advantage of taking the continuity into account, but I have been told that neural networks aren't made to predict values in that way, they really are best suited for classification into discrete classes.

What do you think would be the best way to proceed? Is there another way to nudge a neural network so that its output is continuous? Or maybe CNN just aren't suited for this task? If you have any suggestions of other algorithms that would be efficient for this kind of task, I would be happy to know them.

",37262,,,,,1/21/2021 11:28,Predicting continous value with CNN (prediction of fruit maturity),,1,2,,,,CC BY-SA 4.0 25832,1,,,1/18/2021 18:46,,1,130,"

Looking at some old notes I took on CNNs, I wrote down that the weights in a CNN act like filters, but, to be honest, I don't really know what the weights are doing in a CNN, and I was wondering if someone could explain that clearly to me.

",20336,,2444,,1/19/2021 15:30,2/18/2021 17:33,What are acting as weights in a convolution neural network?,,1,2,,,,CC BY-SA 4.0 25835,1,,,1/18/2021 21:22,,4,294,"

AI has reached a super-human level in many complex games, including imperfect-information games such as six-player no-limit Texas hold'em poker. However, it has still not reached that level in trick-taking card games such as Spades, Bridge, Skat and Whist. In a related question, I am asking Why Trick-Taking games are a challenge for AI.

An important factor that makes those games a challenge for AI is their size. To be precise, let's talk about the state-space complexity, which is defined as the number of legal game positions reachable from the initial position of the game [Victor Allis (section 6.2.4)].

What is the size of Spades?

",43351,,43351,,2/14/2021 7:25,2/14/2021 7:25,What is the state-space complexity of Spades?,,1,0,,,,CC BY-SA 4.0 25837,1,25842,,1/18/2021 22:58,,2,99,"

I have seen many papers using autoencoders to replace images (states) with latent representations. Some of those methods have shown higher rewards using such techniques. However, I do not understand how this helps the RL agent learn better. Perhaps viewing latent representations allows the agent to generalize to novel states more quickly?

Here are 2 papers I have read -

",31755,,2444,,1/20/2021 11:30,1/20/2021 11:30,How does replacing states with latent representations help RL agents?,,1,0,,,,CC BY-SA 4.0 25838,2,,18181,1/18/2021 23:32,,1,,"

I do not feel comfortable proclaiming that I would be a machine learning expert. But I want to point out that there is indeed interest in applying tensor networks in machine learning settings. Let me highlight three particular settings in the following.

They can be used to find the governing equations behind dynamical systems analogous to the SINDy algorithm. The reasoning is that the dynamics of your system may not be sparse but may indeed have a low-rank structure. (See this paper for a comparison of sparse and low-rank structures and this paper for a discussion about why low-rank matrices so often appear in big data.) The resulting modification of SINDy is the appropriately named MANDy algorithm. But if your dynamical law exhibits additional structure (despite being low-rank) then this structure is not used. (Requiring you to collect unnecessary samples.) This work tackles this issue by allowing you to specify the structure that you expect your law to exhibit.

Another area of interest is the application of tensor networks as a means of compression. Tensor networks can be used to approximate multivariate functions. Either directly (see this article or this article for further details) or by representing their high-dimensional coefficient tensor (see e.g. here). The last paper also highlights where such functions may occur: when solving stochastic or parametric PDEs. "What does this have to do with machine learning?" you may rightfully ask. These PDEs are hard to solve and it is often easier to minimize a residual than to apply a Galerkin method. The results can be shown to be equivalent with high probability. (See this paper for a proof of this statement and an application to some stochastic PDEs or this paper for an application to an optimal control problem.)

Moreover, there is the problem of tensor completion - a generalization of matrix completion to higher dimensions. (This and this are references for algorithms.) These methods find application in data science where they can be used to decompose or denoise data. (Unfortunately, I don't have a reference with examples but I remember a presentation where these methods were used on EEG signals.)

Additionally, I recently found this interesting symposium covering even more potential applications of tensor networks in machine learning: https://itsatcuny.org/calendar/quantum-inspired-machine-learning

",42033,,42033,,1/18/2021 23:58,1/18/2021 23:58,,,,0,,,,CC BY-SA 4.0 25841,2,,25769,1/19/2021 1:38,,1,,"

One way you can definitely approach the problem is by using (Deep) Reinforcement Learning (DRL).

YouTube is actually using DRL as well to suggest videos to users in order to maximize users' engagement with their website. For more information (and further references to papers explaining how other major companies implement their recommendation systems), see this paper. Just as a bit of motivation. Other research into that direction could be found here.

Actually, there are different DRL algorithms available and each might be suitable for approaching the problem you described above from a slightly different angle. The way I would roughly approach your problem is as follows:

As I understand it, the goal is the following. You want to show one of four messages to a user (only one at a time) and always display the message that is expected to minimize some metric (= conversion time) for that user.

In that case, a DRL algorithm, or agent, could directly be trained on minimizing the conversion time without having to explicitly produce conversion time estimates per message & user. That might make phrasing the problem at hand much easier (in terms of modelling the learning task).

The outcome of training a DRL agent consists of a so-called optimal policy network, which tells the agent which message to display to a given user, upon observing the provided user data, such that conversion time gets minimized for that user. The aforementioned network is a simple artificial Neural Network (NN) or Recurrent NN (RNN).

Training the DRL agent will consist of two steps being repeated many times in alternating order.

One step will include the sampling/generation of training data. For this step, the algorithm (initially being untrained) will be applied to the problem at hand and predict messages for given users. Then, (in addition to the provided user data that was fed as input to the agent) each user's response (i.e. the conversion time) is recorded. This can later be used as the reward that the agent receives for having selected the selected message given the provided user data. Here, a low conversion time translates to a high reward.

In the second step, the agent's policy network will be trained in order to make the agent's predictions more accurate in the next round of generating training data. Here, the data recorded during the previous step will be used. How the policy network is optimized largely depends on which DRL algorithm you choose. The range of different methods varies vastly. Some algorithms try to estimate utilities (i.e. measures of goodness) per action given a certain state (here, state = provided user data used to generate a prediction). In this case, an action would be choosing message A or choosing message B etc. A popular algorithm implementing this procedure is Deep Q-Learning. However, if you rather want to predict probabilities per action, then Proximal Policy Optimization might rather be what you are looking for.

Upon convergence, i.e. stabilization of the goal metric (= conversion time per user), the algorithm has arrived at its optimal policy (ideally speaking).

The only drawback with this approach is that DRL usually requires quite a lot of training. But maybe you are lucky and pre-trained recommendation system models exist.

If there is still anything unclear, feel free to ask in the comments.

",37982,,37982,,1/19/2021 2:29,1/19/2021 2:29,,,,0,,,,CC BY-SA 4.0 25842,2,,25837,1/19/2021 11:24,,3,,"

In short, it is much easier for the agent to learn from a lower-dimensional state space. This is because the agent must also do representation learning; i.e. it must also infer what the state is telling it as part of the learning process. If you think of the architecture used in DQN to solve Atari, they had a CNN that output a vector, which was then passed through some dense layers. Here the representation learning was done by the CNN and was trained using an end-to-end approach, i.e. all updates to the network weights were done through the reinforcement learning objective; that is, no supervised or unsupervised learning takes place.

This can be particularly difficult when you combine images with sparse rewards as there is not a lot of feedback so the representation learning can take a long time. This paper gives a good description of the problem of decoupling representation learning from reinforcement learning with a nice solution.

The other main 'problem setting' I have seen images replaced with a latent state is when the authors are looking at planning. The problem with doing any kind of planning is that a model of the transition dynamics, $p(s' | s, a)$, is needed. For high-dimensional state spaces such as images, this can be very difficult to predict, and even relatively small errors will quickly compound, so if you use the model to predict multiple time steps into the future, the planner is useless because of these compounding errors. I think there is a discussion on this in this paper (certainly there will be references therein that point you in the right direction).

",36821,,,,,1/19/2021 11:24,,,,0,,,,CC BY-SA 4.0 25844,1,,,1/19/2021 14:25,,2,118,"

Considering weights initialization in my personal projects, I always used some standard techniques such as:

  1. Glorot (also known as Xavier) initialization (2010).
  2. Mertens initialization (2010).
  3. He initialization (2015).

As it is a very active research field, are there some innovations in recent years that have increased the performance of DNNs?

I am thinking specifically of architectures such as DNNs and CNNs with activation functions, such as ReLU, ELU, PReLU, Leaky ReLU, SELU, Swish, and Mish.

",36907,,2444,,1/20/2021 21:03,1/20/2021 21:03,Are there any new weight initialization techniques for DNN published after 2015?,,0,1,,,,CC BY-SA 4.0 25847,2,,25832,1/19/2021 15:25,,1,,"

There are many resources that answer your question, but, given that you're apparently new to machine learning (ML), deep learning (DL), and neural networks (NN), let me provide a simple answer that should clarify your doubts.

The term weight in the context of ML, DL, and NN is a synonym for parameter (sometimes, in some contexts, such as linear regression, it is also known as coefficient), which can be constant (i.e. do not change during, for example, learning/training) or learnable (i.e. change during the training/learning process). In a feedforward neural network (FFNN), with and without recurrent connections, the weights are the numbers on the connections between neurons in different or same layers. They are called weights to emphasize that their role is to weigh the effect of one neuron on the other.

In a CNN, the weights are the kernels/filters of the CNN, i.e. the matrices that you use to perform the convolution (or cross-correlation) operation in a convolutional layer. So, given that CNNs perform an operation that seems to be different than the linear combination followed by the non-linear activation function in FFNNs, you could think that the weights in a CNN are not very similar to the weights in FFNNs. However, this is not fully true, as CNNs can be viewed as FFNNs (with some specific structure): maybe this answer will provide you more info about this topic. So, your notes are right!
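
To see the kernel playing exactly the role of a weight matrix, here is a small sketch (my own, using numpy) of the cross-correlation operation: each output value is just a weighted sum of an input patch, with the kernel entries as the weights.

import numpy as np

def cross_correlate2d(image, kernel):
    # Slide the kernel over the image; each output entry is a weighted sum
    # (dot product) of an input patch with the kernel, i.e. the kernel
    # entries are the weights of this layer.
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[1.0, 0.0], [0.0, -1.0]])    # these entries are the learnable weights
print(cross_correlate2d(image, kernel).shape)   # (4, 4)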

In my opinion, the best way to understand the role of the kernels (i.e. weights) of a CNN and all the details behind the convolution (or cross-correlation) operation is to first study some traditional computer vision and image processing techniques, so maybe you should pick a book that covers them.

In this answer, I briefly try to describe what a CNN is and what it can be used for. There are other answers on this site, such as this, this, and this, which you may want to read, if not now later, to understand even more all the details of CNNs. A decent reference book that covers CNNs (and other DL topics) is Deep Learning by Ian Goodfellow et al.

",2444,,2444,,2/18/2021 17:33,2/18/2021 17:33,,,,0,,,,CC BY-SA 4.0 25848,2,,25726,1/19/2021 16:12,,2,,"

Smoothness here is the mathematical definition, so as you implied smoothness is ruled out by output data with sharp spikes or discontinuous jumps (and possibly the data of the gradient, the gradient's gradient, ad infinitum, depending on who defines smoothness).

By any definition a lot of activation functions are not smooth, for example RELU. This means neural networks in general are not smooth.

",43947,,,,,1/19/2021 16:12,,,,0,,,,CC BY-SA 4.0 25849,1,25885,,1/19/2021 16:16,,0,190,"

I've solved many problems with neural networks, but rarely work with images. I have about 18 hours into creating a bounding box regression network and it continues to utterly fail. With some loss functions it will claim 80% accuracy during training and validation (with a truly massive loss on both) but testing the predictions reveals a bounding box that only moves one or two pixels in any given direction and seems to totally ignore the data. I've now implemented a form of IoU loss, but find that IoU is pinned at zero... which is obviously true based on the outputs after training. :). I'd like someone to look this over and give me some advice on how to proceed next.

What I Have

I am generating 40000 examples of 200x100x3 images with a single letter randomly placed in each. Simultaneously I am generating the ground truth bounding boxes for each training sample. I have thoroughly validated that this all works and the data is correct.

What I Do To It

I am then transforming the 200x100x3 images down to greyscale to produce a 200x100x1 image. The images are then normalized and the bounding boxes are scaled to fall between 0 and 1. In simplified form, this happens:

x_train_normalized = (x_data - 127.5) / 127.5
y_train_scaled = boxes[:TRAIN]/[WIDTH,HEIGHT,WIDTH,HEIGHT]

I've been through this data carefully, even reconstituting images and bounding boxes from it. This is definitely working.

Training

To train, after trying mse and many others, all of which fail equally badly, I have implemented a simple custom IOU loss function. It actually returns -ln(IoU). I made this change based on a paper since the loss was (oddly?) pinned at zero over multiple epochs.

(Loss function:)

import tensorflow.keras.backend as kb
def iou_loss(y_actual,y_pred):
    b1 = y_actual
    b2 = y_pred
#    tf.print(b1)
#    tf.print(b2)
    zero = tf.convert_to_tensor(0.0, b1.dtype)
    b1_ymin, b1_xmin, b1_ymax, b1_xmax = tf.unstack(b1, 4, axis=-1)
    b2_ymin, b2_xmin, b2_ymax, b2_xmax = tf.unstack(b2, 4, axis=-1)
    b1_width = tf.maximum(zero, b1_xmax - b1_xmin)
    b1_height = tf.maximum(zero, b1_ymax - b1_ymin)
    b2_width = tf.maximum(zero, b2_xmax - b2_xmin)
    b2_height = tf.maximum(zero, b2_ymax - b2_ymin)
    b1_area = b1_width * b1_height
    b2_area = b2_width * b2_height

    intersect_ymin = tf.maximum(b1_ymin, b2_ymin)
    intersect_xmin = tf.maximum(b1_xmin, b2_xmin)
    intersect_ymax = tf.minimum(b1_ymax, b2_ymax)
    intersect_xmax = tf.minimum(b1_xmax, b2_xmax)
    intersect_width = tf.maximum(zero, intersect_xmax - intersect_xmin)
    intersect_height = tf.maximum(zero, intersect_ymax - intersect_ymin)
    intersect_area = intersect_width * intersect_height

    union_area = b1_area + b2_area - intersect_area
    iou = -1 * tf.math.log(tf.math.divide_no_nan(intersect_area, union_area))
    return iou

The Network

This has been through many, many iterations. As I said, I've solved many other problems with NNs... This is the first one to get me completely stuck. At this point, the network is dramatically stripped down but continues to fail to train at all:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, optimizers

tf.keras.backend.set_floatx('float32') # Use Float32s for everything

input_shape = x_train_normalized.shape[-3:]
model = keras.Sequential()
model.add(layers.Conv2D(4, 16, activation = tf.keras.layers.LeakyReLU(alpha=0.2), input_shape=input_shape))
model.add(layers.MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(layers.Dropout(0.2))
model.add(layers.Flatten())
model.add(layers.Dense(200, activation = tf.keras.layers.LeakyReLU(alpha=0.2)))
model.add(layers.Dense(64, activation=tf.keras.layers.LeakyReLU(alpha=0.2)))
model.add(layers.Dense(4, activation="sigmoid"))

model.compile(loss = iou_loss, optimizer = "adadelta", metrics=['accuracy'])
history = model.fit(x_train_normalized, y_train_scaled, epochs=8, batch_size=100, validation_split=0.4)

All pointers are welcome! In the meantime I'm implementing a center point loss function to see if that helps at all.

",30426,,2444,,1/21/2021 1:57,1/21/2021 1:57,Bounding Box Regression - An Adventure in Failure,,1,0,,8/23/2022 11:12,,CC BY-SA 4.0 25850,2,,18785,1/19/2021 16:17,,0,,"

The other answer gives a good overview of the differences between MLPs and CNNs, and it includes 2 diagrams that attempt to illustrate the main differences between MLPs and CNNs, i.e. sparse connectivity and weight sharing. However, these diagrams do not clarify what a neuron in a CNN could be. A better diagram, which illustrates what a neuron is in a CNN, from a CNN and MLP perspective, is the following (taken from the famous article on CNNs).

Here, there are 2 main blocks (aka volumes): the orange block on the left (the input) and the blue/cyan volume on the right (the feature maps, i.e. the outputs of the convolutional layer, i.e. after the application of the convolutions with different kernels).

The circles in the visible stack of the cyan block represent the neurons (or, more precisely, their activations or outputs). We only see $k=5$ neurons stacked: this corresponds to the application of $k=5$ different kernels (i.e. weights) to that specific subset of the input (aka receptive field), hence the sparse connectivity of CNNs. So, these neurons, in the same stack, are looking at the same small subset of the input, but with different weights (i.e. kernels). The neurons, which are not shown in this diagram, that are on the same (vertical) 2d plane (known as feature map) of the same neuron (e.g. the first that we see from left to right) in the cyan volume are the neurons that share the same weights, i.e. we use the same kernel to produce their outputs.

So, in this biological/neuroscientific view of the CNN, when you apply the convolution (or cross-correlation) operation with 1 specific filter (or kernel), you are computing the activation (not to be confused with the activation function, which is used to compute the activation!) i.e. the output of multiple neurons, all of them share the same weights. You stack all these activations on the same 2d plane (known as feature map) of the output volume: note that this operation is just the convolution operation! When you compute the convolution with another kernel, you are again computing the activation of other multiple neurons, which share another different weight matrix, and so on and so forth.
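
As a tiny sketch of this view (my own, using numpy, not from the cited article): each kernel produces one feature map, all of whose entries are computed with the same shared weights, and stacking the $k$ feature maps gives the 3d output volume of the convolutional layer.

import numpy as np

def feature_map(image, kernel):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # one "neuron": a weighted sum over its receptive field,
            # with the shared kernel entries as the weights
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)
kernels = [np.random.randn(3, 3) for _ in range(5)]           # k = 5 kernels
volume = np.stack([feature_map(image, k) for k in kernels])   # shape (5, 6, 6)
print(volume.shape)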

Some authors prefer to use the term convolutional networks, i.e. without the term neural, probably because of this issue, i.e. it's not clear, especially to newcomers, what a neuron would be in a CNN, so the neuroscientific/biological view of CNNs is not always clear, although it's important to emphasize that CNNs were inspired by the visual cortex, so this biological interpretation could (and should) be more widely known or less confusing/misunderstood.

Now, let's address your question more directly.

Aren't the filters the same, in the way that they convert an "image" to a new "image" based on the weights that are in that filter? And that the next layer uses this new "image"?

The filters in a CNN correspond to the weights of an MLP.

A neuron in a CNN can be viewed as performing exactly the same operation as a neuron in an MLP. The big differences between a CNN and an MLP (as explained also in the other answer) are

  • Weight sharing: Some neurons (not all!) in the same convolutional layer share the same weights. The convolution (or cross-correlation) is the operation that implements this partial forward pass with the same weights for different neurons.

  • Neurons in a CNN only look at a subset of the input and not all inputs (i.e. receptive field), which leads to some notion of sparse connectivity

  • A convolutional layer, in a CNN, is composed of neurons in a 3d dimensional volume (or, more precisely, their activations are organized in a 3d volume), rather than a 1-dimensional one, as in an MLP.

  • CNNs may use subsampling (aka pooling)

",2444,,2444,,10/13/2021 11:24,10/13/2021 11:24,,,,0,,,,CC BY-SA 4.0 25853,1,,,1/19/2021 17:28,,4,78,"

In Chapter 9, section 9.1.6, Raul Rojas describes how committees of networks can reduce the prediction error by training N identical neural networks and averaging the results.

If $f_i$ are the functions approximated by the $N$ neural nets, then:

$$ Q=\left|\frac{1}{N}(1,1, \ldots, 1) \mathbf{E}\right|^{2}=\frac{1}{N^{2}}(1,1, \ldots, 1) \mathbf{E} \mathbf{E}^{\mathrm{T}}(1,1, \ldots, 1)^{\mathrm{T}}\tag{9.4}\label{9.4} $$ is the quadratic error of the average of the networks, where $$ \mathbf{E}=\left(\begin{array}{cccc} e_{1}^{1} & e_{2}^{1} & \cdots & e_{m}^{1} \\ \vdots & \vdots & \ddots & \vdots \\ e_{1}^{N} & e_{2}^{N} & \cdots & e_{m}^{N} \end{array}\right), $$ and $\mathbf{E}$'s rows are the errors of the approximations of the $N$ functions, i.e. $e_{j}^{i} = f_i(\mathbf{x}^{j}) - t_j$, for each of the input-output pairs $\left(\mathbf{x}^{1}, t_{1}\right), \ldots,\left(\mathbf{x}^{m}, t_{m}\right)$ used in training.

Is there a way to assure that the errors for a neural network are uncorrelated to the errors of the others?

Raul Rojas says that the uncorrelation of residual errors is true for a not too large $N$ (i.e. $N < 4$). Why is that?

",14892,,14892,,1/20/2021 17:02,1/20/2021 17:02,When do two identical neural networks have uncorrelated errors?,,0,10,,,,CC BY-SA 4.0 25857,2,,14204,1/19/2021 18:33,,0,,"

In tree-based genetic programming (TGP), you have a tree that represents a program or a function. The nodes in this tree are functions, while the edges represent the interactions between these functions. The leaves of this tree are the inputs (or random numbers) that you pass to this function. The incoming edges into a node represent the inputs, while the outgoing edges represent the output of the associated function.

So, for example, consider the following function

\begin{align} f(x) &= \sin(x^2) \\ &= \sin(y) \tag{1}\label{1}, \end{align} where $y = g(x)$ (note that this is just a change of variable in order to illustrate the corresponding tree more clearly below!).

The function $f$ will be represented by the following tree

sin(y)
  |
 g(x)
  |
  x

So, how do we read this diagram? Essentially, we read it from the bottom-up. So, $x$ is first passed to $y = g(x) = x^2$, which then passed to $\sin(y)$. In this case, given that we are dealing with mathematical operations, you expect $x$ to be a number, because, otherwise, what would $x^2$ or $\sin(x^2)$ mean?

Let's now consider a function of 2 inputs.

\begin{align} f(x, y) &= x + y \tag{2}\label{2}, \end{align}

The corresponding tree would be

  +
 / \
x   y

In this case, you naturally expect that $x$ and $y$ are also numbers. However, this may not be actually the case. Let's say that you're evolving Python functions, then x + y is well defined even if x and y are strings, i.e. that would be a concatenation operation.

So, in this sense, we naturally expected the functions (the nodes in the tree) to get parameters/arguments with the right types, but, in some cases, more types are possible for the apparently same function.
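
For illustration, here is a minimal sketch (my own, in plain Python rather than a GP library) of representing and evaluating such a tree; note that the same + node happily accepts numbers or strings:

import operator

class Node:
    """A function node in the tree; leaves are input names (strings)."""
    def __init__(self, func, children):
        self.func = func
        self.children = children

    def evaluate(self, **inputs):
        args = [c.evaluate(**inputs) if isinstance(c, Node) else inputs[c]
                for c in self.children]
        return self.func(*args)

tree = Node(operator.add, ["x", "y"])   # the tree for f(x, y) = x + y

print(tree.evaluate(x=2, y=3))          # 5
print(tree.evaluate(x="ab", y="cd"))    # 'abcd' (concatenation)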

So, in general, types don't necessarily need to be enforced! It depends on the implementation of TGP.

There's a specific approach to TGP where the idea is really to ensure type-safety, i.e. strongly-typed (tree-based) genetic programming, where functions can have a different number and type for their parameters and return values, but, in that case, only functions that are consistent with the signature of the function can be "connected" with that function in the tree.

In weakly-typed GP, the types are not checked, so you could end up evolving functions of the form $f(x) = \sin(x)$, where $x$ is a string. Of course, when you execute these programs/functions, the program may crash, but this is a different story, which you need to take care of (i.e. you need to ensure that your individuals can be evaluated), if you need this flexibility. This weakly-typed approach can also be viewed as a strongly-typed approach where all functions' parameters and return values have the same type.

DEAP, a well-known Python library for GAs and GP, provides both weakly and strongly-typed GP, so, if you are familiar with Python, you may want to start from it.

To conclude and answer your question more directly, it's not true that in TGP you need all functions to have the same type for all parameters and return values.

",2444,,2444,,1/19/2021 18:38,1/19/2021 18:38,,,,0,,,,CC BY-SA 4.0 25860,1,,,1/19/2021 20:23,,2,293,"

I'm a bit confused about the visualization of the upper bound (following the notation of Sutton & Barto (2018))

$$Q_t(a)+C\sqrt{\frac{\mathrm{ln}(t)}{N_t(a)}}$$

In many blog posts about the UCB(1) algorithm, it is visualized as in the following image (cf. link):

Isn't the upper (confidence) bound simply the upper bound of a one-sided confidence interval instead of a two-sided confidence interval as shown in the image above? A lower bound of the interval is completely useless in this case, or am I wrong?

",21287,,2444,,1/19/2021 22:09,1/31/2021 18:52,"In UCB, is the actual upper bound an upper bound of an one-sided or two-sided confidence interval?",,1,2,,,,CC BY-SA 4.0 25861,1,,,1/19/2021 23:05,,1,71,"

I am trying to understand how batch normalization (BN) works in CNNs. Suppose I have a feature map tensor $T$ of shape $(N, C, H, W)$

where $N$ is the mini-batch size,

$C$ is the number of channels, and

$H,W$ is the spatial dimension of the tensor.

Then it seems there could a few ways of going about this:

Method 1: $T_{n,c,x,y} := \gamma*\frac {T_{n,c,x,y} - \mu_{x,y}} {\sqrt{\sigma^2_{x,y} + \epsilon}} + \beta$, where $\mu_{x,y} = \frac{1}{NC}\sum_{n, c} T_{n,c,x,y}$ is the mean over all channels $c$ and all batch elements $n$ at spatial location $x,y$, and

$\sigma^2_{x,y} = \frac{1}{NC} \sum_{n, c} (T_{n, c,x,y}-\mu_{x,y})^2$ is the variance of the minibatch over all channels $c$ at spatial location $x,y$.

Method 2: $T_{n,c,x,y} := \gamma*\frac {T_{n,c,x,y} - \mu_{c,x,y}} {\sqrt{\sigma^2_{c,x,y} + \epsilon}} + \beta$, where $\mu_{c,x,y} = \frac{1}{N}\sum_{n} T_{n,c,x,y}$ is the mean for a specific channel $c$ at spatial location $x,y$ over the minibatch, and

$\sigma^2_{c,x,y} = \frac{1}{N} \sum_{n} (T_{n, c,x,y}-\mu_{c,x,y})^2$ is the variance of the minibatch for channel $c$ at spatial location $x,y$.

Method 3: For each channel $c$ we compute the mean/variance over the entire spatial values for $x,y$ and apply the formula as

$T_{n, c,x,y} := \gamma*\frac {T_{n, c,x,y} - \mu_{c}} {\sqrt{\sigma^2_{c} + \epsilon}} + \beta$, where now $\mu_c = \frac{1}{NHW} \sum_{n,x,y} T_{n,c,x,y}$ and $\sigma^2_{c} = \frac{1}{NHW} \sum_{n,x,y} (T_{n,c,x,y}-\mu_c)^2$

In practice, which of these methods (if any) is used, and which are correct?

The original paper on batch normalization, https://arxiv.org/pdf/1502.03167.pdf, states on page 5, section 3.2, last paragraph, left side of the page:

For convolutional layers, we additionally want the normalization to obey the convolutional property – so that different elements of the same feature map, at different locations, are normalized in the same way. To achieve this, we jointly normalize all the activations in a minibatch, over all locations. In Alg. 1, we let $\mathcal{B}$ be the set of all values in a feature map across both the elements of a mini-batch and spatial locations – so for a mini-batch of size $m$ and feature maps of size $p \times q$, we use the effective mini-batch of size $m^\prime = \vert \mathcal{B} \vert = m \cdot pq$. We learn a pair of parameters $\gamma^{(k)}$ and $\beta^{(k)}$ per feature map, rather than per activation. Alg. 2 is modified similarly, so that during inference the BN transform applies the same linear transformation to each activation in a given feature map.

I'm not sure what the authors mean by "per feature map"; does this mean per channel?

",30962,,30962,,1/20/2021 4:23,1/20/2021 4:23,Understanding Batch Normalization for CNNs,,0,2,,,,CC BY-SA 4.0 25864,1,,,1/20/2021 7:43,,3,166,"

The pseudocode below is taken from Barto and Sutton's "Reinforcement Learning: an introduction". It shows an actor-critic implementation with eligibility traces. My question is: if I set $\lambda^{\theta}=1$ and replace $\delta$ with the immediate reward $R_t$, do I get a backwards implementation of REINFORCE?

",43952,,2444,,1/20/2021 12:32,1/20/2021 12:32,How to implement REINFORCE with eligibility traces?,,0,3,,,,CC BY-SA 4.0 25871,2,,18084,1/20/2021 12:59,,0,,"

This is how I see it:

The state that a purely reactive agent is reacting to is, in fact, a subset of the set of all possible runs that end with a state. So in theory, E (some state) is a subset of R (finite set of all possible runs with a state as the last element). Standard and purely reactive agents are similar in the sense that the agents' purpose is to commit an action given some condition. It is entirely possible that a standard agent might be dealing with a run which encompasses only one element. Therefore, this would imply that for a discrete event in time, the standard agent's behaviour would be equivalent to a purely reactive agent, right? or am I completely off the rails?

Sorry for not using the required symbols, I'm new here and am getting used to the styling :D

",43966,,,,,1/20/2021 12:59,,,,1,,,,CC BY-SA 4.0 25876,1,,,1/20/2021 17:28,,1,190,"

The code below is adapted from this implementation.

from math import floor

basehash = hash

class IHT:
    "Structure to handle collisions"
    def __init__(self, sizeval):
        self.size = sizeval
        self.overfullCount = 0
        self.dictionary = {}

    def count (self):
        return len(self.dictionary)

    def getindex (self, obj, readonly=False):
        d = self.dictionary
        if obj in d: return d[obj]
        elif readonly: return None
        size = self.size
        count = self.count()
        if count >= size:
            if self.overfullCount==0: print('IHT full, starting to allow collisions')
            self.overfullCount += 1
            return basehash(obj) % self.size
        else:
            d[obj] = count
            return count

def hashcoords(coordinates, m, readonly=False):
    if type(m)==IHT: return m.getindex(tuple(coordinates), readonly)
    if type(m)==int: return basehash(tuple(coordinates)) % m
    if m==None: return coordinates


def tiles(ihtORsize, numtilings, floats, ints=[], readonly=False):
    """returns num-tilings tile indices corresponding to the floats and ints"""
    qfloats = [floor(f*numtilings) for f in floats]
    Tiles = []
    for tiling in range(numtilings):
        tilingX2 = tiling*2
        coords = [tiling]
        b = tiling
        for q in qfloats:
            coords.append( (q + b) // numtilings )
            b += tilingX2
        coords.extend(ints)
        Tiles.append(hashcoords(coords, ihtORsize, readonly))
    return Tiles

if __name__ == '__main__':
    tc=IHT(4096)
    tiles = tiles(tc, 8, [0, 0.5], ints=[], readonly=False)
    print(tiles)

I'm trying to figure out how the function tiles() works. It implements tile coding, which is explained in "Reinforcement Learning: An Introduction" (2020) Sutton and Barto on page 217.

So far I've figured out:

  • qfloats scales each floating-point number by the number of tilings and then takes the largest integer less than or equal to the result, i.e. floor(f * numtilings).

  • Then, for each tiling, a list is created, the first element of the list is the tiling number, followed by the coordinates of the floats for that tiling, i.e. each of the re-scaled numbers is offset by the tiling number, b and integer divided by numtilings.

  • Finally, hashcoords first checks the dictionary d to see if this list has appeared before; if it has, it returns the number associated with that list. If not, it either creates a new entry with that list as the key and the count as the value, or, if the count is greater than or equal to the size, it adds one to overfullCount and returns basehash(obj) % self.size (a tiny example of this dictionary behaviour is sketched below).
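
For instance, here is an illustrative (hypothetical) use of the IHT class above, showing how new coordinate tuples get consecutive indices while repeated tuples map back to the same index:

iht = IHT(4096)
print(iht.getindex((0, 0, 0)))   # first new tuple  -> 0
print(iht.getindex((0, 1, 0)))   # second new tuple -> 1
print(iht.getindex((0, 0, 0)))   # seen before      -> 0 again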

I'm struggling to understand a few parts:

  1. What is the tilingX2 part doing?

  2. Why is tilingX2 added to b after the first coordinate has been calculated? It seems to me that each coordinate should be treated separately

  3. And why by a factor of 2?

  4. What is the expression basehash(obj) % self.size doing? I'm quite new to the concept of hashing. I know that, generally, they create a unique number for a given input (up to a limit), but I'm really struggling to understand what is going on in the line above.

",43975,,2444,,1/21/2021 2:41,1/21/2021 2:41,Can someone explain to me this implementation of Tile Coding using Hash Tables?,,0,3,,,,CC BY-SA 4.0 25877,1,,,1/20/2021 19:00,,1,210,"

I cannot make SAC learn a task in a certain environment. The point is that it actually sometimes finds a very good policy, but it never learns the policy in the end. I am using the SAC implementation from stable-baselines3, which is correct as far as I have seen. I have an environment driven by complex dynamics. I have to control a variable to be in a certain range. Every time the variable goes out of minimum or maximum range the environment is done. The action is continuous (between 0 and 30). My goal is to keep the variable (1D) in the range for as long as possible (millions of steps per episode would be ideal). There are certain characteristics of the environment that may make it particular:

  • The action can only drive the variable to lower values. The variable can go up as a result of the environment dynamics (not controlled) and as a consequence of certain events (not controlled) that occur at random intervals.
  • The observation is a noisy sample of the variable. The observation is just a real number.
  • The effect of actions in the variable is usually delayed. That is, applying an action does not immediately lower the value of the variable.

I have tried SAC with many different hyperparameters. It sometimes finds very good policies, policies that last for thousands and even millions of steps in evaluation or rollout. But it never learns such policies. Even when I save the policy in those cases, it is not able to reproduce a single long episode later. In the attached image, it can be seen that during training (in some evaluations) the policy is able to run for thousands of steps. But then it never learns that. I only show 500K steps here, but I have run tests for 1.5 million training timesteps.

So, my question is (I have several ones actually):

  • Is SAC not suitable for this problem? I have also run TD3 and PPO but without better results and SAC is the only one actually able to find those policies that make very long episodes. Any other algorithm?
  • I have tried several reward functions, and, in the end, a simple one that gives 1 for every step and 0 when done is the one that seems to give better results. In the image, the reward is one for every step and -100 when done.
  • Since the values of the variable are time correlated due to the dynamics, I have also tried with RNN actors (with TF Agents), but results do not improve.
  • I cannot see any relationship between the actor loss and critic loss and the results (maybe that is my problem). The losses seem to be larger when the episodes are longer (which is what I want).

Any advice is highly appreciated. Thanks

",43977,,,,,1/20/2021 19:00,How to make SAC (Soft-Actor-Critic) learn a policy?,,0,0,,,,CC BY-SA 4.0 25878,1,,,1/20/2021 19:32,,0,280,"

I am working on an optimization problem. First, I have done forward training to work the network as a surrogate model, then I freeze the output and I want to find an optimal value of input for a given output.

",43980,,40434,,10/18/2021 12:55,11/12/2022 16:00,How to make input variable as trainable parameter in a neural network?,,1,5,,,,CC BY-SA 4.0 25879,2,,25878,1/20/2021 20:41,,0,,"

It's just a typical optimization problem. You want to optimize the function \begin{equation} f(\theta, x) \end{equation} with fixed parameters $\theta$ (network weights) and optimization variable $x$. Typically, the approach consists of choosing a search direction and a step length. To decide the search direction, you can use the gradient of the objective with respect to the decision variable and have an update of the form \begin{equation} x = x - \alpha \nabla_x f(\theta, x) \end{equation} where $\nabla_x f(\theta, x)$ is the gradient. You can also use Newton or quasi-Newton methods of the form \begin{equation} x = x - \alpha B^{-1}\nabla_x f(\theta, x) \end{equation} where $B$ is the Hessian or an approximate Hessian of $f(\theta, x)$ with respect to $x$. The step length parameter $\alpha$ is usually decided with a line search, or you can also use trust-region approaches, which decide the step direction with a constraint on its length. For more details about optimization, you can consult Numerical Optimization by Nocedal and Wright.
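
As a concrete illustration, here is a minimal PyTorch sketch of this idea (the network architecture, shapes, learning rate, and the use of Adam are arbitrary choices, not part of your setup): the weights are frozen and only the input is treated as a trainable variable.

import torch

# Hypothetical frozen surrogate model; architecture and shapes are placeholders
model = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
for p in model.parameters():
    p.requires_grad_(False)                    # freeze the network weights (theta)

target = torch.tensor([[1.0]])                 # the given (desired) output
x = torch.zeros(1, 3, requires_grad=True)      # the input is the decision variable
optimizer = torch.optim.Adam([x], lr=1e-2)

for _ in range(1000):
    optimizer.zero_grad()
    loss = ((model(x) - target) ** 2).sum()    # objective f(theta, x)
    loss.backward()                            # gradient w.r.t. x only
    optimizer.step()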

",20339,,,,,1/20/2021 20:41,,,,1,,,,CC BY-SA 4.0 25885,2,,25849,1/21/2021 0:35,,0,,"

In the end, this problem turned out to be largely a matter of the gradient descent falling into local minima.

For those reading for posterity, one of the issues in ML that is difficult to work around is that we cannot intuitively choose reasonable initial values for the weights, biases, and kernels (in the CNN). As a result, we typically allow them to initialize randomly. This can present some challenges.

One of the biggest challenges is that when you start from a random starting point, it's difficult to tell someone how to completely replicate your experiments. This isn't terribly important in the end since you can provide them with the saved parameters from your trained model. However, this can also lead to networks that appear to be "bad" that are in fact perfectly fine.

In this case, I had spent much of the time initializing the CNN with a uniform initializer (not present in the code above). I will sometimes use a random seed or some other function to generate initial values so that I can better improve networks through genetic search tools.

It seems that the uniform initializers, combined with the various network iterations and this particular data, led to absolutely abysmal training performance and non-convergence.

When I ran the network as above with random initializations and one or two tweaks, it converged well. Some training iterations will pin one of the sides of the bounding box at the edge, some will never converge, but I've managed to successfully train several that are in the 96-98% accuracy range for the bounding boxes in my test set of 20000, so all is well!

",30426,,,,,1/21/2021 0:35,,,,2,,,,CC BY-SA 4.0 25892,1,,,1/21/2021 2:34,,1,223,"

Dall-E can generate many imaginative images from a description, even some peculiar ones. How did they actually create the dataset to train this AI? There does not seem to be much data of that kind, which pairs weird images with descriptive text, so how did they build such a massive dataset? Does anyone have any idea?

If you have no idea what I am talking about, please refer to this link: https://openai.com/blog/dall-e/.

",42182,,42182,,1/23/2021 3:27,1/23/2021 3:27,What dataset might Elon Musk's Dall-E have used?,,1,6,,,,CC BY-SA 4.0 25893,1,25910,,1/21/2021 2:42,,0,90,"

I have a problem similar to the vehicle routing problem (VRP) that I want to solve with reinforcement learning. In this problem, the agent starts from the point $(x_0, y_0)$, then it needs to travel through $N$ other points, $(x_1, y_1), \dots, (x_n, y_n)$. The goal is to minimize the distance traveled.

Right now, I am modeling a state as a point $(x, y)$. There are 8 possible actions: go east, go north-east, go north, go north-west, go west, go south-west, go south, go south-east. Each action moves by a pace of 100 metres.

After reaching near a destination point, that destination point is removed from the list of destination points.

The reward is the reciprocal of the total distance travelled until all destination points are reached (there's a short optimisation to arrange the remaining points for a better reward).

I'm using a DNN model to keep the policy of a reinforcement learning agent, so this DNN maps a certain state to suitable action.

However, after every action of the agent with a good reward, one more sample is added to the training data, so it's a kind of incremental learning.

Should the policy model be trained again and again with every new sample added in? This does take too much time.

Any better RL approach to the problem above?

",2844,,2444,,1/21/2021 12:16,1/28/2021 2:48,How to train a policy model incrementally to solve a problem similar to the vehicle routing problem?,,1,9,,,,CC BY-SA 4.0 25894,2,,25892,1/21/2021 3:27,,1,,"

DALL·E is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions

so it should presumably be the same kind of data they used to train GPT-3.

",43989,,,,,1/21/2021 3:27,,,,2,,,,CC BY-SA 4.0 25895,2,,25795,1/21/2021 5:06,,1,,"

Alright. Consider an ordinary neural network, so, in the last layer, we have, $z^{[L]} = W^{[L]} a^{[L-1]} + b^{[L]}$, where $a^{[L]} = \sigma(z^{[L]})$, where $\sigma$ is the softmax activation: $$ \sigma(\mathbf z)_{i} = \frac{e^{z_i}}{\sum_k e^{z_k}} $$

I think one of the most effective ways to avoid getting confused by all these matrices with different dimensions is to simply get rid of the matrices and do all calculations componentwise. So, calculating the $ij$ component of the Jacobian matrix, we have: $$ J_{ij} = \frac{\partial\sigma_i}{\partial z_j} = \frac{e^{z_i}}{\sum_k e^{z_k}}\delta_{ij} - \frac{e^{z_i}e^{z_j}}{(\sum_k e^{z_k})^2} $$

Consider a cost $C$, and we wish to calculate the derivative with respect to the weights of the last layer, that is, we are running the first step of backpropagation. Applying chain rule to the $ij$-component of the weight:

$$ \frac{\partial C}{\partial W^{[L]}_{ij}} = \sum_k\frac{\partial C}{\partial a_{k}}\frac{\partial a^{[L]}_{k}}{\partial W^{[L]}_{ij}} $$

And, with one more chain rule: $$ \frac{\partial a^{[L]}_{k}}{\partial W^{[L]}_{ij}} = \sum_s\frac{\partial a^{[L]}_{k}}{\partial z^{[L]}_s}\frac{\partial z^{[L]}_s}{\partial W^{[L]}_{ij}} $$

Now we can identify exactly where the jacobian matrix is: $$ J_{ij} = \frac{\partial\sigma_i}{\partial z_j} = \frac{\partial a^{[L]}_{i}}{\partial z^{[L]}_j} $$

Thus, the back-propagation step is just: $$ \frac{\partial C}{\partial W^{[L]}_{ij}} = \sum_k\sum_s\frac{\partial C}{\partial a_{k}}\frac{\partial a^{[L]}_{k}}{\partial z^{[L]}_s}\frac{\partial z^{[L]}_s}{\partial W^{[L]}_{ij}} = \sum_k\sum_s\frac{\partial C}{\partial a_{k}}J_{ks}\frac{\partial z^{[L]}_s}{\partial W^{[L]}_{ij}} $$

Now, in component language, it is much simpler to identify where all the matrix multiplications are: the transpose of the gradient of the cost function, multiplied by the Jacobian matrix of the softmax activation (or whatever activation), and finally the last derivative, which will evaluate to something depending on the activation of the previous layer.
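
As a quick sanity check, here is a small NumPy sketch (purely illustrative) that builds the softmax Jacobian directly from the formula above:

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())              # numerically stabilised softmax
    return e / e.sum()

def softmax_jacobian(z):
    s = softmax(z)
    # J_ij = s_i * delta_ij - s_i * s_j
    return np.diag(s) - np.outer(s, s)

z = np.array([0.5, -1.0, 2.0])
J = softmax_jacobian(z)
print(J.sum(axis=1))                     # each row sums to 0, since the softmax outputs sum to 1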

=].

",43988,,43988,,1/22/2021 0:06,1/22/2021 0:06,,,,0,,,,CC BY-SA 4.0 25897,2,,25750,1/21/2021 9:56,,0,,"

I think you are looking for PLSA. In PLSA, you can find those topics (categories) either with EM or with NNMF; personally, I recommend NNMF. You can also use LDA, which is the Bayesian version of PLSA.

Here is code for PLSA that uses NNMF: https://github.com/Man-ash/Probabilistic-Latent-semantic-analysis

For the EM method, I coded it myself, but I am not sure if it is right: https://stackoverflow.com/questions/65783880/problem-about-m-step-in-probabilistic-latent-semantic-analysis-with-code

",43999,,,,,1/21/2021 9:56,,,,0,,,,CC BY-SA 4.0 25899,2,,25829,1/21/2021 11:28,,1,,"

but I have been told that neural networks aren't made to predict values in that way, they really are best suited for classification into discrete classes

I don't agree with this statement. I have already trained many CNNs for regression tasks, where a continuous output is learned, and they generally perform very well.

I think the general "advantage" for a classification approach over a regression approach is that there is usually some margin where the output of the NN can still be clipped to a specific class, which might not be the case in regression tasks. Furthermore a lot of the "better classification performance" is due to the fact that the cross-entropy loss of classification tasks always tries to increase probability of one class while reducing all others. But this advantage might not be really important in your case where a specific range of days (e.g 3-5 days) can be probable instead of just one single day (e.g. exactly day 4).

That's why I think you should definitely try the regression approach. For this, just normalize/clip your output layer to the min/max range of days. You should also definitely decrease the learning rate at the end of training; I found this further improves regression predictions and makes them more accurate.

And lastly, if you want to achieve higher performance and more robustness, in your case I highly recommend a Bayesian approach where you average over an ensemble of NNs (this ensemble can be your single NN with different parts turned off via Dropout layers during inference). With this average, you can get an estimate of the uncertainty (the variance over different predictions), which gives you a much better idea of the probability distribution over the different days for a specific input.
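
A minimal sketch of this Monte-Carlo-dropout style averaging, assuming a PyTorch model (the architecture, shapes, and number of passes are arbitrary placeholders):

import torch

# Keep dropout active at inference time and average several stochastic forward passes
model = torch.nn.Sequential(
    torch.nn.Linear(10, 64), torch.nn.ReLU(), torch.nn.Dropout(0.2),
    torch.nn.Linear(64, 1),
)
model.train()                                        # keeps Dropout active during inference

x = torch.randn(1, 10)
preds = torch.stack([model(x) for _ in range(50)])   # 50 stochastic passes
mean = preds.mean(dim=0)                             # point estimate (predicted day)
std = preds.std(dim=0)                               # uncertainty estimate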

",13104,,,,,1/21/2021 11:28,,,,1,,,,CC BY-SA 4.0 25901,2,,25712,1/21/2021 12:20,,1,,"

The more I think about it, the more convinced I am that the visual explanation from the linked lecture is wrong. But the good news is that there are still some ways to get close to the cylinder, not before the activation of the last neuron but afterwards. I haven't done it with sigmoid; for now, I tried with ReLU instead.

We can cut the tower at the very top (thanks to ReLU and a bias). The closer to the top we cut it, the more it will look like a cylinder.

We can control the height of the tower with the weights.

First in 2d:

Unfortunately, the closer we put these towers together, the more they will start to influence each other.

But we can counter that with a negative tower between them.

Now in 3d:

This answer is a work in progress; I will update it when I find out something new.

",43729,,43729,,1/21/2021 12:25,1/21/2021 12:25,,,,0,,,,CC BY-SA 4.0 25903,1,,,1/21/2021 13:40,,4,90,"

I have seeing a variation in importance sampling (IS) in Prioritized Experience Replay (PER) in some implementations regarding the original paper approach stated as (in section 3.4):

$$ w_{i}=\left(\frac{1}{N} \cdot \frac{1}{P(i)}\right)^{\beta} $$

For something like this:

$$ w_{i}=\left(\frac{\min (P(i))}{P(i)}\right)^{\beta} $$

Does anyone know where it comes from? A reference that explains the reason for that new formula and improvements obtained?

My intuition guides me to some conclusions, not necessarily correct, using this new formula:

  • In the beginning, supposing that the PER buffer still has empty positions, $\min(P(i)) \sim 0$, not giving too much weight to samples. But it grows substantially once the capacity is reached, as well as when the error becomes low (plus the incrementing $\beta$).

A code on github that applies this: link
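
For concreteness, this is roughly what such implementations compute (the priorities and $\beta$ below are made-up numbers, purely for illustration):

import numpy as np

priorities = np.array([0.5, 0.1, 0.9, 0.3])
beta = 0.4

probs = priorities / priorities.sum()       # P(i)
weights = (probs.min() / probs) ** beta     # (min P(i) / P(i))^beta, always in (0, 1]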

",42682,,42682,,1/29/2021 16:24,1/29/2021 16:24,Where does this variation of the importance sampling weight come from?,,0,0,,,,CC BY-SA 4.0 25904,1,26055,,1/21/2021 14:40,,3,492,"

In my understanding, DQN is useful because it utilises a neural network as a q-value function approximator, which, after the training, can generalise to unseen states.

I understand how that would work when the input is a vector of continuous values, however, I don't understand why DQN would be used with discrete state-spaces. If the input to the neural network is just an integer with no clear structure, how is this supposed to generalise?

If, instead of feeding to the network just an integer, we fed a vector of integers, in which each element represents a characteristic of the state (separating things like speed, position, etc.) instead of collapsing everything in a single integer, would that generalise better?

",44004,,2444,,1/22/2021 21:21,1/29/2021 0:33,Does DQN generalise to unseen states in the case of discrete state-spaces?,,1,0,,,,CC BY-SA 4.0 25905,1,,,1/21/2021 18:34,,0,19,"

I am new to LSTMs, and I was wondering whether it is possible to have an LSTM layer, then a dense layer, and then an LSTM layer again, and whether that makes sense.
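
To make the idea concrete, this is roughly the kind of stacking I have in mind (a minimal Keras sketch; the layer sizes and input shape are arbitrary):

from tensorflow import keras

model = keras.Sequential([
    keras.layers.LSTM(32, return_sequences=True, input_shape=(None, 8)),
    keras.layers.TimeDistributed(keras.layers.Dense(16, activation="relu")),
    keras.layers.LSTM(32),
    keras.layers.Dense(1),
])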

",44010,,,,,1/21/2021 18:34,Is it possible and if so does it make sense to have dense layers in between LSTM layers?,,0,2,,,,CC BY-SA 4.0 25906,2,,2729,1/21/2021 20:19,,3,,"

One way of verifying whether two boolean expressions are equivalent is to assign all possible values to the variables and compare the results.

A B f1 f2
T T F F
T F F T
F T F T
F F T T

We can see that (F, F, F, T) does not equal (F, T, T, T). For example, for the assignment (A, B) = (T, F), we get the result (f1, f2) = (F, T), meaning f1 $\ne$ f2.
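
A small Python sketch of this brute-force check (the two expressions below are made up, chosen only so that they reproduce the truth tables above):

from itertools import product

f1 = lambda a, b: not (a or b)          # truth table (F, F, F, T)
f2 = lambda a, b: (not a) or (not b)    # truth table (F, T, T, T)

equivalent = all(f1(a, b) == f2(a, b) for a, b in product([True, False], repeat=2))
print(equivalent)                        # False, e.g. they differ at (A, B) = (T, F)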

",21645,,21645,,5/30/2021 9:03,5/30/2021 9:03,,,,0,,,,CC BY-SA 4.0 25907,1,,,1/21/2021 21:17,,1,27,"

I'm working on a project to do sentiment analysis but my data is not long and properly formatted text. It's more likely to be very short sentences, e.g. tweets (in full tweet lingo), quick reviews of maybe 2-5 short sentences, etc.

If my text is of that nature, what approach would you recommend? E.g. CNNs (spaCy has a ready-made text classifier), LSTM (e.g. something like Keras), etc.

What are the pros/cons of your suggested approach (i.e. why is it better suited for classifying short paragraphs/sentences)?

I'm starting out in the area so any links/papers/etc. will be most welcome!

",44015,,2444,,1/22/2021 12:10,1/22/2021 12:10,What is the best approach for sentiment analysis when the text is very brief?,,0,0,,,,CC BY-SA 4.0 25908,1,,,1/22/2021 2:06,,0,23,"

I realize most NLP algorithms have multiple steps. (e.g. OCR/speech rec > syntax > semantics > response logic > semantic output > natural language output)

Is there any NN model that can perform multiple steps in NLP at once? For example, a single network which accepts audio input and returns a semantic analysis of the given speech, or a single network which accepts text input and returns natural language output?

",44017,,,,,1/22/2021 2:06,Is there any neural network model that can perform multiple NLP steps at once?,,0,2,,,,CC BY-SA 4.0 25910,2,,25893,1/22/2021 5:11,,0,,"

I found out about a concept called 'Experience Replay', which trains for a single step every time a new data sample is added, instead of training to max epochs.

That is, instead of this training loop:

for i in range(max_paces):
    # find action for max reward
    # add to trajectory to make inp
    train(batch_size=len(inp), epochs=max_epochs)

Do training this way (for a single-episode ML problem, no incremental):

for i in range(max_epochs):
    # reset environment

    for j in range(max_paces):
        # find action for max reward
        # add to trajectory to make inp
        train(batch_size=len(inp), epochs=1)

Do training this way (for a multi-episode ML problem, incremental data):

for i in range(max_epochs):
    # reset environment
    inc_trajectory = get_random_past_data()

    for j in range(max_paces):
        # find action for max reward
        # add to inc_trajectory to make inp
        train(batch_size=len(inp), epochs=1)

For multiple-episode problems, especially those with an unlimited number of episodes, the training loop needs to forget (i.e. exclude from training) old episodes that are very distant in the past, or to select a random subset of old episodes (treat them as a random batch) for every round of experience replay. In any case, without eliminating some old data, the amount of training data becomes too large, since the number of episodes is unlimited.

",2844,,2844,,1/28/2021 2:48,1/28/2021 2:48,,,,0,,,,CC BY-SA 4.0 25911,1,,,1/22/2021 8:36,,1,125,"

I implemented a Dice loss class in PyTorch:

import torch
import torch.nn as nn


class DiceLoss(nn.Module):
    def __init__(self, weight=None, size_average=True):
        super(DiceLoss, self).__init__()

    def forward(self, inputs, targets, smooth=1):
        smooth = 1.

        input_flat = inputs.contiguous().view(-1)
        target_flat = targets.contiguous().view(-1)

        intersection = (input_flat * target_flat).sum()
        A_sum = torch.sum(input_flat * input_flat)
        B_sum = torch.sum(target_flat * target_flat)

        dsc = (2. * intersection + smooth) / (A_sum + B_sum + smooth)
        return 1 - dsc 

Now I tested it in 2 scenarios:

  1. where inputs is the prediction from the network without applying activation (in my case sigmoid), only convolution with a kernel of size 1.
  2. where inputs are the result of the network including activation of the sigmoid.

Now, I get comparable results between the 2 ways, but I was wondering which of the 2 is the "right" one.

",41293,,2444,,1/23/2021 23:17,1/23/2021 23:17,What does Dice Loss should receive in case of binary segmentation,,0,0,,,,CC BY-SA 4.0 25912,1,,,1/22/2021 9:21,,0,418,"

I've finished implementing alpha-beta pruning and a transposition table in my search tree algorithm, so I decided to implement move ordering next. But once I implemented it, the engine takes much longer to respond than before.

Here's my code so far:

function sortMoves(chess)
{
    const listA = [], listB = chess.moves({ verbose: true });
    const scores = [];

    // calc best moves
    const moves = [...listB];
    for (const move of moves)
    {
        const state = chess.move(move, {
            promotion: 'q'
        });
        scores.push(evaluate(chess.board()));
        chess.undo();
    }
    
    // sort move
    for (var i = 0; i < Math.min(5, moves.length); i++)
    {
        let maxEval = -Infinity;
        let maxIndex = 0;
        
        for (var j = 0; j < scores.length; j++)
        {
            if (scores[j] > maxEval)
            {
                maxEval = scores[j];
                maxIndex = j;
            }
        }
        
        scores[maxIndex] = -Infinity;
        listA.push(moves[maxIndex]);
        listB.splice(maxIndex, 1);
    }
    
    const newList = listA.concat(listB);
    return newList;
}

I expected this to respond quicker than before, but it turns out it doesn't. So my question is: am I actually sorting the moves correctly? Or should I add some code to the alpha-beta pruning related to the sorted moves?

Here's my negamax function:

function negamax(chess, depth, depthOrig, alpha, beta, color)
{
    // transposition table look up
    const alphaOrig = alpha;
    const hashKey = zobrist.hash(chess);
    const lookup = transposition.get(hashKey);
    if (lookup)
    {
        if (lookup.depth >= depth)
        {
            if (lookup.flag === EXACT)
                return lookup.score;
            else if (lookup.flag === LOWERBOUND)
                alpha = Math.max(alpha, lookup.score);
            else if (lookup.flag === UPPERBOUND)
                beta = Math.min(beta, lookup.score);
            
            if (alpha >= beta)
                return lookup.score;
        }
    }


    if (depth === 0 || chess.game_over())
    {
        // if current turn is checkmated,
        // remove the king on the board
        // so the AI knows if the move
        // will lead to checkmate or not, if
        // it's remove on the board, 
        // the checkmated team will
        // reduce the king's value leading
        // the AI to move in checkmate
        const kingPos = getPiecePos(chess, 'k', chess.turn());
        if (chess.in_checkmate())
            chess.remove(kingPos);
        
        const evaluation = evaluate(chess.board());
        chess.put({ type: chess.KING, color: chess.turn() }, kingPos);

        return color * evaluation;
    }

    
    /* let moves = chess.moves();
    if (lookup)
    {
        console.log(moves, depth)
        const bestMove = lookup.move;
        const moveIndex = moves.indexOf(bestMove);
        const arr = moves.splice(moveIndex, 1);
        moves = arr.concat(moves);
        console.log(moves, depth)
    } */
    const moves = sortMoves(chess);
    /* const moves = chess.moves({ verbose: true }); */


    let count = 0;
    let score = -Infinity;
    let bestMove = null;
    if (lookup)
        bestMove = lookup.move;

    
    for (const move of moves)
    {
        const state = chess.move(move, {
            promotion: 'q'
        });
        searchedMoves++;
        if (count === 0)
            score = -negamax(chess, depth-1, depthOrig, -beta, -alpha, -color);
        else
        {
            score = -negamax(chess, depth-1, depthOrig, -alpha-1, -alpha, -color);
            if (alpha < score < beta)
                score = -negamax(chess, depth-1, depthOrig, -beta, -score, -color);
        }
        chess.undo();

        if (score > alpha)
        {
            alpha = score;
            bestMove = move;
        }

        // do I add something on this part?
        count++;
        if (alpha >= beta)
            break;
    }

    
    // transposition table store
    const key = zobrist.hash(chess);
    keyArr.push(key);
    const entry = new Transposition();
    entry.score = score;
    entry.depth = depth;
    entry.move = bestMove;
    if (score <= alphaOrig)
        entry.flag = UPPERBOUND;
    else if (score >= beta)
        entry.flag = LOWERBOUND;
    else
        entry.flag = EXACT;
    transposition.set(key, entry);


    return alpha;
}
",44021,,,,,1/22/2021 9:21,How to implement very simple move-ordering for alpha-beta pruning,,0,3,,,,CC BY-SA 4.0 25913,1,25916,,1/22/2021 9:41,,16,8037,"

Q-learning uses a table to store the values of all state-action pairs. Q-learning is a model-free RL algorithm, so how can there be one called Deep Q-learning, given that "deep" means using a DNN? Or maybe the state-action table (Q-table) is still there, and the DNN is only used for input reception (e.g. turning images into vectors)?

Deep Q-network seems to be only the DNN part of a Deep Q-learning program, and Q-network seems to be short for Deep Q-network.

Q-learning, Deep Q-learning, and Deep Q-network: what are the differences? Could someone provide a comparison table of these 3 terms?

",2844,,2444,,1/22/2021 21:31,6/21/2021 10:32,"What is the difference between Q-learning, Deep Q-learning and Deep Q-network?",,2,0,,,,CC BY-SA 4.0 25914,1,,,1/22/2021 10:04,,4,263,"

I am taking a course about using matrix factorization for machine learning.

The first thing that came to my mind is that, by using matrix factorization, we are always limited to linear relationships between the data, which is very limiting when predicting complex patterns.

This is in contrast with neural networks, where we can use non-linear activation functions. It seems to me that all the tasks that matrix factorization can achieve would score better using a simple multilayer neural network.

So, can I conclude that NMF, and matrix factorization for machine learning in general, are not that practical, or are there cases where it's better to use NMF?

",44022,,2444,,1/22/2021 21:07,1/6/2022 11:20,Is non-negative matrix factorization for machine learning obsolete?,,1,3,,,,CC BY-SA 4.0 25916,2,,25913,1/22/2021 10:25,,7,,"

In Q-learning (and in general value based reinforcement learning) we are typically interested in learning a Q-function, $Q(s, a)$. This is defined as $$Q(s, a) = \mathbb{E}_\pi\left[ G_t | S_t = s, A_t = a \right]\;.$$

For tabular Q-learning, where you have a finite state and action space, you can maintain a lookup table that holds your current estimate of the Q-value. Note that, in practice, even the spaces being finite might not be enough to avoid DQN: if, e.g., your state space contains a very large number of states, say $10^{10000}$, then it might not be manageable to maintain a separate Q-value estimate for each state-action pair.

When you have an infinite state space (and/or action space) then it becomes impossible to use a table, and so you need to use function approximation to generalise across states. This is typically done using a deep neural network due to their expressive power. As a technical aside, the Q-networks don't usually take state and action as input, but take in a representation of the state (e.g. a $d$-dimensional vector, or an image) and output a real valued vector of size $|\mathcal{A}|$, where $\mathcal{A}$ is the action space.
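
As a small illustration of that last point, a Q-network for a $d$-dimensional state representation and a finite action space might look like this minimal PyTorch sketch (the sizes are placeholders):

import torch

d, num_actions = 4, 2
q_net = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, num_actions),
)

state = torch.randn(1, d)             # a d-dimensional state representation
q_values = q_net(state)               # one Q-value per action, shape (1, num_actions)
greedy_action = q_values.argmax(dim=1)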

Now, it seems from your question that you're confused as to why we use a model (the neural network) when Q-learning is, as you rightly say, model-free. The answer here is that when we talk about reinforcement learning algorithms being model-free, we are not talking about how their value functions or policies are parameterised; we are actually talking about whether the algorithms use a model of the transition dynamics to help with their learning. That is, a model-free algorithm doesn't use any knowledge about $p(s' | s, a)$, whereas model-based methods use this transition function (either because it is known exactly, such as in Atari environments, or because it must be approximated) to perform planning with the dynamics.

",36821,,36821,,6/21/2021 10:32,6/21/2021 10:32,,,,2,,,,CC BY-SA 4.0 25917,2,,13317,1/22/2021 12:25,,5,,"

"Modern" Guarantees for Feed-Forward Neural Networks

My answer will complement nbro's above, which gave a very nice overview of universal approximation theorems for different types of commonly used architectures, by focusing on recent developments specifically for feed-forward networks. I'll try an emphasis depth over breadth (sometimes called width) as much as possible. Enjoy!


Part 1: Universal Approximation

Here I've listed a few recent universal approximation results that come to mind. Remember, universal approximation asks if feed-forward networks (or some other architecture type) can approximate any (in this case continuous) function to arbitrary accuracy (I'll focus on the "uniformly on compacts" sense).

Let me mention that there are two types of guarantees: quantitative ones and qualitative ones. The latter are akin to Hornik's results (Neural Networks - 1989), which simply state that some neural networks can approximate a given (continuous) function to arbitrary precision. The former type of guarantee quantifies the number of parameters required for a neural network to actually perform the approximation and is akin to the breakthrough results of Barron's (now) classical paper (IEEE - 1993).

  1. Shallow Case: If you want quantitative results only for shallow networks, then J. Siegel and J. Xu (Neural Networks - 2020) will do the trick (note: the authors deal with the Sobolev case, but you get the continuous case immediately via the Sobolev-Morrey embedding theorem).

  2. Deep (not narrow) ReLU Case: If you want a quantitative proof for deep networks (but not too narrow) with ReLU activation function then Dimity Yarotsky's result (COLT - 2018) will do the trick!

  3. Deep and Narrow: To the best of my knowledge, the first quantitative proof for deep and narrow neural networks with general input and output spaces has recently appeared here: https://arxiv.org/abs/2101.05390 (preprint - 2021).
    The article is a constructive version of P. Kidger and T. Lyon's recent deep and narrow universal approximation theorem (COLT - 2020) (qualitative) for functions from $\mathbb{R}^p$ to $\mathbb{R}^m$ and A. Kratsios and E. Bilokpytov's recent Non-Euclidean Universal Approximation Theorem (NeurIPS - 2020).


Part 2: Memory Capacity

A related concept is that of "memory capacity of a deep neural network".
These results seek to quantify the number of parameters needed for a deep network to learn (exactly) the assignment of some input data $\{x_n\}_{n=1}^N$ to some output data $\{y_n\}_{n=1}^N$. For example, you may want to take a look here:

  1. Memory Capacity of Deep ReLU networks: R. Vershynin's very recent publication Memory Capacity of Neural Networks with Threshold and Rectified Linear Unit Activations - (SIAM's SIMODS 2020)
",31649,,31649,,1/22/2021 15:17,1/22/2021 15:17,,,,4,,,,CC BY-SA 4.0 25918,2,,25913,1/22/2021 12:45,,12,,"

Here is a table that attempts to systematically show the differences between tabular Q-learning (TQL), deep Q-learning (DQL), and deep Q-network (DQN).

| | Tabular Q-learning (TQL) | Deep Q-learning (DQL) | Deep Q-network (DQN) |
|---|---|---|---|
| Is it an RL algorithm? | Yes | Yes | No (unless you use DQN to refer to DQL, which is done often!) |
| Does it use neural networks? | No. It uses a table. | Yes | No. DQN is the neural network. |
| Is it a model? | No | No | Yes (but usually not in the RL sense) |
| Can it deal with continuous state spaces? | No (unless you discretize them) | Yes | Yes (in the sense that it can get real-valued inputs for the states) |
| Can it deal with continuous action spaces? | Yes (but maybe not a good idea) | Yes (but maybe not a good idea) | Yes (but only in the sense that it can produce real-valued outputs for actions) |
| Does it converge? | Yes | Not necessarily | Not necessarily |
| Is it an online learning algorithm? | Yes | No, if you use experience replay | No, but it can be used in an online learning setting |
",2444,,2444,,1/22/2021 21:50,1/22/2021 21:50,,,,1,,,,CC BY-SA 4.0 25919,1,,,1/22/2021 12:58,,1,26,"

I am trying to design a good heuristic to solve a constraint satisfaction problem (CSP). I think that a possible heuristic to use is

$$h_1(\text{state}) = \text{number of conflicts in state}$$

However, of the possible solutions to the CSP, some have a lower cost (are better). So I can't just use $h_1$ as my heuristic.

Since the state space is pretty huge, I want to use local search with a heuristic $h$ that guides my variable assignments towards a low-cost solution while reducing the conflicts. The way I am thinking about going about this is: of the variable assignments which do not cause conflicts (i.e. are valid), apply $h$ to them and pick the variable assignment which has the lowest/best $h$ value. So $h$ would not handle conflicts; I would make sure any assignments considered by $h$ are guaranteed to be valid.

Ideally, though, I want $h$ to both drive down the conflicts to 0 and simultaneously guide the assignments to the lowest cost solution. Is this generally possible?

",44028,,2444,,1/22/2021 16:37,1/22/2021 16:37,CSP heuristic to simultaneously reduce conflicts and find near optimal assignment,,0,0,,,,CC BY-SA 4.0 25922,1,,,1/22/2021 14:38,,2,29,"

I'm reading the following paper in which the author seems to do 2 things interesting:

  1. The hidden-to-hidden weight matrix of the RNN is decomposed via SVD and the factors are trained separately.
  2. Each orthogonal part of the decomposition is optimized multiplicatively according to Cayley Transformation to maintain its orthogonal properties.

Now, I'm not so strong with the math behind the technique, but I could be hand-waving and say that albeit being multiplicative, it is just another method of gradient descent, and each orthogonal part is still minimizing the Loss function. So far so good.

But what they are actually doing is splitting the original optimization problem into multiple sub-optimizations (2 for the orthogonal matrices and n for the number of singular values), and then multiplying the results together. How can we be sure about the convergence and the optimality of such a method? Or is this a case where we can say nothing and let the experiments speak for themselves?

",41781,,,,,1/22/2021 14:38,Is it possible to ensure the convergence when training a RNN weight on its SVD decomposition?,,0,0,,,,CC BY-SA 4.0 25923,1,,,1/22/2021 15:17,,2,28,"

I've come across a few binary classification problems lately where the labelling was challenging even for an expert. I'm wondering what I should do with this. Here are some of my suggestions to get the ball rolling:

  1. Make a third category called "unsure" then make it a three-class classification problem instead.
  2. Make a third category called "unsure" and just remove these from your training set.
  3. Make a third category called "unsure" and during training model this as a 0.5 such that the binary cross entropy loss looks like $-0.5\log(\hat{y})-0.5\log(1-\hat{y})$
  4. Allow the labeller to pick a percentage on a sliding scale (or maybe a multiple choice: 0%, 25%, 50%, 75%, 100%), and take that into account when calculating the cross-entropy (as in my point above; a small sketch of this is given after this list).
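
For options 3 and 4, this is roughly what the loss computation would look like (a minimal PyTorch sketch; the logit and label values are made up):

import torch

logits = torch.tensor([0.3])          # raw model output
soft_label = torch.tensor([0.5])      # labeller's confidence that the class is 1
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, soft_label)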

I recently saw a paper which goes for option 2, although that's not enough to convince me. Here's the relevant quote:

In case of a high-level risk, collision is imminent and the driver must react in less than 0.5 s (TTC < 0.5s). For low-level risk, the TTC is more than 2.0 s (TTC > 2.0s). Videos that show intermediate-level risk (0.5 s ≤ TTC ≤ 2.0 s), which is a mixture of high- and low-level risks, were not included in the NIDB because when training a convnet, it must be possible to make a clear visual distinction of risk.

",16871,,,,,1/22/2021 15:17,"For binary classification learning problems, how should I label instances where I'm only 60% sure?",,0,2,,,,CC BY-SA 4.0 25926,1,25927,,1/22/2021 17:24,,3,176,"

We all know that Genetic Algorithms can give an optimal or near-optimal solution. So, in some problems, like NP-hard ones, with a trade-off between time and optimality, a near-optimal solution is good enough.

Since there is no guarantee to find the optimal solution, is GA considered to be a good choice for solving the Knuth problem?

According to Artificial intelligence: A modern approach (third edition), section 3.2 (p. 73):

Knuth conjectured that, starting with the number 4, a sequence of factorial, square root, and floor operations will reach any desired positive integer.

For example, 5 can be reached from 4:

floor(sqrt(sqrt(sqrt(sqrt(sqrt((4!)!))))))

So, suppose we have a number (5) and we want to know which sequence of the 3 mentioned operations reaches the given number. Each gene of the chromosome would be a number that represents a certain operation, with an additional number for "no operation", and the fitness function (to be minimized) would be the absolute difference between the given number and the number we get from applying the operations of the chromosome in order. Now, suppose the allotted number of iterations (generations) is exhausted with no optimal solution found, and the nearest number we have is 4 (with fitness 1). The problem is that we can get 4 by applying no operation to 4, while for 5 we need many operations, so the near-optimal solution is not even near to the actual solution.
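
Concretely, here is a rough Python sketch of the encoding and fitness I have in mind (the operation codes are arbitrary):

import math

# 0 = no-op, 1 = factorial, 2 = sqrt, 3 = floor
OPS = {0: lambda x: x, 1: lambda x: math.factorial(int(x)), 2: math.sqrt, 3: math.floor}

def fitness(chromosome, target=5, start=4):
    value = start
    for gene in chromosome:
        value = OPS[gene](value)
    return abs(target - value)

print(fitness([1, 1, 2, 2, 2, 2, 2, 3]))   # the example above, floor(sqrt^5((4!)!)): fitness 0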

So, is GA is not suitable for this kind of problems? Or the suggested chromosome representation and fitness function are not good enough?

",42300,,2444,,1/23/2021 0:35,1/23/2021 4:53,Are Genetic Algorithms suitable for problems like the Knuth problem?,,1,0,,,,CC BY-SA 4.0 25927,2,,25926,1/22/2021 17:48,,1,,"

Before trying to answer your question more directly, let me clarify something.

People often use the term genetic algorithms (GAs), but, in many cases, what they really mean is evolutionary algorithms (EAs), which is a collection of population-based (i.e. multiple solutions are maintained at the same time) optimization algorithms and approaches that are inspired by Darwinism and survival of the fittest. GAs is one of these approaches, where the chromosomes are binary and you have both the mutation and cross-over operation. There are other approaches, such as evolution strategies or genetic programming.

As you also noticed, EAs are meta-heuristics, and, although there is some research on their convergence properties [1], in practice, they may not converge. However, when any other potential approach has failed, EAs can be definitely useful.

In your case, the problem is really to find a closed-form (or analytical) expression of a function, which is composed of other smaller functions. This really is what genetic programming (in particular, tree-based GP) was created for. In fact, the Knuth problem is a particular instance of the symbolic regression problem, which is a typical problem that GP is applied to. So, GP is probably the first approach you should try.

Meanwhile, I have implemented a simple program in DEAP that tries to solve the Knuth problem. Check it here. The fitness of the best solution that it has found so far (with some seed) is 4 and the solution is floor(sqrt(float(sqrt(4)))) (here float just converts the input to a floating-point number, to ensure type safety). I used the difference as the fitness function and ran the GP algorithm for 100 generations with 100 individuals for each generation (which is not a lot!). I didn't tweak much the hyper-parameters, so, maybe, with the right seed and hyper-parameters, you can find the right solution.

To address your concerns: in principle, you could use that encoding, but, as you note, the GA could indeed return $4$ as the best solution (which isn't actually that far away from $5$). You could avoid this by killing, at every generation, any individuals that have just that value.

I didn't spend too much time on my implementation and thinking about this problem, but, as I said above, even with genetic programming and using only Knuth's operations, it could get stuck in local optima. You could try to augment my (or your) implementation with other operations, such as the multiplication and addition, and see if something improves.

",2444,,2444,,1/23/2021 4:53,1/23/2021 4:53,,,,2,,,,CC BY-SA 4.0 25934,1,,,1/23/2021 8:45,,1,77,"

Most of the tutorials only teach us to split the whole dataset into three parts: training set, develop set, and test set. But in the industry, we are kind of doing test-driven development, and what comes most important is the building of our test set. We are not given a large corpus first, but we are discussing how to build the test set first.

The most resource-saving method is just sampling(simple random sampling) cases from the log and then having them labeled, which represents the population. Perhaps we are concerning that some groups should be more important than others, then we do stratified sampling.

Are there any better sampling methods?

What to do when we are releasing a new feature and we cannot find any cases of that feature from the user log?

",5351,,,,,1/23/2021 22:18,How to build a test set for a model in industry?,,1,0,,,,CC BY-SA 4.0 25936,2,,1285,1/23/2021 10:34,,1,,"

Have a look at the paper Blockchain-Based Federated Learning in Medicine (2020), where the blockchain is used as a "federation" server for improving the parameters of local neural networks.

",43582,,2444,,10/13/2021 10:11,10/13/2021 10:11,,,,0,,,,CC BY-SA 4.0 25937,1,25938,,1/23/2021 11:24,,2,83,"

I understand what cross-correlation does given a kernel and an input image, but the formula confuses me a little. Given here in Goodfellow's Deep Learning (page 329), I can't quite understand what $m$ and $n$ are. Are they the dimensions of the kernel along the height and width dimensions?

$$S(i,j) =(K*I)(i,j) = \sum_m \sum_n I(i+m, j+n)K(m,n)$$

So, for the input image $I$ and kernel $K$, we take the sum product of $I*K$, but what do the $m$ and $n$ represent? How is the input image $I$ indexed?

",42966,,2444,,1/23/2021 13:47,1/23/2021 13:47,What do the variables in the cross-correlation formula mean?,,1,0,,,,CC BY-SA 4.0 25938,2,,25937,1/23/2021 13:22,,2,,"

It takes a little bit of time to fully understand the 2D convolution/cross-correlation and to relate it to the usual diagrams of the convolution operation, so, before addressing your questions, let me first try to break the definition of the 2D cross-correlation down, from the left to right.

$$S(i,j) =(K*I)(i,j) = \sum_m \sum_n I(i+m, j+n)K(m,n) \label{1}\tag{1}$$

  1. $S$ is the function that is the cross-correlation of the functions $K$ and $I$, so $S(i, j)$ is the cross-correlation of $K$ and $I$ at the pixels $i$ and $j$

  2. The symbol $*$ in $K*I$ is the cross-correlation/convolution symbol, but sometimes the cross-correlation/convolution is also denoted as $\circledast$

  3. $K*I$ is the function that results from the cross-correlation of $K$ and $I$, i.e. $S$, so $(K*I)(i,j)$ is the value of the function $S$ (the cross-correlation of $K$ and $I$) at the inputs (or pixels) $i$ and $j$. In other words, $(K*I)(i,j)$ is just another way of writing $S(i,j)$ that emphasizes that we took the cross-correlation of $K$ and $I$, but they are exactly the same thing.

  4. The double summation $\sum_m \sum_n$ is because we are computing the 2D cross-correlation, i.e. over the $x$ and $y$ dimensions of the image and kernel. This is just the definition of the 2D cross-correlation.

  5. The $m$ and $n$ are the indices of the summations, one across the $x$-axis and the other across the $y$-axis. Let $m = x$ and $n = y$, so we can rewrite equation \ref{1} as follows. $$S(i, j) =(K*I)(i, j) = \sum_x \sum_y I(i + x, j + y)K(x, y) \label{2}\tag{2}$$ Now, it should be clearer that we are indexing across the $x$ and $y$ dimensions.

  6. Now, let me further restrict the range of the summations. Let's say from $x=-1$ to $x=1$ and from $y=-1$ to $y=1$, then we can rewrite equation \ref{2} as follows $$S(i, j) =(K*I)(i, j) = \sum_{x=-1}^1 \sum_{y=-1}^1 I(i + x, j + y)K(x, y) \label{3}\tag{3}$$.

  7. Why do I want to do this? Let me explain why. Consider now the following kernel $K$ (which happens to be a Gaussian kernel) $$ K = \begin{bmatrix}\ \ \color{blue}{\frac {1}{16}} &\ \ \frac {1}{8} &\ \ \frac {1}{16} \\\ \ \frac {1}{8} &\ \ \frac {1}{4} &\ \ \color{red}{ \frac {1}{8}} \\\ \ \frac {1}{16} &\ \ \frac {1}{8} &\ \ \frac {1}{16}\end{bmatrix} $$ Note that this is the output of the function $K$ or, more precisely, its support. Let's assume that $\frac {1}{4}$ is at the index/pixel $(0, 0)$. Then, for example, the top-left $ \color{blue}{\frac {1}{16}} $ is at index/pixel $(-1, -1)$ and the middle-right $\color{red}{ \frac {1}{8}}$ at pixel $(1, 0)$

  8. Now, consider any image $I$ represented as a 2D matrix (i.e. its support), with, for example, dimensions $U \times V$. For concreteness, let $U = 5$ and $V=5$. Moreover, the middle pixel of the image is at index $(0, 0)$, as for the kernel. Let's say the image is the following $$ I = \begin{bmatrix} 0 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 1 \\ 0 & 0 & \color{green}{0} & 1 & 1 \\ 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 1 & 1 \\ \end{bmatrix} $$ So, $\color{green}{0}$ is at index/pixel $(0,0)$.

  9. Now, let's say that $i = 0$ and $j=0$ in equation \ref{3}. This means that we compute the cross-correlation between $I$ and $K$ at the index $(0, 0)$, where, in the case of the image $I$, the value is $\color{green}{0}$.

  10. With these settings, it should be clearer now that equation \ref{3} means that the cross-correlation between $K$ and $I$ at index/pixel $i=0$ and $j=0$ is a 2D dot (or scalar) product. If it is not clear, then let be take the $3\times 3$ submatrix of $I$ centered at $(i, j) = (0, 0)$ (below, $(0, 0)^{3 \times 3}$ is just the notation that I came up with to indicate that). $$ I_{(0, 0)^{3 \times 3}} = \begin{bmatrix} 1 & 0 & 1 \\ 0 & \color{green}{0} & 1 \\ 1 & 0 & 0 \end{bmatrix} $$ Then the cross-correlation in equation \ref{3} is just $$ S(i, j) = S(0, 0) = (K * I)(0, 0) = \sum \begin{bmatrix} 1 \frac {1}{16} & 0 \frac {1}{8} & 1 \frac {1}{16} \\ 0 \frac {1}{8} & \color{green}{0} \frac {1}{4} & 1 \frac {1}{8} \\ 1 \frac {1}{16} & 0 \frac {1}{8} & 0 \frac {1}{16} \end{bmatrix}, $$ where $\sum$ is a sum across all elements, i.e. \begin{align} S(0, 0) &= 1 \frac {1}{16} + 0 \frac {1}{8} + 1 \frac {1}{16} + 0 \frac {1}{8} + \color{green}{0} \frac {1}{4} + 1 \frac {1}{8} + 1 \frac {1}{16} + 0 \frac {1}{8} + 0 \frac {1}{16} \\ & = \frac {1}{16} + \frac {1}{16} + \frac {1}{8} + \frac {1}{16} \\ & = \frac {3}{16} + \frac {1}{8} \\ &= \frac {5}{16} \end{align}
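
As a quick sanity check of the computation in step 10, here is a small NumPy snippet that reproduces the worked example (it just evaluates $(K*I)(0,0)$ for the kernel and image above):

import numpy as np

K = np.array([[1/16, 1/8, 1/16],
              [1/8,  1/4, 1/8 ],
              [1/16, 1/8, 1/16]])
I = np.array([[0, 1, 0, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 0, 1, 1],
              [1, 1, 0, 0, 0],
              [0, 1, 0, 1, 1]])

patch = I[1:4, 1:4]            # 3x3 neighbourhood centred on the middle pixel (i, j) = (0, 0)
S_00 = (patch * K).sum()
print(S_00, 5/16)              # both are 0.3125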

Now, to answer your questions/concerns more directly.

Are they the dimensions of the kernel along the height and width dimensions?

No. They are the indices that determine the neighbourhood around $i$ and $j$ where you want to compute the cross-correlation.

So, for the input image $I$ and kernel $K$, we take the sum product of $I*K$

In this case, the symbol $*$ does not denote the product, but the cross-correlation. Maybe by "sum product" you meant the cross-correlation (or 2D dot product), but I'm not sure.

How is the input image $I$ indexed?

The input image is indexed with $m$ and $n$, but you start at $i$ and $j$, that's why you use the actual indices $i+m$ and $j+n$.

",2444,,2444,,1/23/2021 13:36,1/23/2021 13:36,,,,0,,,,CC BY-SA 4.0 25939,1,34462,,1/23/2021 13:57,,0,325,"

I have a problem with applying AlphaZero self-play to a game (Connect 6) with a huge branching factor (30,000 on average).

I have implemented the MCTS as described, but I found that, during the MCTS simulations for the first move, because P(s, a) is so much smaller than Q(s, a), the MCTS search tree is extremely narrow (in fact, it only had 1 branch per level; later I added Dirichlet noise, which changed the tree to have 2 child branches instead of 1 per level).

With some debugging, I figured out that, after the first simulation, the last visited child would have a Q value which is on average 0.5, and all the rest of the children would have 0 for their Q values (because they haven't been explored), while their U values are way smaller, about 1/60000 on average. So comparing all the Q + U values on the 2nd and subsequent simulations would result in selecting the same node over and over again, building this narrow tree.

To make matters worse, because the first simulation built a very narrow tree, and we don't clear the statistics in the subsequent simulations for the next move, the end result is that the simulations for the first move dictated the whole self-play for the next X moves or so (where X is the number of simulations per move). This is because the N values on this path are accumulated from the previous simulations of the prior moves. Imagine I run 800 simulations per move, but the N value on the only child inherited from previous simulations is already > 800 when I start the simulations for the 3rd move in the game.

This is a basically related to question 2 raised in this thread:

AlphaGo Zero: does $Q(s_t, a)$ dominate $U(s_t, a)$ in difficult game states?

I don't think that answer addressed the problem I am having here. Surely we are not comparing Q and U directly, but when Q dominates U we end up building a narrow and deep tree, and the accumulated N values are concentrated on this single path, preventing the self-play from exploring meaningful moves.

At the end of the game, these N values on the root nodes of the moves played end up training the policy network, reinforcing them even more in the next episode of training.

What am I doing wrong here? Or is the AlphaZero algorithm not well suited for games with a branching factor of this order?

Note: as an experiment, I have tried using a variable $c_{puct}$ value, which is proportional to the number of legal moves in each game state. However, this doesn't seem to help either.

",44052,,44052,,1/24/2021 3:14,2/8/2022 13:34,"Alpha Zero does not converge for Connect 6, a game with huge branching factor - why?",,1,3,,,,CC BY-SA 4.0 25942,2,,20706,1/23/2021 18:18,,1,,"

Traditional CNNs used for image classification (and related tasks) are composed of 1 or more fully connected layers (FCs), after the convolutional and pooling layers, which take as input the features extracted from the convolutional and pooling layers, in order to perform classification or regression.

One problem with FCs in CNNs is that the number of parameters can be very big, with respect to the number of parameters in the convolutional layers.

There are tasks, such as image segmentation, where this big number of parameters is not really needed. An example of a neural network that does not make use of fully connected layers but only uses convolutions, downsampling (aka pooling), and upsampling operations is the U-net, which is used for image segmentation. A neural network that only uses convolutions is known as a fully convolutional network (FCN). Here I give a detailed description of FCNs and $1 \times 1$, which should also answer your question.

In any case, to answer your question more directly, $1 \times 1$ convolutions have been used for image segmentation tasks, i.e. dense classification tasks, i.e. tasks where you want to assign a label to each pixel (or a group of pixels), as opposed to sparse classification tasks such as image classification (where the goal is to assign 1 label to the whole image). Moreover, in comparison with FC layers, they have fewer parameters and, more importantly, the number of parameters in an FCN does not depend on the dimensions of the images (as in the case of traditional CNNs), which is a good thing (especially, when your images have high resolutions), but typically it depends on the number of kernels and instances (of objects), in the case of instance segmentation.

The FCN paper discusses this reduction of the number of parameters (and computation time), so you should probably read this paper for more details.

",2444,,2444,,1/23/2021 19:44,1/23/2021 19:44,,,,0,,,,CC BY-SA 4.0 25943,2,,25934,1/23/2021 22:18,,1,,"

I am not sure whether that solves your problem at hand, but one approach you could look into is k-fold Cross Validation (CV). In this approach, you split your combined train, development, and test data into $k$ randomized and equally-sized partitions. Afterwards, you train and evaluate your model $k$ times. In the $i^{th}$ iteration, you train your model on all but the $i^{th}$ partition. After training on the $k-1$ partitions is done, you evaluate your model on the $i^{th}$ partition. You repeat this process for all $i \in \{1, 2, \dots, k\}$. To be clear, you keep your initially randomized partitions fixed during k-fold CV. Then, you take the average performance over all $k$ performed test runs to assess the quality of your model based on the averaged test performance. Afterwards, you could train your model on all train, development, and test data and deliver the resulting model as your final one.

In the most extreme case, you would perform Leave-One-Out-CV, where you set $k$ equal to the number of all the data points at your disposal. That is the most expensive approach, but yields the most accurate performance estimates. For more information, see this website.
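
As an illustration, here is a minimal sketch of k-fold CV with scikit-learn (the dataset and the model are placeholders):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])                 # train on the other k-1 partitions
    scores.append(model.score(X[test_idx], y[test_idx]))  # evaluate on the held-out partition

print("mean CV accuracy:", np.mean(scores))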

Generally, using that approach, you don't waste any data by reserving it exclusively for development/testing. Also, it might be important to mention that this approach is compatible with other sampling techniques as well. For example, in the $i^{th}$ iteration of your CV algorithm, you could apply stratified sampling to your $k-1$ partitions used for training during that ($i^{th}$) iteration.

I am not entirely sure whether I get the second part of your question right.

If it is about how to later introduce the new feature to a given model, I would say the following. When it comes to introducing new features, I think you are pretty much out of luck with respect to recycling your old model. Of course, (assuming that the introduction of new features to the existing model is technically possible) there might be types of models which allow continual learning under certain circumstances, but in the worst case that might cause catastrophic forgetting since you change the distribution of your underlying training data when adding new features, which not all models might be able to deal with. An example of this case is when you add more diverse training images for a given Convolutional Neural Net (CNN), which the CNN then has to learn to map to an already existing set of classes. In other cases, introducing new features might not even be technically possible if their introduction would require adding new input (or output) nodes to an existing model.

However, when your second part of the question asks for how to fill gaps in your older data, caused by missing data, there are different strategies you could try for imputing the missing data, some of which are briefly mentioned here.

",37982,,,,,1/23/2021 22:18,,,,5,,,,CC BY-SA 4.0 25946,2,,24613,1/24/2021 0:39,,0,,"

In fact, I think that the formula can be used as it is for multi-state problems.

However, the formula probably overlaps with adjusting the reward bias because it considers the bias of the true expected value for a particular situation.

Rather, this makes learning unstable, so I think it is not used.

",43246,,,,,1/24/2021 0:39,,,,0,,,,CC BY-SA 4.0 25947,1,,,1/24/2021 4:28,,2,399,"

I am learning to program neural networks, and I would like to know how I can extract the numbers that appear in an image. For example, if I pass an image with 123 written in it, my model should tell me that 123 is written there. I have tried PyTesseract, but it is not very precise, and I would like to do it with a neural network. My current code is quite simple: it recognizes the digits of the MNIST dataset as follows:

import tensorflow as tf
from tensorflow.keras import Sequential, optimizers
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
import matplotlib.pyplot as plt

mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()

print('train_images.shape:', train_images.shape)
print('test_images.shape:', test_images.shape)
plt.imshow(train_images[0])

train_images = train_images.reshape((60000, 28, 28, 1))
test_images = test_images.reshape((10000, 28, 28, 1))

train_images = train_images.astype('float32') / 255
test_images = test_images.astype('float32') / 255

train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
 

model = Sequential()

model.add(Conv2D(32, (5, 5), activation = 'relu', input_shape = (28, 28, 1)))
model.add(MaxPooling2D((2, 2)))

model.add(Conv2D(64, (5, 5), activation = 'relu'))
model.add(MaxPooling2D((2, 2)))

model.add(Flatten())

model.add(Dense(10, activation = 'softmax'))

model.summary()

model.compile(loss = 'categorical_crossentropy', optimizer = 'sgd', metrics = ['accuracy'])

model.fit(train_images, train_labels, batch_size = 100, epochs = 5, verbose = 1)

test_loss, test_accuracy = model.evaluate(test_images, test_labels)

print('Test accuracy:', test_accuracy)

but I would need to know how I can pass it an image containing a sequence of digits and have it recognize the digits in question. Does anyone know how I could do this?

",28278,,32410,,4/26/2021 16:12,4/26/2021 16:12,How to recognize sequence of digits in an image,,2,3,,,,CC BY-SA 4.0 25948,1,,,1/24/2021 12:05,,0,34,"

I have a dataset about the monitored health/growth of a community of people. The dataset has tensor shaped (batch_size, features, person, window), where:

  • person==10 means there are 10 people in the community
  • features==9 means that there are 9 features being monitored, for example, blood pressure, sugar level, ..etc
  • window==15 means the recorded value of each feature every day for 15 days (time dimension)

Moreover, people can join/leave the community, so the person dimension may increase/decrease over time. For simplicity, the window dimension is fixed at 15, so a new person who joins has to be in the community for a minimum of 15 days to be included in the dataset as 1 data point/sample. Also, say the number of features is fixed at 9. Hence, for this problem, only the number of people at a given instant may change over each input iteration.

For example, assume batch_size==1 then the input dimension into the neural network would be something like:

Iter 1: (1, 9, 7, 15)
Iter 2: (1, 9, 7, 15)
Iter 3: (1, 9, 7, 15)
Iter 4: (1, 9, 8, 15) # 1 person joins the community
Iter 5: (1, 9, 8, 15)
Iter 6: (1, 9, 7, 15) # 1 person left the community
Iter 7: (1, 9, 6, 15) # 1 person left the community
Iter 8: (1, 9, 6, 15)
Iter 9: (1, 9, 10, 15) # 4 person joins the community 

Is there a way to deal with this dynamically changing input tensor in neural networks without padding? As we won't know in advance how many people will join/leave the community (related to continual learning?) hence won't know the maximum pad.

Also, how to deal when batch_size is not 1?

",44069,,2444,,1/25/2021 12:29,1/25/2021 12:29,How to deal with dynamically changing input tensor in neural networks without padding?,,0,2,,,,CC BY-SA 4.0 25949,1,26110,,1/24/2021 12:33,,5,514,"

I am implenting a Monte Carlo Tree Search algorithm, where the selection process is done through Upper Confidence Bound formula:

import math

def uct(state, exploration_weight):
    # UCB1 score: exploitation term plus exploration term
    log_n = math.log(state.parent.sim_count)
    explore_term = exploration_weight * math.sqrt(log_n / state.sim_count)
    exploit_term = state.win_count / state.sim_count

    return exploit_term + explore_term

I have trouble, however, choosing the initial UCT value when the sim_count of the node is 0. I tried +inf (which would be appropriate, as the limit when sim_count approaches 0 from the positive side is infinity), but that just means the algorithm will always choose an unexplored child first.

What would you suggest as an initial value for the UCT?

Thank you in advance!

",43937,,,,,1/31/2021 16:07,"What should the initial UCT value be with MCTS, when leaf's simulation count is zero? Infinity?",,1,0,,,,CC BY-SA 4.0 25950,1,,,1/24/2021 13:50,,1,140,"

If I were to make a neural network that predicts the value of e.g. Bitcoin tomorrow based on the chart of the last month, would that work? Of course, 100% accuracy cannot be reached, but a success rate over 50% on determining if I should buy or sell Bitcoin could be very profitable. Have there been any attempts to create such neural networks so far?

",19783,,2444,,1/25/2021 12:30,11/16/2022 19:09,Can cryptocurrency charts be estimated using neural networks?,,2,1,,,,CC BY-SA 4.0 25952,1,26089,,1/24/2021 17:04,,5,282,"

I don't know much about AI and am just curious.

From what I read, AlphaZero/MuZero outperform any human chess player after a few hours of training. I have no idea how many chess games a very talented human chess player on average has played before he/she reaches the grandmaster level, but I would imagine it is a number that can roughly be estimated. Of course, playing entire games is not the only training for human chess players.

Nonetheless, how does this compare to AI? How many games do AI engines play before reaching the grandmaster level? Do (gifted) humans or AI learn chess faster?

",44073,,2444,,1/26/2021 16:22,1/30/2021 17:51,Do AlphaZero/MuZero learn faster in terms of number of games played than humans?,,1,1,,,,CC BY-SA 4.0 25953,1,,,1/24/2021 17:34,,2,90,"

I am currently working on an experiment to link reinforcement learning with graph neural networks. This is my architecture:

Feature Extraction with GCN:

  • there is a fully meshed topology with 23 nodes. Therefore there are 23*22=506 edges.
  • the original feature vector comprises 43 features that range from about -1 to 1.
  • First, a neural network f calculates a vector per edge, given the source node and target node features.
  • After we have calculated 506 edge vectors, function u aggregates the results from f per node (aggregation over 22 edges)
  • A function g takes the original target feature vector and concatenates the aggregated results from u. Finally, the output dimension of g determines the new feature vector size for each node.
  • At last, the function agg decides which information is returned from the feature extraction, e.g. just flatten the 23xg_output_dim feature vectors or building the average

After that:

  • The output of the feature extractor is passed to the OpenAI Baselines PPO2 implementation. The framework adds a flatten layer to the output and maps it to 19 action values and 1 value-function value.

I have made some observations in the experiments and cannot explain them. The hyperparameters are: an output dimension for f and g of 512, U=sum, aggr=flatten. A tanh activation is applied to the outputs of f and g. For PPO2: lr=0.000343, stepsize=512.

This gives me the following weight matrices:

<tf.Variable 'ppo2_model/pi/f_w_0:0' shape=(86, 512) dtype=float32_ref>    
<tf.Variable 'ppo2_model/pi/g_w:0' shape=(555, 512) dtype=float32_ref>   
<tf.Variable 'ppo2_model/pi/w:0' shape=(11776, 19) dtype=float32_ref>    
<tf.Variable 'ppo2_model/vf/w:0' shape=(11776, 1) dtype=float32_ref>

The following problem occurs. Normally, you expect the entropy in PPO2 to decrease during training, because the algorithm learns which actions lead to more reward. With the described hyperparameters, the entropy drops abruptly to 0 within 100 update steps and stays at zero even after >15,000 updates (=150M steps in the game). This means that the same action is always selected.

What I found out: the problem is that by making the sum over 22 edges, very large values are created (maximum 22*1 and 22*-1). The values are then given to the function g and thus ends up in the saturation region of the tanh. As a result, the new features of the 23 nodes contain many 1's and -1's. Because we flatten, the weighted sum of 11776 input neurons flows into each of the 19 action neurons, resulting in very large values in the policy. An action is then calculated from the policy with the following formula:

u = tf.random_uniform(tf.shape(logits), dtype=logits.dtype)
action = tf.argmax(logits - tf.log(-tf.log(u)), axis=-1)  # Gumbel-max sampling from the policy

Most of the time, tf.log(-tf.log(u)) gives something between -2 and 2 (in my opinion). This means that, as soon as a very large value appears in the policy, the corresponding action is always selected, and never the second or third most probable one, which would lead to more exploration.
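
As a quick sanity check of the scale of that noise term (my own back-of-the-envelope check, not part of the original code):

import numpy as np

u = np.random.uniform(size=1_000_000)
noise = -np.log(-np.log(u))                  # the Gumbel noise that gets added to the logits
print(np.percentile(noise, [1, 50, 99]))     # roughly [-1.5, 0.4, 4.6]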

What I don't understand 1): As soon as negative reward occurs, shouldn't the likelihood decrease again, so that in the end I choose other actions again?

I did some experiments with relu and elu activations. [Figures: value histograms of the output of g when using relu, tanh and elu; value histograms of the policy when using relu, tanh and elu; and a histogram over the resulting actions.]

What I don't understand 2): Using relu, you can see that there were large values in the policy during the first steps, but then the model learns to reduce the range, which is why in this example the entropy does not drop. Why does this not work when using tanh or elu?

We have found 3 things with which the problem does not occur or is delayed. These are my assumptions - are they correct in your opinion?

  • Using a smaller output dimension for f and g, like 6, or using aggr=mean -> fewer input values are summed into each of the 19 action neurons -> smaller values in the policy -> more exploration
  • Using u=mean instead of sum averages the outputs of f, so the aggregated values are not only 1 and -1
  • A smaller learning rate -> making the weights too big increases the chance that the 19 action values become large. If there is no negative reward, there is no need for the algorithm to make the weights smaller.

I know this is a lot of information, so I would be grateful for any small tip!

",44075,,44075,,1/25/2021 16:36,1/25/2021 16:36,Reinforcement learning and Graph Neural Networks: Entropy drops to zero,,0,3,,,,CC BY-SA 4.0 25956,1,25959,,1/24/2021 18:40,,0,945,"

You take any blog or any example and all they tell you about is the given picture below.

It has 4 different matrices, and the weights of 3 of them are shared. So I'm wondering how this is achieved in practice.

Please correct me:

I think the first word "hello" goes in in its one-hot encoded form and changes the hidden matrix. Then "world" goes in, gets multiplied, and changes the matrix again, and so on. People make it look as if all of the words go in in parallel. That can't be the case, because the hidden matrix depends on the previous word, and without updating the matrix you cannot pass in the current word. Please correct me if my idea is wrong, but I think the execution is in sequential order.
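
To make the sequential dependence explicit, here is a minimal NumPy sketch of a vanilla RNN forward pass (the function and variable names are mine):

import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b):
    # h_t depends on h_{t-1}, so the words must be processed one after another
    h = np.zeros(W_hh.shape[0])
    for x_t in inputs:
        h = np.tanh(W_xh @ x_t + W_hh @ h + b)
    return h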

",36062,,2444,,1/26/2021 11:58,1/26/2021 11:58,"Is the working of RNNs, LSTM and GRU sequential or parallel?",,1,0,,,,CC BY-SA 4.0 25958,2,,8676,1/24/2021 19:40,,1,,"

Yes, the core differences between the different categories of problems are correct as you've described them.

For SMDPs, I'd like to remark that the water boiling example is maybe not the best. That looks more like an example of "delayed rewards", but not one of "durative actions": when the agent takes that action to raise the temperature, it takes some time before the reward comes in, but the agent's action itself doesn't take that much time. The agent could maybe do something else in the meantime. Delayed rewards are not restricted to SMDPs, they can also show up in regular MDPs (you just have to make sure to include some data in the state representation to indicate the time since the temperature was set or something like that, such that the Markov property is not violated).

A typical example of an SMDP with durative actions would be a grid with rooms, split by walls but connected by smaller doors inside those walls. A single "primitive" action would just take a single step in the larger grid, but longer-duration "macro-actions" would make the agent navigate from somewhere within a room to one of the doors "on auto-pilot". This would still take as much time as it would to navigate explicitly in multiple smaller actions, but the larger more complex behaviour being encapsulated in a single "macro-action" can speed up learning.

",1641,,,,,1/24/2021 19:40,,,,0,,,,CC BY-SA 4.0 25959,2,,25956,1/24/2021 20:42,,1,,"

Yes, you are correct, and this was one of the original motivations that inspired the invention of the attention mechanism in seq2seq problems: https://arxiv.org/pdf/1706.03762.pdf.

There is a quote from this paper:

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t−1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples.

On the other hand, Transformer architectures leave a lot of room for parallelization, because they take the whole sequence at once, and multiple heads can be executed in parallel.

",38846,,,,,1/24/2021 20:42,,,,6,,,,CC BY-SA 4.0 25962,1,,,1/25/2021 3:34,,1,420,"

I have recently come across transformers; I am new to deep learning. I have seen a paper using a CNN and BiLSTM on top of a transformer; the paper uses a transformer (XLM-R) for sentiment analysis in a code-mixed domain. But many of the blogs only use a normal feed-forward network on top of the transformer.

I am trying to use transformers for sentiment analysis, short text classification.

Is it overkill to use models like CNN and BiLSTM on top of the transformer considering the size of the data it is trained on and its complexity?

",33985,,2444,,1/25/2021 12:41,1/25/2021 12:41,"Is using a LSTM, CNN or any other neural network model on top of a Transformer(using hidden states) overkill?",,0,8,,,,CC BY-SA 4.0 25963,1,,,1/25/2021 5:50,,7,5852,"

There are five parameters from an LSTM layer for regularization if I am correct.

To deal with overfitting, I would start with

  1. reducing the layers
  2. reducing the hidden units
  3. Applying dropout or regularizers.

There are kernel_regularizer, recurrent_regularizer, bias_regularizer, activity_regularizer, dropout and recurrent_dropout.
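
For illustration, this is how those arguments can be set on a Keras LSTM layer (the values here are just placeholders, not recommendations):

from tensorflow.keras.layers import LSTM
from tensorflow.keras.regularizers import l2

layer = LSTM(
    64,
    kernel_regularizer=l2(1e-4),      # penalises the input-to-hidden weights
    recurrent_regularizer=l2(1e-4),   # penalises the hidden-to-hidden weights
    bias_regularizer=l2(1e-4),        # penalises the biases
    activity_regularizer=l2(1e-5),    # penalises the layer's output activations
    dropout=0.2,                      # drops a fraction of the inputs
    recurrent_dropout=0.2,            # drops a fraction of the recurrent connections
)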

They have their definitions on the Keras's website, but can anyone share more experiences on how to reduce overfitting?

And how are these parameters used? For example, which parameters are most frequently used, and what kind of values should they be given?

",44085,,44085,,2/3/2021 7:31,10/12/2021 0:02,How should we regularize an LSTM model?,,3,3,,,,CC BY-SA 4.0 25964,2,,25950,1/25/2021 5:58,,0,,"

There are some academic works on that, but basically what you're asking for is a time-series estimation.

Here's a paper that tries to estimate Bitcoin price, for example: PAKDD: Forecasting Bitcoin Price with Graph Chainlets.

",32621,,2444,,1/25/2021 12:58,1/25/2021 12:58,,,,1,,,,CC BY-SA 4.0 25966,2,,1285,1/25/2021 6:20,,1,,"

I'm aware of some works that use blockchains in federated learning scenarios for accountability, incentivizing participants, and so on.

Here's an example, BlockFLA: Accountable Federated Learning via Hybrid Blockchain Architecture (2020), but there are probably many similar works around.

",32621,,2444,,10/13/2021 10:09,10/13/2021 10:09,,,,0,,,,CC BY-SA 4.0 25967,1,25981,,1/25/2021 7:16,,0,233,"

I'm employing the Actor-Critic algorithm. The critic network approximates the action-value function, i.e. $Q(s, a)$, which determines how good a particular state is, when provided with an action.

$Q(s, a)$ is approximated using the backpropagation of the temporal difference error (TD error). We can understand that $Q(s, a)$ has been approximated properly when TD error is minimized, i.e. when it is saturated at lower values.

My question is: when exactly can you say that $Q(s, a)$ has been approximated properly if you don't have the TD error? That is, if you have to plot the graph of $Q(s, a)$ vs episodes, what would the optimal behaviour look like?

Will it be an increasing exponential that saturates around the reward values, or an increasing exponential that saturates around some other value?

Follow up: What can be the possible mistake, if the output of Q-value function is around 5x the rewards, and not saturating?

",37180,,2444,,1/25/2021 12:45,1/25/2021 16:36,"Relationship between Rewards and Q Value (Graph between Q(s, a) vs episodes)",,1,8,,,,CC BY-SA 4.0 25968,1,25978,,1/25/2021 7:41,,3,2763,"

I'm currently reading the paper Likelihood Ratios for Out-of-Distribution Detection, and it seems that their problem is very similar to the problem of anomaly detection. More precisely, given a neural network trained on a dataset consisting of classes $A,B,$ and $C$, then they can detect if an input to the neural network is anomalous if it is different than these three classes. What is the difference between what they are doing and regular anomaly detection?

",41856,,2444,,1/25/2021 12:49,1/25/2021 16:47,What is the difference between out of distribution detection and anomaly detection?,,1,0,,,,CC BY-SA 4.0 25970,1,,,1/25/2021 9:11,,1,98,"

I have been working on text detection and recognition for almost two months and am new to this field. So far, I have fine-tuned, tested, and trained several text detection/recognition methods, such as CRAFT, TextFuseNet, and CharNet for detection, and the clova.ai model for recognition. Now I have the following question:

  • Can object detection approaches (YOLOv5, EfficientDet) be used to solve text detection/recognition problems?
",34371,,34371,,1/26/2021 9:27,1/26/2021 9:27,Can object detection approaches be used to solve text/detection problems?,,0,3,,,,CC BY-SA 4.0 25971,1,25973,,1/25/2021 9:32,,3,658,"

I am completing an assignment at the moment. One of the assignment questions asks how you identified the learned policy and how you obtained it. It is a reinforcement learning question, and the task is to apply the Q-learning algorithm to fill out a Q-table (which I've done), but I am confused about what it means by the learned policy.

So, what is a "learned policy" in Q-learning?

",,user43972,2444,,1/25/2021 12:13,1/25/2021 12:13,"What is a ""learned policy"" in Q-learning?",,1,0,,,,CC BY-SA 4.0 25972,1,25992,,1/25/2021 9:46,,2,194,"

I am quite new to GAN and I am reading about WGAN vs DCGAN.

Relating to the Wasserstein GAN (WGAN), I read here

Instead of using a discriminator to classify or predict the probability of generated images as being real or fake, the WGAN changes or replaces the discriminator model with a critic that scores the realness or fakeness of a given image.

In practice, I don't understand what the difference is between a score of the realness or fakeness of a given image and a probability that the generated images are real or fake.

Aren't scores probabilities?

",36907,,2444,,1/25/2021 13:06,1/26/2021 10:08,Aren't scores in the Wasserstein GAN probabilities?,,1,1,,,,CC BY-SA 4.0 25973,2,,25971,1/25/2021 9:48,,4,,"

A Q table allows you to look up any state/action pair in it and find the associated action value. It is not itself a policy. However, in order to calculate the action values, you will have assumed something about the policy.

The most common policy scenarios with Q learning are that it will converge on (learn) the values associated with a given target policy, or that it has been used iteratively to learn the values of the greedy policy with respect to its own previous values. The latter choice - using Q learning to find an optimal policy, using generalised policy iteration - is by far the most common use of it.

A policy is not a list of values, it is a map from state to actions. The question wants you to show the policy that you have learned the Q values for.

The policy in your case is therefore likely to be to pick the action that has the highest action value in each state. You may be able to describe your answer in text ("always turn left unless next to the exit") or as a graphic (draw arrows on a grid world to show the preferred direction). Or you could write out a table of states showing the chosen action in each one.

The maths notation for how you derive the policy from a Q table can be written:

$$\pi(s) = \text{argmax}_a Q(s,a)$$

Or a bit more formally:

$$\pi: \mathcal{S} \rightarrow \mathcal{A} = \text{argmax}_{a \in \mathcal{A}(s)} Q(s,a)\qquad \forall s \in \mathcal{S}$$
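
As a concrete sketch, with a (hypothetical) Q table stored as a NumPy array with one row per state and one column per action, the greedy policy is just a row-wise argmax:

import numpy as np

# Hypothetical Q table: one row per state, one column per action
Q = np.array([[0.1, 0.5, 0.2],
              [0.7, 0.3, 0.0],
              [0.2, 0.2, 0.9]])

policy = Q.argmax(axis=1)   # greedy action index for each state
print(policy)               # [1 0 2]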

",1847,,1847,,1/25/2021 9:55,1/25/2021 9:55,,,,0,,,,CC BY-SA 4.0 25974,1,,,1/25/2021 11:02,,1,71,"

TL;DR

I am unable to overfit batches with multiple samples using an autoencoder.

A fully connected decoder seems to handle more samples per batch than a convolutional decoder, but it also fails when the number of samples increases.
Why is this happening, and how can I debug it?


In depth

I am trying to use an autoencoder on 1d data points of size (n, 1, 1024), where n is the number of samples in the batch.

I am trying to overfit to that single batch.

Using a convolutional decoder, I am only able to fit a single sample (n=1), and when n>1 I am unable to drop the loss (MSE) below 0.2.

In blue: expected output (=input), in orange: reconstruction.

Single sample, single batch:

Multiple samples, single batch, loss won't go down:

Using more than one sample, we can see the net learns the general shape of the input (=output) signal, but misses it by an overall constant offset.


Using a fully connected decoder does manage to reconstruct batches of multiple samples:


Relevant code:

class Conv1DBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self._in_channels = in_channels
        self._out_channels = out_channels
        self._kernel_size = kernel_size

        self._block = nn.Sequential(
                nn.Conv1d(
                        in_channels=self._in_channels,
                        out_channels=self._out_channels,
                        kernel_size=self._kernel_size,
                        stride=1,
                        padding=(self._kernel_size - 1) // 2,
                ),
                # nn.BatchNorm1d(num_features=out_channels),
                nn.ReLU(True),
                nn.MaxPool1d(kernel_size=2, stride=2),
        )

    def forward(self, x):
        for layer in self._block:
            x = layer(x)
        return x


class Upsample1DBlock(nn.Module):
    def __init__(self, in_channels, out_channels, factor):
        super().__init__()
        self._in_channels = in_channels
        self._out_channels = out_channels
        self._factor = factor

        self._block = nn.Sequential(
                nn.Conv1d(
                        in_channels=self._in_channels,
                        out_channels=self._out_channels,
                        kernel_size=3,
                        stride=1,
                        padding=1
                ),  # 'same'
                nn.ReLU(True),
                nn.Upsample(scale_factor=self._factor, mode='linear', align_corners=True),
        )

    def forward(self, x):
        x_tag = x
        for layer in self._block:
            x_tag = layer(x_tag)
        # interpolated = F.interpolate(x, scale_factor=0.5, mode='linear') # resnet idea
        return x_tag

encoder:

self._encoder = nn.Sequential(
            # n, 1024
            nn.Unflatten(dim=1, unflattened_size=(1, 1024)),
            # n, 1, 1024
            Conv1DBlock(in_channels=1, out_channels=8, kernel_size=15),
            # n, 8, 512
            Conv1DBlock(in_channels=8, out_channels=16, kernel_size=11),
            # n, 16, 256
            Conv1DBlock(in_channels=16, out_channels=32, kernel_size=7),
            # n, 32, 128
            Conv1DBlock(in_channels=32, out_channels=64, kernel_size=5),
            # n, 64, 64
            Conv1DBlock(in_channels=64, out_channels=128, kernel_size=3),
            # n, 128, 32
            nn.Conv1d(in_channels=128, out_channels=128, kernel_size=32, stride=1, padding=0),  # FC
            # n, 128, 1
            nn.Flatten(start_dim=1, end_dim=-1),
            # n, 128
        )

conv decoder:

self._decoder = nn.Sequential(
    nn.Unflatten(dim=1, unflattened_size=(128, 1)),  # 1
    Upsample1DBlock(in_channels=128, out_channels=64, factor=4),  # 4
    Upsample1DBlock(in_channels=64, out_channels=32, factor=4),  # 16
    Upsample1DBlock(in_channels=32, out_channels=16, factor=4),  # 64
    Upsample1DBlock(in_channels=16, out_channels=8, factor=4),  # 256
    Upsample1DBlock(in_channels=8, out_channels=1, factor=4),  # 1024
    nn.ReLU(True),
    nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, stride=1, padding=1),
    nn.ReLU(True),
    nn.Flatten(start_dim=1, end_dim=-1),
    nn.Linear(1024, 1024)
)

FC decoder:

self._decoder = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(True),
    nn.Linear(256, 512),
    nn.ReLU(True),
    nn.Linear(512, 1024),
    nn.ReLU(True),
    nn.Flatten(start_dim=1, end_dim=-1),
    nn.Linear(1024, 1024)
)

Another observation is that when the batch size increases further, to say 16, the FC decoder also starts to fail.

In the image, 4 samples of a 16 sample batch I am trying to overfit


What could be wrong with the conv decoder?

How to debug this or make the conv decoder work?

Please notice that the same infrastructure, with only the encoder and decoder changed, does manage to overfit and generalize over MNIST.

(This is also posted here, but I think this is still ok to do. If not, please tell me and I will delete one).

",21645,,21645,,1/25/2021 12:17,1/25/2021 12:17,Underfitting a single batch: Can't cause autoencoder to overfit multi-sample batches of 1d data. How to debug?,,0,0,,,,CC BY-SA 4.0 25978,2,,25968,1/25/2021 13:38,,3,,"

Your observation is correct, although the terminology needs a little explaining.

The term 'out-of-distribution' (OOD) data refers to data that was collected at a different time, and possibly under different conditions or in a different environment, than the data collected to create the model. They may say that this data is from a 'different distribution'.

Data that is in-distribution can be called novelty data. Novelty detection is when you have new data (i.e. OOD) and you want to know whether or not it is in-distribution, i.e. whether it looks like the data you trained on. Anomaly detection is when you test your data to see if it is different from the data you trained the model on. Out-of-distribution detection is essentially running your model on OOD data. So one takes OOD data and does novelty detection or anomaly detection (aka outlier detection).

Below is a figure from What is anomaly detection?

In time series modeling, the term 'out-of-distribution' data is analogous to 'out-of-sample' data, and 'in-distribution' data is analogous to 'in-sample' data.

",5763,,5763,,1/25/2021 16:47,1/25/2021 16:47,,,,2,,,,CC BY-SA 4.0 25979,2,,21142,1/25/2021 14:22,,3,,"

Swarm intelligence (SI) is a sub-field of or an approach to artificial intelligence (AI), where you have multiple individuals (for example, artificial ants), which collectively can produce what we (or most of us) would intuitively call intelligent behaviour.

SI is sometimes categorized as a sub-field of evolutionary computation (which also includes evolutionary algorithms, such as genetic algorithms, genetic programming, evolution strategies, and so on), which is often considered a sub-field of AI or techniques to produce artificial intelligence, because all these techniques are often based on the use of multiple individuals/solutions (that either compete or collaborate with each other).

One of the most commonly used SI techniques is ant colony optimization algorithms (proposed by M. Dorigo and further developed by other people like Luca M. Gambardella), which have been successfully applied to solve the non-decision version of the NP-complete problem (in simple words, it's a combinatorial problem that may require exponential time to be solved in the usual case) known as the travelling salesman problem. There are other SI techniques, which are somehow similar to ACO algorithms, such as particle swarm optimization or the artificial bee colony algorithm.

Occasionally, SI may also be categorised as a sub-field of computational intelligence, which often refers to specific techniques to create artificially intelligent systems (i.e. programs that exhibit what we would call intelligence) that are more based on or inspired by the biology, such as neural networks, genetic algorithms, or, in fact, SI algorithms, such as ACO algorithms. However, CI can also be considered a sub-field of AI, given that it studies techniques to produce artificial intelligence, so, in the end, as I said above, SI is an approach to AI, which includes other approaches, such as evolutionary algorithms, rule-based systems, deep learning or other machine learning techniques.

",2444,,2444,,1/14/2022 17:56,1/14/2022 17:56,,,,2,,,,CC BY-SA 4.0 25981,2,,25967,1/25/2021 16:36,,1,,"

TLDR;

The output of the Q-value function will eventually saturate. I can't say when, but it surely will.

Detailed Answer

if you don't have TD error

I meant: if I don't have logs of the error.

$Q(s, a)$ vs Episodes Graph

To understand, how $Q(s, a)$ behaves as the episodes increase.

I was under the wrong impression that $Q(s, a)$ would saturate around the values given by the reward function.

As evident from the loss vs training episodes curve, we can see that the loss (TD error) has almost saturated.

However, the $Q(s, a)$ vs training episodes curve has not saturated yet.

The only explanation for the above two graphs is the following: the target estimate ($r + \gamma Q(s', a')$) and $Q(s, a)$ are almost equal, which is why the error is quite low. But $Q(s, a)$ is still nowhere near the optimal value $Q^{*}(s, a)$.

Hence, I gave it another shot and ran it for twice as many training episodes, i.e. 20000, and below are the results.

A nearly saturated loss

and a nearly saturating $Q(s, a)$.

Note that the value at which $Q(s, a)$ saturates is around 250-300 (I will run it for more iterations), and it is nowhere near the reward values $\in [-100, 35]$.

Hence, the $Q(s, a)$ vs episodes will saturate.

",37180,,,,,1/25/2021 16:36,,,,0,,,,CC BY-SA 4.0 25983,1,26272,,1/25/2021 22:53,,4,130,"

I am reading this paper Anxiety, Avoidance and Sequential Evaluation and is confused about the implementation of a specific lab study. Namely, the authors model what is called the Balloon task using a simple MDP for which the description is below:

My confusion is the following sentence:

...The probability of this bad transition was modeled using normal density function, with parameters $N(16, 0.5)$

But the fact that this is a continuous normal distribution leaves me stumped. In MDPs, there is usually a nice, discrete transition matrix, so there is no ambiguity about how to implement it. For instance, if they said the transition to a bad state is modeled by a Bernoulli random variable with parameter $p$, then it would be clear how to implement it. I would do something like:

import random

def step(curr_state, curr_action):
    if random.uniform(0, 1) < p:   # Bernoulli(p): transition to the bad state
        next_state = bad_state

But they are using a normal random variable for this "bad" transition, so how do I implement this?

",38234,,2444,,1/26/2021 23:28,2/8/2021 0:14,How should I implement the state transition when it is a Gaussian distribution?,,1,0,,,,CC BY-SA 4.0 25984,1,26292,,1/26/2021 1:55,,4,231,"

Why don't those developing AI Deepfake detectors use two differently trained detectors at once that way if the Deepfake was trained to fool one of the detectors the other would catch it and vice-versa?

To be clear, this is really a question of whether deepfakes can be made to fool multiple high-accuracy detectors at the same time. And, if so, how many can they fool before the noise becomes noticeable enough for humans to detect?

I've heard of papers where the authors injected a certain noise pattern into their deepfake videos, which allows them to fool a given detector (https://arxiv.org/abs/2009.09213, https://delaat.net/rp/2019-2020/p74/report.pdf), so I thought that, if they simply used two high-accuracy detectors, any pattern of noise used to fool one detector would interfere with the pattern of noise used to fool the other detector.

",44117,,44117,,1/26/2021 18:42,2/9/2021 9:59,Why don't those developing AI Deepfake detectors use two detectors at once so as to catch deepfakes in one or the other?,,1,0,,,,CC BY-SA 4.0 25985,1,25987,,1/26/2021 2:14,,0,50,"

Here is a model that trains on time series data in a (batch, step, features) format.

I have kept the random state of the train-test split function the same. With every parameter below kept the same, running the model training yields different outcomes every time, and the outcomes are drastically different.

What may be the factors that led to this? Regularization?

X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=666)

def attention_model(X_train, y_train, X_test, y_test,num_classes,dropout=0.2, batch_size=68, learning_rate=0.0001,epochs=20,optimizer='Adam'):
    
    Dense_unit = 12
    LSTM_unit = 12
    
    attention_param = LSTM_unit*2
    attention_init_value = 1.0/attention_param
    
    
    u_train = np.full((X_train.shape[0], attention_param),
                      attention_init_value, dtype=np.float32)
    u_test = np.full((X_test.shape[0],attention_param),
                     attention_init_value, dtype=np.float32)
    
    
    with keras.backend.name_scope('BLSTMLayer'):
        # Bi-directional Long Short-Term Memory for learning the temporal aggregation
        input_feature = Input(shape=(X_train.shape[1],X_train.shape[2]))
        x = Masking(mask_value=0)(input_feature)
        x = Dense(Dense_unit,kernel_regularizer=l2(0.005), activation='relu')(x)
        x = Dropout(dropout)(x)
        x = Dense(Dense_unit,kernel_regularizer=l2(0.005),activation='relu')(x)
        x = Dropout(dropout)(x)
        x = Dense(Dense_unit,kernel_regularizer=l2(0.005),activation='relu')(x)
        x = Dropout(dropout)(x)
        x = Dense(Dense_unit,kernel_regularizer=l2(0.005), activation='relu')(x)
        x = Dropout(dropout)(x)


        y = Bidirectional(LSTM(LSTM_unit,activity_regularizer=l2(0.000029),kernel_regularizer=l2(0.027),recurrent_regularizer=l2(0.025),return_sequences=True, dropout=dropout))(x)
#         y = Bidirectional(LSTM(LSTM_unit, kernel_regularizer=l2(0.01),recurrent_regularizer=l2(0.01), return_sequences=True, dropout=dropout))(y)

    with keras.backend.name_scope('AttentionLayer'):
        # Logistic regression for learning the attention parameters with a standalone feature as input
        input_attention = Input(shape=(LSTM_unit * 2,))
        u = Dense(LSTM_unit * 2, activation='softmax')(input_attention)

        # To compute the final weights for the frames which sum to unity
        alpha = dot([u, y], axes=-1)  # inner prod.
        alpha = Activation('softmax')(alpha)

    with keras.backend.name_scope('WeightedPooling'):
        # Weighted pooling to get the utterance-level representation
        z = dot([alpha, y], axes=1)

    # Get posterior probability for each emotional class
    output = Dense(num_classes, activation='softmax')(z)

    model = Model(inputs=[input_attention, input_feature], outputs=output)

    optimizer = opt_select(optimizer,learning_rate)
    model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=optimizer)


    hist = model.fit([u_train, X_train], 
                     y_train, 
                     batch_size=batch_size, 
                     epochs=epochs, 
                     verbose=2, 
                     validation_data=([u_test, X_test], y_test))
    

#kernel_regularizer=l2(0.002),recurrent_regularizer=l2(0.002),
    return hist

batch_size= 150
#217
epochs = 1000
learning_rate = 0.00081
optimizer = 'RMS'
num_classes = y_train.shape[1]
dropout=0.22

tf.keras.backend.clear_session()

history = attention_model(X_train, y_train, X_test, y_test, num_classes,dropout = dropout,batch_size=batch_size, learning_rate=learning_rate,epochs=epochs,optimizer=optimizer
)
",44085,,,,,1/26/2021 15:02,Factors that causing totally different outcomes from an exactly same model and datasets,,1,2,,,,CC BY-SA 4.0 25986,1,,,1/26/2021 4:28,,4,657,"

I get the part of the paper where the image is split into P smaller patches (say 16x16), and then you have to flatten each 3-D (16, 16, 3) patch to pass it into a linear layer to get what they call the "Linear Projection". After passing through the linear layer, the patches will be vectors, but with some "meaning" to them.
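
Here is a tiny PyTorch sketch of my understanding of that patch projection (the embedding size 768 is just an assumption):

import torch
import torch.nn as nn

patch = torch.randn(1, 16, 16, 3)                    # one 16x16 RGB patch
linear_projection = nn.Linear(16 * 16 * 3, 768)      # 768 is an assumed embedding size
patch_embedding = linear_projection(patch.flatten(start_dim=1))
print(patch_embedding.shape)                         # torch.Size([1, 768])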

Can someone please explain how the two types of embeddings are working?

I visited this implementation on GitHub and looked at the code too, but it looked like a maze to me.

If someone could just explain how these embeddings are working in laymen's terms, I'll look at the code again and understand.

",36062,,36062,,12/1/2021 14:29,12/1/2021 16:00,How does the embeddings work in vision transformer from paper?,,1,0,,,,CC BY-SA 4.0 25987,2,,25985,1/26/2021 5:17,,0,,"

By default, Keras sets the shuffle argument to True, so you should set the NumPy seed before importing Keras.

CPU

from numpy.random import seed
seed(25)
from keras.models import Sequential

GPU

import tensorflow as tf
tf.random.set_seed(25)
",34371,,2444,,1/26/2021 15:02,1/26/2021 15:02,,,,3,,,,CC BY-SA 4.0 25988,1,26088,,1/26/2021 6:48,,3,243,"

I have a task of extremely sparse binary segmentation, i.e. the segmentation mask contains either 0 or 1, and there are ~95% zeros and only ~5% ones. I use the focal loss to address the sparseness (which is equivalent in my case to imbalances). I have another piece of information that I want to incorporate in the loss term.

The desired output is always symmetric over the diagonal. I was searching for a way to use this information in the loss, but I couldn't find a solution. How would I do this?

For some examples of the symmetry in the segmentation maps, I added an arrow to show the axis of symmetry:

",41293,,2444,,1/30/2021 12:27,2/4/2021 18:28,How to incorporate a symmetry constraint in the loss function to train a CNN?,,1,1,,,,CC BY-SA 4.0 25989,1,,,1/26/2021 8:31,,9,280,"

I am working with generative adversarial networks (GANs) and one of my aims at the moment is to reproduce samples in two dimensions that are distributed according to a circle (see animation). When using a GAN with small networks (3 layers with 50 neurons each), the results are more stable than with bigger layers (3 layers with 500 neurons each). All other hyperparameters are the same (see details of my implementation below).

I am wondering if anyone has an explanation for why this is the case. I could obviously try to tune the other hyperparameters to get good performance, but I would be interested in knowing if someone has heuristics about what needs to change whenever I change the size of the networks.


Network/Training parameters

I use PyTorch with the following settings for the GAN:

Networks:

  • Generator/Discriminator Architecture (all dense layers): 100-50-50-50-2 (small); 100-500-500-500-2 (big)
  • Dropout: p=0.4 for generator (except last layer), p=0 for discriminator
  • Activation functions: LeakyReLU (slope 0.1)

Training:

  • Optimizer: Adam
  • Learning Rate: 1e-5 (for both networks)
  • Beta1, Beta2: 0.9, 0.999
  • Batch size: 50
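
For concreteness, a rough PyTorch sketch of the "small" generator setup described above (simplified; exact details may differ):

import torch.nn as nn

def make_small_generator(hidden=50, p_drop=0.4):
    # 100-50-50-50-2 dense generator, LeakyReLU(0.1), dropout on all but the last layer
    return nn.Sequential(
        nn.Linear(100, hidden), nn.LeakyReLU(0.1), nn.Dropout(p_drop),
        nn.Linear(hidden, hidden), nn.LeakyReLU(0.1), nn.Dropout(p_drop),
        nn.Linear(hidden, hidden), nn.LeakyReLU(0.1), nn.Dropout(p_drop),
        nn.Linear(hidden, 2),
    )
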
",44121,,18758,,10/9/2021 8:16,10/9/2021 8:16,Why is my GAN more unstable with bigger networks?,,0,3,,,,CC BY-SA 4.0 25990,1,25991,,1/26/2021 9:27,,2,851,"

Problem description:

Suppose we have an environment, where a reward at time step $t$ is dependent not only on the current action, but also on previous action in the following way:

  • if current action == previous action, you get reward = $R(a,s)$
  • if current action != previous action, you get reward = $R(a,s) - \text{penalty}$

In this environment, switching actions bears a significant cost. We would like the RL algorithm to learn optimal actions under the constraint that switching action is costly, i.e. we would like to stay in selected action as long as possible.

The penalty is significantly higher than the immediate reward, so, if we do not take it into account, the model evaluation will have a negative total reward with almost 100% probability, since the agent will constantly switch actions and extract rewards from the environment that are smaller than the cost of switching.

Action space is small (2 actions: left, right). I'm trying to beat this game with PPO (Proximal Policy Optimization)

Questions

  • How one might address this constraint: i.e. explicitly make the agent learn that switching is costly and it's worth sitting in one action even if immediate rewards are negative?

  • How can you make the RL algorithm learn that it's not the reward term $R(a_t|s_t)$ that is negative, and thus decreasing $Q(a_t|s_t)$ and $V(s_t)$, but it's the penalty term (taking the action that is different from the previous action at step $t-1$) that is pushing total reward down?

",44124,,2444,,1/26/2021 22:30,1/26/2021 22:30,Reinforcement Learning algorithm with rewards dependent both on previous action and current action,,2,0,,,,CC BY-SA 4.0 25991,2,,25990,1/26/2021 9:41,,2,,"

The answer to both your concerns is:

  • Add the previous action choice to the state representation.

It is all you need to do. It gives the agent the data it needs to learn the association of negative reward from not matching the previous action.

By making this data part of the state, you re-establish the Markov property in the MDP model of the environment, which you had otherwise lost by making the reward dependent on a variable that was both systematically changing and hidden from the agent.

The state in a MDP is often not just the current observations that the environment provides, but can include any relevant knowledge that the agent has. At the extreme that can include a complete history of all observations and actions taken to date. It is common practice to derive the state as a summary of recent history of observations and actions taken so far. In your case, all you need do is concatenate the previous action to the observation, because you know about the constraint and how it affects optimisation.
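
As a minimal sketch of what this can look like in code (the function and variable names here are purely illustrative):

import numpy as np

def augment_state(observation, prev_action, n_actions):
    # Concatenate a one-hot encoding of the previous action to the raw observation,
    # so the agent can learn the cost of switching actions.
    one_hot = np.zeros(n_actions)
    one_hot[prev_action] = 1.0
    return np.concatenate([observation, one_hot])
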

",1847,,1847,,1/26/2021 10:18,1/26/2021 10:18,,,,2,,,,CC BY-SA 4.0 25992,2,,25972,1/26/2021 10:08,,1,,"

Figure 3 in the original WGAN paper is actually quite helpful for understanding the difference between the score in WGAN and the probability in a GAN (see screenshot below). The blue distribution consists of real samples, and the green one of fake samples. The vanilla GAN trained in this example identifies the real samples as '100% real' (red curve) and the fake samples as '100% fake'. This leads to the problem of vanishing gradients and the well-known mode collapse of original GANs.

The Wasserstein GAN, on the other hand, gives each sample a score. The benefit of the score is that we can now identify samples that are more likely real than others, or more likely fake. For example, the further to the left a distribution is located, the more negative the WGAN score will be. We have therefore a continuum that doesn't end in 0 and 1, but can compare between samples that are 'good' and those which are 'better'. A normal GAN would identify both as 'good', making further improvement difficult.
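
To make the architectural difference concrete, here is a minimal PyTorch sketch (the layer sizes are arbitrary): the discriminator head ends in a sigmoid, the critic head does not:

import torch.nn as nn

# Standard GAN discriminator head: squashes the score into a probability in (0, 1)
gan_head = nn.Sequential(nn.Linear(128, 1), nn.Sigmoid())

# WGAN critic head: an unbounded, real-valued score (no sigmoid)
wgan_head = nn.Linear(128, 1)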

",44121,,,,,1/26/2021 10:08,,,,2,,,,CC BY-SA 4.0 25993,1,,,1/26/2021 10:13,,2,67,"

I'm trying to implement in code part of the following paper: Active Learning for Reward Estimation in Inverse Reinforcement Learning. I'm specifically referring to section 2.3 of the paper.

Let's define $\mathcal{X}$ as the set of states, and $\mathcal{A}$ as the set of actions. We then sample a set of observations $\mathcal{D}$ from an agent which follows an optimal policy.

$$ \mathcal{D}=\left\{\left(x_{1}, a_{1}\right),\left(x_{2}, a_{2}\right), \ldots,\left(x_{n}, a_{n}\right)\right\} $$

Our goal is to find the reward vector $\mathbf{r}$ s.t. the total likelihood $\Lambda_{r}(\mathcal{D})$ is maximised (every time we compute a new $\mathbf{r}$, the likelihood is updated by computing the action-value function $Q_{r}^{*}$ and taking the softmax).

$$ L_{r}(x, a)=\mathbb{P}[(x, a) \mid r]=\frac{e^{\eta Q_{r}^{*}(x, a)}}{\sum_{b \in A} e^{\eta Q_{r}^{*}(x, b)}} $$

$$ \Lambda_{r}(\mathcal{D})=\sum_{\left(x_{i}, a_{i}\right) \in \mathcal{D}} \log \left(L_{r}\left(x_{i}, a_{i}\right)\right) $$

Then, the paper suggests how to compute the derivatives w.r.t. $\mathbf{r}$ by defining the following quantities:

$$ \left[\nabla_{r} \Lambda_{r}(\mathcal{D})\right]_{x a}=\sum_{\left(x_{i}, a_{i}\right) \in \mathcal{D}} \frac{1}{L_{r}\left(x_{i}, a_{i}\right)} \frac{\partial L_{r}\left(x_{i}, a_{i}\right)}{\partial r_{x a}} $$

$$ \nabla_{r} L_{r}(x, a)=\frac{d L_{r}}{d Q^{*}}(x, a) \frac{d Q^{*}}{d r}(x, a) $$

Then, considering $\mathbf{T}=\mathbf{I}-\gamma \mathbf{P}_{\pi^{*}}$

$$ \frac{\partial Q^{*}}{\partial r_{z u}}(x, a)=\delta_{z u}(x, a)+\gamma \sum_{y \in \mathcal{X}} \mathrm{P}_{a}(x, y) \mathbf{T}^{-1}(y, z) \pi^{*}(z, u) $$

$$ \frac{d L_{r}}{d Q_{y b}^{*}}(x, a)=\eta L_{r}(x, a)\left(\delta_{y b}(x, a)-L_{r}(y, b) \delta_{y}(x)\right) $$

with $x, y \in \mathcal{X}$ and $a, b \in \mathcal{A} .$ In the above expression, $\delta_{u}(v)$ denotes the Kronecker delta function.

Finally, the update is trivially computed by $$ \mathbf{r}_{t+1}=\mathbf{r}_{t}+\alpha_{t} \nabla_{r} \Lambda_{r_{t}}(\mathcal{D}) $$

Here I suppose that the paper's author is considering $\mathbf{r}$ as a matrix of dimension number of states $\times$ number of actions (i.e. each element of this matrix represents $R(s,a)$)

My question is: what is the dimensionality of $\frac{d L_{r}}{d Q^{*}}(x, a)$ and $\frac{d Q^{*}}{d r}(x, a)$? (is that a point-wise product, a matrix-matrix product, a vector-matrix product?)

The more reasonable solution, dimensionally speaking, for me would be something like: $$ \nabla_{r} L_{r}(x, a)=\\ \frac{d L_{r}}{d Q^{*}}(x, a) \frac{d Q^{*}}{d r}(x, a) = \\ \left(\sum_{s'\in\mathcal{X}}\sum_{a'\in\mathcal{A}}\frac{d L_{r}}{d Q^{*}_{s'a'}}(x, a)\right) \begin{bmatrix} \frac{d Q^{\star}}{d r_{s_1a_1}}(x, a) & \dots &\frac{d Q^{\star}}{d r_{s_1a_m}}(x, a) \\ \vdots& \ddots & \vdots \\ \frac{d Q^{\star}}{d r_{s_na_1}}(x, a) & \dots & \frac{d Q^{\star}}{d r_{s_na_m}}(x, a) \end{bmatrix} $$

(where $n = |\mathcal{X}|$ and $m = |\mathcal{A}|$)

",30664,,30664,,1/27/2021 19:04,1/27/2021 19:04,"What is the dimensionality of these derivatives in the paper ""Active Learning for Reward Estimation in Inverse Reinforcement Learning""?",,0,0,,,,CC BY-SA 4.0 25994,1,,,1/26/2021 10:32,,2,27,"

I've just started a project which will involve having to detect certain events in a stream of kinematic sensor data. By searching through the literature, I've found a lot of highly specific papers, but no general reviews.

If I search up on computer vision, I'm likely to get 100s of articles giving overviews of different types of architectures for various vision tasks. They would look something like this:

  • We mainly use CNNs which work like this ...
  • For object detection we use one or two stage detectors which look like this...
  • For video classification we can use 3D CNNs or RNNs...
  • .... etc

So I'm looking for something similar with regard to kinematic motion sensors. As was pointed out to me on the signal processing SE, "kinematic" could mean a lot of things. So specifically, I'm referring to 1d time series data for:

  • acceleration/velocity/position
  • angular velocity / absolute orientation
",16871,,16871,,1/26/2021 13:40,1/26/2021 13:40,What are some of the main high level approaches to applying ML on kinematic sensor data?,,0,1,,,,CC BY-SA 4.0 25995,1,,,1/26/2021 11:04,,2,55,"

Considering input images to a CNN that have a large dimension (e.g. 256X256), what are some possible methods to estimate the exact dimensions (e.g. 16X16 or 32X32) to which it can be condensed in the final pooling layer within the CNN network such that the important features are retained? I have found references to using linear dimensionality estimates (such as PCA) and the Riemannian Metric for non-linear estimation, but am not confident of how accurate the predicted dimensions may be.

One paper that explores this issue in Deep Neural Networks in a better way can be found here. Answers specifically pertaining to processing of SAR images would be more helpful.

",32981,,,,,1/26/2021 11:04,Estimating dimensions to reduce input image size to in CNNs,,0,0,,,,CC BY-SA 4.0 25996,2,,25835,1/26/2021 12:59,,1,,"

I have calculated an upper bound and modified a calculation of a lower bound for a similar game. I assume the real size is closer to the upper bound.

$$ 7.36 \cdot 10^{27} \leq |SPADES| \leq 3.09 \cdot 10^{72} $$

Upper bound

The State-space complexity of Spades can be computed by counting for each possible starting state (deal), all the possible sequences of actions. While that calculation could be hard, an upper bound is easy to achieve by multiplying the number of initial positions $|S_0|$ by an upper bound on the number of different possible histories $|\bar{H}|$. $$ |SPADES| = \sum_{s \in S_0} |H_{s}| \leq |S_0| \cdot |\bar{H}| $$

The number of possible starting positions $|S_0|$ is the number of different ways to deal a 52 cards deck into 4 players $$|S_0| = \frac{52!}{13!^4} \approx 5.3\cdot10^{28}$$

An information set in Spades contains two objects: the player's hand, and the sequence of actions each of the players have made from the start of the round. Before any card is played, each information set (a single hand) can be completed into a full game-state (4 hands) in the following number of ways $$ |\bar{S}| = \frac{39!}{13!^3} \approx 8.45 \cdot 10^{16} $$

During a round, each player first decide upon a bid, then choose one out of her 13 cards, than choose one out of 12, etc. Thus, the size of the decision tree, which is the number of histories is bounded by $$|H| \leq {14!}^4 \approx 5.77 \cdot 10^{43}$$ This is an upper bound since some of those histories violates the rules of the game.

Therefore, the number of legal positions in Spades is bounded by $$ |SPADES| = \sum_{s \in S_0} |H_{s}| \leq |S_0| \cdot |H| \leq \frac{52!}{13!^4} \cdot {14!}^4 \approx 3.09 \cdot 10^{72} $$

Lower bound

In the paper: Understanding the Success of Perfect Information Monte Carlo Sampling in Game Tree Search, Sturtevant et al. describe a loose lower bound for the size of Skat:

Skat is a 3-player card game with 32 cards, from which 10 are dealt to each player. There are $$ {32 \choose 10} = 364, 512, 240 $$hands each player can have and $$ H := {22 \choose10} \cdot {12\choose10} = 42, 678, 636 $$ hands for the other players which is what constitutes an information set at the start of a game. At the beginning of each trick, the trick leader can choose to play any of his remaining cards. Therefore, there are at least $$ 10!H \approx 1.54 · 10^{14}$$ information sets. But, from an opponents perspective there are actually 20 or 22 unknown cards that can be lead, so this is only a loose lower bound on the size of the tree. This should clearly establish that even when using short decks it is infeasible to solve even a single game instance of a trick-based card game.

Using the same logic to produce a loose lower bound on the size of Spades tree produce the following:

$$ H := {39 \choose 13} \cdot {26 \choose 13} = 8.45 \cdot 10^{16} $$

At the start of the round (before bidding has been made) the number of information sets is lower bounded by $$ 14!H \approx 7.36 \cdot 10^{27}$$

",43351,,43351,,2/14/2021 7:23,2/14/2021 7:23,,,,0,,,,CC BY-SA 4.0 25997,2,,25990,1/26/2021 16:14,,0,,"

As said, you will have to include previous input state(s) in your training input patterns.

Suppose we use straightforward NN backpropagation learning.

You would expand the input layer with additional neurons and weights connected to the past. The time-window neurons introduce additional weights into the first hidden layer. A neural net architecture that does this is called a Time Delay Neural Network (TDNN).

https://stats.stackexchange.com/questions/160070/difference-between-time-delayed-neural-networks-and-recurrent-neural-networks

I have used TDNNs in the past for signal processing. My TDNN includes the past of N inputs, supporting M steps into the past. A flexible TDNN can also be configured to connect some of a hidden layer's past outputs to the next hidden layer.
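
As a minimal sketch of how such a time window can be built from a 1-D signal (illustrative only; names are mine):

import numpy as np

def time_delay_inputs(x, m):
    # Build TDNN-style input patterns: each row contains the current sample
    # plus the previous m samples (zero-padded at the start of the signal).
    padded = np.concatenate([np.zeros(m), x])
    return np.stack([padded[i:i + m + 1] for i in range(len(x))])

print(time_delay_inputs(np.array([1.0, 2.0, 3.0, 4.0]), m=2))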

https://neuron.eng.wayne.edu/tarek/MITbook/chap5/5_4.html

https://www.mathworks.com/help/deeplearning/ref/timedelaynet.html
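
As a minimal illustration of the time-window idea (the window length and feature count below are arbitrary choices of mine, not tied to any specific library), past samples can simply be stacked into one flat input vector so that a plain feedforward net sees several steps of history:

import numpy as np

def make_windows(x, window):
    """Stack `window` consecutive time steps into one flat input vector.

    x: array of shape (T, n_features), the raw signal
    returns: array of shape (T - window + 1, window * n_features)
    """
    T, n_features = x.shape
    rows = [x[t - window + 1 : t + 1].ravel() for t in range(window - 1, T)]
    return np.stack(rows)

# Example: 100 time steps of a 3-feature signal, 5 steps of history per input
signal = np.random.randn(100, 3)
X = make_windows(signal, window=5)   # shape (96, 15), ready for a normal MLP
print(X.shape)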

",44131,,44131,,1/26/2021 16:23,1/26/2021 16:23,,,,0,,,,CC BY-SA 4.0 25998,1,,,1/26/2021 16:18,,1,11,"

I am creating a signboard translation application from scratch. I have images of signboards where there are multiple texts, and I have the corresponding sets of coordinates of the bounding boxes for those texts. I want to create a regression model that will try to predict the coordinates if there is some text in the image. I am really stuck at one point. In some cases, I have multiple words in the image, so each word will have its own set of coordinates. So, how can I make a model such that, if there is a single word, it will output a single set of coordinates, but, if there are 5 words, it should give me 5 sets of coordinates? The number of outputs may vary with each image. What kind of neural net should I use? I don't want to use a sliding-window approach. Please help me out.

",41672,,,,,1/26/2021 16:18,How to predict multiple set of coordinates (of bounding boxes) for signboards text localization through neural network?,,0,0,,,,CC BY-SA 4.0 26001,2,,24440,1/26/2021 18:50,,2,,"

The book Machine Learning (1997) by Tom Mitchell covers case-based reasoning (CBR), a form of instance-based learning (nearest neighbor is the typical example of IBL) in chapter 8 (p. 230).

T. Mitchell writes

Instance-based methods such as $k$-NEAREST NEIGHBOR and locally weighted regression share three key properties. First, they are lazy learning methods in that they defer the decision of how to generalize beyond the training data until a new query instance is observed. Second, they classify new query instances by analyzing similar instances while ignoring instances that are very different from the query. Third, they represent instances as real-valued points in an $n$-dimensional Euclidean space. Case-based reasoning (CBR) is a learning paradigm based on the first two of these principles, but not the third. In CBR, instances are typically represented using more rich symbolic descriptions, and the methods used to retrieve similar instances are correspondingly more elaborate. CBR has been applied to problems such as conceptual design of mechanical devices based on a stored library of previous designs (Sycara et al. 1992), reasoning about new legal cases based on previous rulings (Ashley 1990), and solving planning and scheduling problems by reusing and combining portions of previous solutions to similar problems (Veloso 1992).

He then goes on and gives the example of a CBR system: the CADET system. He also formulates CBR as a learning problem and uses the term "learn" to refer to a search process that CADET goes through, which is similar to what k-NN does.

He then writes

To summarize, case-based reasoning is an instance-based learning method in which instances (cases) may be rich relational descriptions and in which the retrieval and combination of cases to solve the current query may rely on knowledge-based reasoning and search-intensive problem-solving methods.

To conclude, yes, CBR can be considered a machine learning technique (if you also consider k-NN a learning algorithm, which people often do), even though it may rely on knowledge-based reasoning and search-intensive problem-solving methods.

You may also be interested in the paper Representation in case-based reasoning (2005) by Ralph Bergmann et al. Moreover, the famous AIMA book (3rd edition) mentions case-based reasoning in chapter 19 (p. 799), which is dedicated to knowledge and learning.

",2444,,2444,,2/1/2021 1:53,2/1/2021 1:53,,,,0,,,,CC BY-SA 4.0 26002,2,,9924,1/26/2021 18:57,,2,,"

TL;DR: What makes AI is not if-then statements, but rather the automated reasoning that went into selecting those particular if-then statements.

You're focusing on the structure of the output rather than how the output was produced. Having if-then control flow statements is not sufficient to make a program "AI". AI aims to enable machines to solve problems which currently people are better at. Machine learning, a subset of AI, extracts useful patterns from data. A commonly cited example of expert systems is diagnostic medicine.

This raises a lot of questions, like "What does 'useful' mean?" If the problem we were addressing was finding cancer in medical images, "useful" might mean "able to accurately identify cancer in images at or above the rate of a skilled human examiner." There are also questions about the amount of data needed, the quality of the data, etc. These are outside the scope of your question.

There are various AI/ML systems that produce models consisting of if-then statements. Decision trees like C4.5 build a hierarchy of if-thens (and random forests combine many decision trees); a small sketch follows below. Learning classifier systems (both Michigan and Pittsburgh varieties) come out of genetic algorithms and form similar collections of logic.
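
As a small illustration (a sketch using scikit-learn on the Iris toy dataset, not tied to C4.5 specifically), a trained decision tree can be printed back as the if-then rules it selected from the data:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned model is literally a nested set of if-then tests,
# but those tests were selected automatically from the data.
print(export_text(tree, feature_names=load_iris().feature_names))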

",19703,,19703,,2/25/2021 23:31,2/25/2021 23:31,,,,1,,,,CC BY-SA 4.0 26007,1,32312,,1/26/2021 19:32,,5,188,"

Is there empirical evidence that some approaches to achieving AGI will definitely not work? For the purposes of the question the system should at least be able to learn and solve novel problems.

Some possible approaches:

  1. A Prolog program
  2. A program in a traditional procedural language such as C++ that doesn't directly modify its own code
  3. A program that evolves genetically in response to selection pressures in some constructed artificial environment
  4. An artificial neural net
  5. A program that stores its internal knowledge only in the form of a natural human language such as English, French, etc (which might give it desirable properties for introspection)
  6. A program that stores its internal knowledge only in the form of a symbolic language which can be processed unambiguously by logical rules
",44135,,44135,,1/27/2021 10:51,12/7/2021 0:03,Are there any approaches to AGI that will definitely not work?,,1,7,,,,CC BY-SA 4.0 26008,1,26024,,1/26/2021 20:53,,2,46,"

I have a data set of house prices and their corresponding features (rooms, square meters, etc.). An additional feature is the sold date of the house. The aim is to create a model that can estimate the price of a house as if it was sold today. For example, a house with a specific set of features (5 rooms, 100 square meters) and today's date (28-1-2020): what would it sell for? Time is an important component, because prices increase (inflate) over time. I am struggling to find a way to incorporate the sold date as a feature in the gradient boosting model.

I think there are a number of approaches:

  1. Convert the data into an integer, and include it directly in the model as a feature.
  2. Create a separate model for modelling the house price development over time. Let's think of this as some kind of an AR(1) model. I could then adjust all observations for inflation, so that we would get an inflation adjusted price for today. These inflation adjusted prices would be trained on the feature set.

What are your thoughts on these two options? Are there any alternative methods?

",40688,,40688,,1/28/2021 11:20,1/28/2021 11:20,House price inflation modelling,,1,0,,,,CC BY-SA 4.0 26009,2,,18234,1/26/2021 22:28,,0,,"

Are you using BinaryCrossentropy through TensorFlow? If so, check whether you are using the from_logits argument. I am using from_logits=True, which expects raw logits rather than probabilities, so it does not behave like the default BinaryCrossentropy loss.

",44140,,44140,,4/26/2021 16:18,4/26/2021 16:18,,,,0,,,,CC BY-SA 4.0 26010,1,,,1/27/2021 5:23,,0,62,"

Can anyone explain the following observation?

Why does the accuracy stay flat (a straight line) while the loss decreases very smoothly?

Is this because of the learning rate or other reasons?

Some info:

The input has dimensions (319, 50, 40), i.e. (batch, step, features).

The dataset consists of 319 samples. It was split using train_test_split() to yield 0.2 test size.

I used a self-attention LSTM model with 4 Dense layers and 1 self-attention LSTM layer. The code is long; I will post it if needed.

The hyperparameters:

batch_size= 100
epochs =1400
learning_rate = 0.00014
optimizer = 'RMS'
num_classes = y_train.shape[1]
dropout=0.37

In addition, if I don't set a random seed and keep shuffle=True, I sometimes get a horizontal line until the end of training, without any increase at all.

",44085,,44085,,1/28/2021 7:51,1/28/2021 7:51,Accuracy goes straight for about 200 epochs then start increasing,,0,5,,,,CC BY-SA 4.0 26012,2,,25947,1/27/2021 7:18,,2,,"

Your task is text recognition; however, your code is for a classification task, so you need a different approach. You mentioned that you're going to give the model 123 and get 123 back, but you cannot do that with convolutional networks alone. Images with text are sequential, so you need to use CRNNs (Convolutional Recurrent Neural Networks), LSTMs (Long Short-Term Memory) or BiLSTMs (Bidirectional LSTMs). In most research papers, convolutional networks are used only for the feature-extraction stage; for the prediction stage, recurrent units such as LSTM cells or RNNs are used.
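
A rough sketch of the CRNN idea in Keras (the layer sizes, input shape and alphabet size are placeholders of mine, and in practice such a model is usually trained with a CTC loss): convolutional layers extract features, the width axis is treated as time, and a bidirectional LSTM predicts a character distribution per time step.

import tensorflow as tf
from tensorflow.keras import layers

num_classes = 11  # assumption: 10 digits + 1 CTC "blank" symbol

inputs = layers.Input(shape=(32, 128, 1))                     # height, width, channels
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D((2, 2))(x)                            # 16 x 64
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D((2, 1))(x)                            # 8 x 64, keep width resolution
x = layers.Permute((2, 1, 3))(x)                              # width first: (64, 8, 64)
x = layers.Reshape((64, 8 * 64))(x)                           # width becomes the time axis
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)  # per-timestep characters

model = tf.keras.Model(inputs, outputs)
model.summary()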

",34371,,,,,1/27/2021 7:18,,,,1,,,,CC BY-SA 4.0 26013,1,,,1/27/2021 7:40,,1,39,"

I have an $x$-$y$ plane, inside that plane I have 9 paths $(p_1, p_2, \dots, p_3)$. Each path is classified into one of the three classes $(c_1, c_2, c_3)$. Each path has 100 coordinates points i.e $((x_1, y_1),(x_2, y_2), \dots, (x_n, y_n))$. Totally I have 1800 input coordinate points. Now I am interested in training the LSTM model in such a way that if I feed some test path $p_{10}$, the model should be supposed to predict which class it belongs to. This is my problem definition. Regarding this, I have some questions

  1. First of all, is it necessary to use LSTM models to obtain a solution?
  2. Are there any other simple models to attain a solution to this problem?
  3. I did some literature surveys for this kind of problem using LSTM, they are having time has one of the parameters along with $(x_1, y_1, t_1)$.

The paper I have read is "A Single-Shot Approach Using an LSTM for Moving Object Path Prediction".

I am a beginner to sequence model neural networks. A link or examples to similar works is very much beneficial.

",41520,,2444,,1/28/2021 21:01,1/28/2021 21:01,Using LSTM model to train spatial inputs,,0,2,,,,CC BY-SA 4.0 26015,2,,22844,1/27/2021 9:02,,2,,"

After some research on the internet, I realized that using VOSK toolkit in python, it can be found (detect) any particular word in audio file or real time audio streaming.

https://alphacephei.com/vosk/

",23216,,,,,1/27/2021 9:02,,,,0,,,,CC BY-SA 4.0 26016,2,,25947,1/27/2021 11:51,,2,,"

From your question there is no indication that there is any pattern to these digits. If there were, the recommendation for an LSTM or RCNN would make sense. In the case of random values, I have found that a two or three layer CNN that then descends through two parallel dense networks does an excellent job identifying CAPTCHA style random characters. One path is primarily responsible for identifying bounding boxes, the other is determining which characters are present.

There are many other ways to solve this problem. You might do well to research CAPTCHA solving via neural networks. More specifically, text based CAPTCHAs. If your task really is just OCR, then research OCR through neural networks. The technique I describe here will work but will become cumbersome for a page of text, for example. In that case a sliding window CNN coupled with a dense layer and an LSTM makes the most sense since you will be dealing with predictable sequences of characters.
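
To make the two-parallel-heads idea concrete, here is a rough sketch in Keras; everything below (the input size, a fixed maximum of 5 characters, 36 classes) is an assumption of mine, not a reference implementation:

import tensorflow as tf
from tensorflow.keras import layers

max_chars, num_classes = 5, 36                     # up to 5 alphanumeric characters

inputs = layers.Input(shape=(64, 160, 1))
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Head 1: bounding boxes, 4 numbers (x, y, w, h) per character slot
boxes = layers.Dense(256, activation="relu")(x)
boxes = layers.Dense(max_chars * 4, name="boxes")(boxes)

# Head 2: which character occupies each slot
chars = layers.Dense(256, activation="relu")(x)
chars = layers.Dense(max_chars * num_classes)(chars)
chars = layers.Reshape((max_chars, num_classes))(chars)
chars = layers.Softmax(name="chars")(chars)

model = tf.keras.Model(inputs, [boxes, chars])
model.compile(optimizer="adam",
              loss={"boxes": "mse", "chars": "categorical_crossentropy"})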

",30426,,,,,1/27/2021 11:51,,,,0,,,,CC BY-SA 4.0 26017,2,,25963,1/27/2021 14:22,,-1,,"

Regularization tries to discourage overly complex information from being learned; we want to prevent the model from simply memorizing the training data. We don't want it to learn very specific quirks of the training data that don't generalize well to test data.

Dropout: the idea of dropout is that, during training, we randomly set some of the activations of the hidden neurons to zero with some probability, say 0.5. This idea is extremely powerful because it lowers the network's capacity and prevents the network from building "memorization channels", where it tries to just remember the data, because on every iteration 50% of the activations are wiped out. The network is therefore forced not only to generalize better but also to maintain multiple channels through the network and build a more robust representation for its predictions. We repeat this on every iteration: on the first iteration we drop out one random 50% of the nodes, and on the next iteration we drop out a different randomly sampled 50% (which may include some of the previously sampled nodes). This allows the network to generalize better to new test data.

Early stopping: as the network improves its performance during training, there comes a point where the error on the training data starts to diverge from the error on the test data; at some point, the network starts to do better on its training data than on its test data. What this means is that the network is starting to memorize some of the training data, and that's exactly what you don't want. So we can identify this inflection point, where the test error starts to increase and diverge from the training error, and stop the network early to make sure that the test error is as low as possible.
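
A minimal sketch of both ideas in Keras (the dummy data, network sizes and patience value are just illustrative):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Dummy data just to make the sketch runnable
X = np.random.randn(1000, 20)
y = (X[:, 0] > 0).astype(int)

model = tf.keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dropout(0.5),                      # randomly zero 50% of activations
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Stop as soon as the validation loss starts to diverge from the training loss
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)

model.fit(X, y, validation_split=0.2, epochs=200, callbacks=[early_stop], verbose=0)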

",40565,,,,,1/27/2021 14:22,,,,1,,,,CC BY-SA 4.0 26018,1,26112,,1/27/2021 14:54,,2,113,"

I'm trying to implement a research paper, as explained in this other post, here the author of the paper assumed R as a function of both states and actions, while the code (and the MDP) I'm using to test this algorithm assumes R as a function of only states.

My question is:

Given $\mathcal{X}$ as the set of states of an MDP and $\mathcal{A}$ as the set of actions of an MDP. Supposing I have four states ($1$,$2$,$3$,$4$), two actions $a$ and $b$ and a reward function $R: \mathcal{X}\to\mathbb{R}$ s.t.

$R(1) = 0$

$R(2) = 0$

$R(3) = 0$

$R(4) = 1$

If I need to change the current reward function to a new reward function $R:\mathcal{X}\ \times \mathcal{A} \to\mathbb{R}$ is it ok to compute it as $\forall a,R(s,a) = R(s)$?

$R(1,a) = 0$

$R(1,b) = 0$

$R(2,a) = 0$

$R(2,b) = 0$

$R(3,a) = 0$

$R(3,b) = 0$

$R(4,a) = 1$

$R(4,b) = 1$

More generally, what's the correct way of generalising a reward function $R: \mathcal{X}\to\mathbb{R}$ to a reward function $R:\mathcal{X}\ \times \mathcal{A} \to\mathbb{R}$?
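
For concreteness, this is the lifting I have in mind, written as a small sketch in code (the names R_s, states and actions are just illustrative):

states = [1, 2, 3, 4]
actions = ["a", "b"]
R_s = {1: 0, 2: 0, 3: 0, 4: 1}             # the original R: X -> R

# Proposed lifting: R(s, a) = R(s) for every action a
R_sa = {(s, a): R_s[s] for s in states for a in actions}
print(R_sa)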

",30664,,30664,,1/31/2021 19:00,1/31/2021 19:03,"How can I go from $R(s)$ to $R(s,a)$ in this specific MDP?",,1,0,,,,CC BY-SA 4.0 26019,1,,,1/27/2021 16:56,,4,154,"

I have come to notice that the most commonly used activation functions are continuous. Is there any specific reason behind this? Works such as this paper have trained networks with discontinuous activations, yet this does not seem to have taken off. Does anybody have insight into why this is, or, better yet, an article discussing it?

",31649,,2444,,1/27/2021 17:02,1/27/2021 17:02,Why are most commonly used activation functions continuous?,,0,3,,,,CC BY-SA 4.0 26020,1,,,1/27/2021 18:30,,1,148,"

I'm trying to solve the OpenAI's CarRacing-v0 environment with the DDPG algorithm. I've observed that after a period of learning, the agent's performance starts to deteriorate slowly. For some hyperparameter configurations this is followed by a rebound and again a slump. Here's what a typical reward plot looks like:

Since I'm new to reinforcement learning (this is my first shot at it), I don't know if this a common phenomenon. I know of catastrophic forgetting, but I believe that's not the case here, since this is more akin to a "languishing dementia". As far as I understand, "catastrophic forgetting" is an abrupt event, which contrasts with a gradual change I've been seeing in my attempts.

Is this some kind of general phenomenon with coverage in the existing literature or is this rather a quirk of my specific setup (algorithm + hyperparameters) for which the solution would be "change the setup"?

For reference, the implementation I'm using: https://github.com/hirekk/pytorch-rl

",44156,,2444,,1/28/2021 12:24,1/28/2021 12:24,Gradual decrease in performance of a DDPG agent,,0,2,,,,CC BY-SA 4.0 26021,2,,25822,1/27/2021 19:00,,1,,"

After further research, I have found the answer. nbro was of course right: the weighting is not implementation-dependent, it was already introduced in the paper (arXiv). However, there is minimal information about it; it is only mentioned within the optimization objective:

$\arg \min_G \max_D \mathcal{L}_{cGAN}(G,D) + \lambda \mathcal{L}_{L1}(G)$

In fact, the parameter $\lambda$ is the weight of the $L_1$ loss. There should be a second parameter $\epsilon$ for $\mathcal{L}_{cGAN}(G,D)$, because in a real implementation you can actually change that as well; in theory, however, only modulation of the $L_1$ loss is needed. The paper does not state why the $\lambda$ parameter is needed, only mentioning that setting it to 0 leads to a pure cGAN implementation. Jason Brownlee states in his blog:

The adversarial loss influences whether the generator model can output images that are plausible in the target domain, whereas the L1 loss regularizes the generator model to output images that are a plausible translation of the source image. As such, the combination of the L1 loss to the adversarial loss is controlled by a new hyperparameter lambda, which is set to 10, e.g. giving 10 times the importance of the L1 loss than the adversarial loss to the generator during training.

The loss weights are thus hyperparameters that control how strongly the generator is pushed towards a plausible translation of the source image.
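
In code, the weighting amounts to a single line; here is a minimal sketch, where adv_loss and l1_loss stand for the two already-computed loss terms of your implementation (the values of lam and eps are only examples):

lam = 100          # weight of the L1 term; 0 gives the pure cGAN objective
eps = 1            # weight of the adversarial term (usually left at 1)

def generator_loss(adv_loss, l1_loss):
    # A larger lam pushes the generator towards outputs close (in L1) to the target;
    # a smaller lam lets the adversarial term dominate.
    return eps * adv_loss + lam * l1_loss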

",40205,,,,,1/27/2021 19:00,,,,1,,,,CC BY-SA 4.0 26023,1,26106,,1/27/2021 20:52,,2,255,"

I am working on a restricted reinforcement learning environment, i.e. the environment breaks very often (the communication between the simulator and the reinforcement learning agent breaks after some time), so it is getting difficult for me to continue training in this environment.

The continuous state-space is $\mathcal{S} \subseteq \mathbb{R}^{10}$ and the continuous action-space $\mathcal{A} \subseteq \mathbb{R}^{2}$.

What I want to know is whether I can add expert data to the replay buffer, given that DDPG is an off-policy algorithm?

Or I should go with the behavior cloning technique to train the actor-network only, so that it converges rapidly?

I just want to get the work done first and then I can think of exploring the environment.

",32531,,32531,,2/1/2021 13:11,2/1/2021 13:11,Can I add expert data to the replay buffer used by the DDPG algorithm in order to make it converge faster?,,1,0,,,,CC BY-SA 4.0 26024,2,,26008,1/27/2021 21:05,,1,,"

The sold date is a feature like any other. You can do this as follows. I am assuming the features are in a pandas data frame called df, where the date column is called date. The easiest way is to use the pandas to_datetime function. Documentation is here.

import pandas as pd

def encode_dates(df, column):
    df = df.copy()
    df[column] = pd.to_datetime(df[column])
    # Split the date into separate year/month/day features
    df[column + '_year'] = df[column].apply(lambda x: x.year)
    df[column + '_month'] = df[column].apply(lambda x: x.month)
    df[column + '_day'] = df[column].apply(lambda x: x.day)
    df = df.drop(column, axis=1)
    return df

df = encode_dates(df, 'date')

This function will modify the df data frame. It will create 3 new columns labeled date_year, date_month and date_day, and it will remove the date column from the data frame. Now these new columns, along with the other features, can be used to train your model.

",33976,,,,,1/27/2021 21:05,,,,0,,,,CC BY-SA 4.0 26029,1,,,1/28/2021 0:44,,1,298,"

I'm learning the basics of RL and I'm struggling to understand the notion of terminal state in MDPs.

To ask my question straightforwardly: is there a natural way to define the terminal state from the MDP transition probabilities $p(s',r|s,a)$? If I need to be more restrictive, assume a game setting, for example, chess.

My first hypothesis would be to define the terminal state as the state $s_T$ such that $p(s',r|s_T,a) = p(s',r|s_T)$, a state from which the transition is independent of the agent's actions. But that does not seem quite right. First, there is no particular reason why this state should be unique. Second, from this definition, it could also just be an intermittent state of "lag".

",44166,,2444,,1/28/2021 12:14,2/27/2021 13:05,"Is there a natural way to define the terminal state from the MDP transition probabilities $p(s',r|s,a)$?",,2,2,,,,CC BY-SA 4.0 26030,2,,3777,1/28/2021 0:59,,0,,"

Either you missed it or I don't fully understand why you were confused when you asked this question, but the same Bishop argues (in the same sentence where he says what you are wondering about) why tree-based models are popular in fields such as medical diagnosis.

A key property of tree-based models, which makes them popular in fields such as medical diagnosis, for example, is that they are readily interpretable by humans because they correspond to a sequence of binary decisions applied to the individual input variables. For instance, to predict a patient's disease, we might first ask "is their temperature greater than some threshold?". If the answer is yes, then we might next ask “is their blood pressure less than some threshold?". Each leaf of the tree is then associated with a specific diagnosis.

Nowadays, with the successes of neural networks (for example, in Go, Atari, image classification and segmentation, and even machine translation), which are not easily interpretable (so they are known as black-box models), there are always more studies/research on interpretable models or techniques to interpret black-box models, such as neural networks. You can take a look at this answer for a list of explainable/interpretable AI approaches that have been developed. This post contains many answers that further motivate the need for explainble AI.

",2444,,,,,1/28/2021 0:59,,,,0,,,,CC BY-SA 4.0 26031,2,,26029,1/28/2021 1:19,,1,,"

I don't know if there is a general definition of the terminal state based on the MDP transition probabilities.

But remember that we define our MDP problem with a set $\mathbb{S}$ of all possible states and a set $\mathbb{A}(s)$ of all possible actions for each state. Based on that, there probably aren't any possible actions for the terminal state, so the transition probability $p(s', r |s_T, a)$ can be undefined. For this reason, in episodic tasks we define the set $\mathbb{S}^+$, which also includes the terminal states, and distinguish it from $\mathbb{S}$.

",43945,,,,,1/28/2021 1:19,,,,0,,,,CC BY-SA 4.0 26033,1,26041,,1/28/2021 2:11,,9,1107,"

I am studying the state of the art of Reinforcement Learning, and my point is that we see so many applications in the real world using Supervised and Unsupervised learning algorithms in production, but I don't see the same thing with Reinforcement Learning algorithms.

What are the biggest barriers to get RL in production?

",43945,,2444,,1/28/2021 11:29,1/29/2021 7:04,What are the biggest barriers to get RL in production?,,2,0,,,,CC BY-SA 4.0 26034,1,26085,,1/28/2021 3:12,,0,77,"

I'm trying to improve my evaluation and I saw this here

materialScore = kingWt  * (wK-bK)
              + queenWt * (wQ-bQ)
              + rookWt  * (wR-bR)
              + knightWt* (wN-bN)
              + bishopWt* (wB-bB)
              + pawnWt  * (wP-bP)

How do I get the value, let's say wK? Do I get the position of the king and score it relative to the board? For example, suppose wK is safer than bK, so let's say wK - bK = 1 - 0.5. So, the result would be 90 * (0.5). Is this really how it works?

",44021,,2444,,1/28/2021 12:37,1/30/2021 12:25,What is the meaning of the terms in this evaluation function for chess?,,2,0,,,,CC BY-SA 4.0 26035,2,,26034,1/28/2021 4:16,,0,,"

The function you are showing (from this website) only calculates the score of a position based on how many pieces are on the board. It does not take into account where the pieces are located. So wK, bK, wQ, bQ, etc. are simply the number of the specific pieces that are on the board. The weights rank the pieces according to their importance.

So, for example, if white has 8 pawns and black has 6 pawns, the last term of the score would be pawnWt * (wP-bP) = 1 * (8-6) = 2 (assuming the value of a pawn to be 1). That gives white a relative advantage based only on pawns, but, depending on how many of the other pieces are on the board, black might still have an advantage.
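
For example, a quick sketch of that calculation in Python (the weights are the usual conventional piece values, and the piece counts here are made up):

# Conventional piece weights (the king's weight is made very large)
weights = {"K": 200, "Q": 9, "R": 5, "N": 3, "B": 3, "P": 1}

white = {"K": 1, "Q": 1, "R": 2, "N": 2, "B": 2, "P": 8}
black = {"K": 1, "Q": 1, "R": 2, "N": 2, "B": 2, "P": 6}

material_score = sum(weights[p] * (white[p] - black[p]) for p in weights)
print(material_score)   # 2 -> white is up two pawns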

",44121,,,,,1/28/2021 4:16,,,,0,,,,CC BY-SA 4.0 26036,1,26069,,1/28/2021 4:35,,3,355,"

In Sham Kakade's Reinforcement Learning: Theory and Algorithms, this equation (page 17) is used preceding the proof of performance difference lemma.

I am attempting to prove equation 0.6. Here is my current attempt:

\begin{align*} \mathbb{E}_{\tau \sim \rho^\pi}\left[\sum\limits_{t=0}^\infty \gamma^t f(s_t,a_t)\right] &= \sum\limits_{t=0}^\infty \gamma^t \mathbb{E}_{\tau \sim \rho^\pi} [f(s_t,a_t)]\\ &= \sum\limits_{t=0}^\infty \gamma^t \mathbb{E}_{s_t, a_t} [f(s_t,a_t)]\\ &= \sum\limits_{t=0}^\infty \gamma^t \sum\limits_{s, a} \mathbb{P}(s_t = s, a_t = a) f(s,a)\\ &= \sum\limits_{t=0}^\infty \gamma^t \sum\limits_{s} \mathbb{P}(s_t = s) \sum\limits_{a}\pi(a_t = a|s_t = s) f(s,a)\\ &= \frac{1 - \gamma}{1 - \gamma}\sum\limits_{t=0}^\infty \gamma^t \sum\limits_{s} \mathbb{P}(s_t = s) \mathbb{E}_{a \sim \pi(s)} [f(s,a)]\\ &= \frac{1}{1 - \gamma} \sum\limits_{s} (1-\gamma) \sum\limits_{t=0}^\infty \gamma^t \mathbb{P}(s_t = s) \mathbb{E}_{a \sim \pi(s)} [f(s,a)]\\ &=\frac{1}{(1-\gamma)} \mathbb{E}_{s \sim d^\pi}\left[\mathbb{E}_{a \sim \pi(s)}\left[f(s,a)\right]\right] \\ \end{align*}

Is the swapping of expectation and summation in this way allowed (given that the series converges)?

Note that this is not the proof of the performance difference lemma, but just an attempt to show equation 0.6, which is used but not proved in the book.

",44171,,44171,,1/29/2021 7:10,1/29/2021 22:09,"Is my proof of equation 0.6 in the book ""Reinforcement Learning: Theory and Algorithms"" correct?",,1,0,,,,CC BY-SA 4.0 26037,2,,26029,1/28/2021 6:46,,1,,"

As far as I remember, a terminal state is a state from which the agent cannot escape, i.e. if the agent reaches this state, it will never leave it. In mathematical notation, this can be written as: $$ p(s^{'}, r|s_T,a) = \delta_{s^{'}s_T} \delta_{rr_{S_T}} $$ where $\delta_{ab}$ is the Kronecker delta, and by $r_{S_T}$ I mean the reward collected by the agent sitting in the terminal state from now till the end of the episode.

This state doesn't have to be unique. Imagine a Markov chain as a set of points representing states, and arrows between the states with associated probabilities. Nothing prevents you from defining an MDP where several nodes have only one outgoing arrow, pointing back to themselves with probability 1.

",38846,,2444,,1/28/2021 12:18,1/28/2021 12:18,,,,0,,,,CC BY-SA 4.0 26038,1,,,1/28/2021 7:06,,0,181,"

In the paper Hopfield networks is all you need, the authors mention that their modern Hopfield network layers are a good replacement for pooling, GRU, LSTM, and attention layers, and tend to outperform them in various tasks.

I understand that they show that the layers can store an exponential amount of vectors, but that should still be worse than attention layers, which can focus on parts of an arbitrary-length input sequence.

Also, in their paper, they briefly allude to Neural Turing Machine and related memory augmentation architectures, but do not comment on the comparison between them.

Has someone studied how these layers help improve the performance over pooling and attention layers, and is there any comparison between replacing layers with Hopfield layers vs augmenting networks with external memory like Neural Turing Machines?

Edit 29 Jan 2021: I believe my intuition that the attention mechanism should outperform Hopfield layers was wrong, as I was comparing the Hopfield layer that uses an input vector for the query $R (\approx Q)$ and stored patterns $Y$ for both the key $K$ and the values $V$. In this case, my assumption was that the Hopfield layer would be limited by its storage capacity, while the attention mechanism does not have such constraints.

However, the authors do mention that the input $Y$ may be modified to ingest two extra input vectors for the key and value. I believe in this case it would perform a Hopfield network mapping instead of attention, and I do not know how the two compare.

",44174,,44174,,1/29/2021 13:22,10/25/2022 22:02,Reasoning behind performance improvement with hopfield networks,,1,5,,,,CC BY-SA 4.0 26039,1,,,1/28/2021 8:23,,1,162,"

The Context

From all of the problems I have worked with in computer vision, the most challenging one is object detection. This is not because the problem itself is hard to understand or badly formulated, but because we need to inject some strong priors about how we understand the world. Those priors are the anchors (priors about object shapes, aspect ratios, ...).

This prior information, although very simple to understand, is very hard to inject into the training logic, which makes the computation of the ground truth very messy and error-prone. It is even harder when different object detection backbones propose different ground-truth computation methods.

The Question

From mid-2019 till now there is a growing trend on research about one-stage object detectors that do not rely on anchors: hence dropping the costly NMS postprocessing and in some cases even the IoU computation. I would like to do a proof of concept with some of them so here is my question:

What are some good object detectors that do not use anchors? Or said in other words, what are the go-to object detectors for this new research trend?

",26882,,2444,,2/16/2021 15:11,2/17/2021 5:00,Object detection approaches without anchors and NMS,,0,3,,,,CC BY-SA 4.0 26040,2,,26033,1/28/2021 9:56,,4,,"

Technical barriers: There should be at least these common sense big barriers:

  • The trial-and-error technique makes the model hard to learn (too many trials and errors are needed), compared to ready-to-use supervised data
  • The number of time-steps (which usually equals the number of actions of the agent in the trajectory) is large, thus brute-force exploration won't work, as the number of trials needed to find errors is exponential, although negative rewards may help prune the brute-force tree.
  • Real-life RL takes an unlimited number of episodes (for each episode, a sequence of actions should be learnt), and incremental training gets harder and harder over time with more explored data, unless some past, no-longer-relevant data are removed; just like humans, we forget some of the past to learn more and remember more of the present.

The technical barriers are, first of all, the barriers to applying RL to business. People can produce some supervised data manually rather quickly, and thus supervised learning is usually opted for first; nobody wishes to try RL.

Harder to find human resources: AI engineers with experience in supervised learning are more common and easier to find; fewer work with RL, thus business projects are not carried out easily when using RL.

However, from my point of view, RL is very promising for the future, as AI entities are now more and more on their own.

",2844,,2844,,1/29/2021 7:04,1/29/2021 7:04,,,,0,,,,CC BY-SA 4.0 26041,2,,26033,1/28/2021 11:35,,7,,"

There is a relatively recent paper that tackles this issue: Challenges of real-world reinforcement learning (2019) by Gabriel Dulac-Arnold et al., which presents all the challenges that need to be addressed to productionize RL to real world problems, the current approaches/solutions to solve the challenges, and metrics to evaluate them. I will only list them (based on the notes I had taken a few weeks ago). You should read the paper for more details. In any case, for people that are familiar with RL, they will be quite obvious.

  1. Batch off-line and off-policy training
    • One current solution is importance sampling
  2. Learning on the real system from limited samples (sample inefficiency)
    • Solutions: MAML, use expert demonstrations to bootstrap the agent, model-based approaches
  3. High dimensional continuous state and action spaces
    • Solutions: AE-DQN, DRRN
  4. Satisfying safety constraints
    • Solutions: constrained MDP, safe exploration strategies, etc.
  5. Partial observability and non-stationarity
    • Solutions to partial observability: incorporate history in the observation, recurrent neural networks, etc.
    • Solutions to non-stationarity: domain randomization or system identification
  6. Unspecified and multi-objective reward functions
    • Solutions: CVaR, Distributional DQN
  7. Explainability
  8. Real-time inference
  9. System delays (see also this and this answers)

There's also a more recent and related paper An empirical investigation of the challenges of real-world reinforcement learning (2020) by Gabriel Dulac-Arnold et al, and here you have the associated code with the experiments.

However, note that RL (in particular, bandits) is already being used to solve at least one real-world problem [1, 2]. See also this answer.

",2444,,2444,,1/28/2021 16:44,1/28/2021 16:44,,,,0,,,,CC BY-SA 4.0 26045,1,26049,,1/28/2021 17:39,,0,112,"

I am told to express a fitness function for a question I have been presented. I am unsure how I would express the function. In words, what I have written down makes sense but turning this into a mathematical formula is proving a bit difficult. My understanding is:

The fitness function for this scenario will want to ensure that the best offer for building the computers is chosen whilst the price of the final optimal offer is low.

The fitness function in this case would need to consider a few factors. The first factor is that each of the returned offers contains a sufficient quantity of parts. Ideally, we would not have any duplicate parts across the offers. The cost should also be low, while all parts are covered amongst the different offers that we have.

The fitness function will need to ensure all of this is factored in.

The scenario and question are below:

For the production of a number of laptops, a computer company needs a quantity of each component such as screens (S), hard drives (HD), optical drives (OD), RAM, video cards (VC), CPU, Ports, etc. The company received a number of priced offers. Note that offers do not contain all components. As examples:

  • Offer 1: 1000 RAMs, 800 HDs, 2000 ODs – £75K
  • Offer 2: 1850 S, 1570 OD - £40K
  • Offer 3: 3000 HD, 2000 RAM – £70K
  • Offer 4: 1500 RAM, 2000 VC, 1700 S – £55K, etc.

The company would be interested to accept cheaper offers in the first place. Answer the following: Give the expression of the fitness function.

Any help would be greatly appreciated 😊.

",,user43972,2444,,1/28/2021 18:40,1/28/2021 18:55,How to design a fitness function for a problem where there are 2 objectives?,,1,0,,,,CC BY-SA 4.0 26046,1,,,1/28/2021 18:04,,0,109,"

I have to extract part of a source image, then I have to check if it is similar or almost similar to any of the 10 target images, so that I can do further processing on that one specific target image, which is similar to the source image. It's like template matching, but they have to loop over 10 different images to find whether a matching template is found in any of those images or not.

I wanted to use a CNN-based solution, as a classical distance-based solution is giving poor results.

Can I use a CNN for template matching, so that there is robustness, as the background of the target image is not that good, and it causes a problem? If some resource can be pointed that would be great too.

",44197,,2444,,1/28/2021 23:32,1/28/2021 23:32,"Can I use a CNN for template matching, so that there is robustness, as the background of the target image is not that good?",,0,5,,,,CC BY-SA 4.0 26049,2,,26045,1/28/2021 18:33,,0,,"

If we assume that each laptop requires 1 component of each type, so 1 screen, 1 hard drive, 1 RAM, etc. (i.e. the company has no preference for the type of component), then the company, to maximize the number of laptops it can build (which is supposedly the ultimate goal of the company), it should

  1. maximize the number of instances of the least available component in the offer, and

  2. minimize the cost of the offer

So, one possible fitness function would then need to take these 2 objectives into account, so this would be a multi-objective optimization problem.

Recall that a fitness function evaluates individuals in the population. If we assume that the individuals are the offers and the goal would be to find the best offer in the space of offers, then we can devise some fitness function $f$ of the form

$$ f(o) = (1 - \alpha)\frac{1}{1 + p(o)} + \alpha \min(o), \label{1}\tag{1} $$ where

  • $o$ is an offer, so $f(o)$ is the fitness of the offer $o$, i.e. how much it is desirable (i.e. we want the fitness to be high, usually).
  • $p(o)$ is the price/cost of the offer $o$
  • if we assume that $o$ is an array of the form $o = [1000, 800, 2000, \dots]$, where $o_i$ is the number of items of type $i$ (e.g. if $i$ is RAM, then $o_i$ would be the number of RAMs in the offer $o$), then $\min(o)$ would be the number associated with the least available item in the offer (e.g. if the individual was $[12, 4, 5]$, then $\min(o) = 4$)
  • $\alpha \in [0, 1]$ is a hyper-parameter that trades off the two objectives (i.e. prices and the smallest number)

The fitness function $f$ in equation \ref{1} will be maximal when $p(o) = 0$ and $\min(o) = N$, where $N$ is some maximum threshold of possible number of items of the same type that an offer can have.

I have just come up with this fitness function. I don't know whether it will work in practice or not, but this is the idea of what you have to do. You can design other similar fitness functions (for example, how would you design a fitness function that takes into account that the number of components of each type should be more or less the same?). You may want to try to implement this e.g. with DEAP and see how it behaves. You probably also want to make sure that $o$ are arrays of integers (and not floating-point numbers). You probably also want to read this answer.
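
Here is a minimal sketch of the fitness function in equation (1) in plain Python (no DEAP; the offer encoding and the value of $\alpha$ are just illustrative):

def fitness(counts, price, alpha=0.5):
    """counts: number of items of each component type in the offer,
    price: cost of the offer (e.g. in thousands of pounds)."""
    return (1 - alpha) * 1.0 / (1 + price) + alpha * min(counts)

# Offer 1 from the question: 1000 RAMs, 800 HDs, 2000 ODs for 75K.
# Components missing from the offer are counted as 0, which makes min(o) = 0
# and hence gives the offer a very low fitness.
print(fitness(counts=[1000, 800, 2000, 0, 0, 0, 0], price=75))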

",2444,,2444,,1/28/2021 18:55,1/28/2021 18:55,,,,0,,,,CC BY-SA 4.0 26055,2,,25904,1/29/2021 0:33,,1,,"

An environment is said to have a discrete state-space, when the number of all possible states of the environment is finite. For example, $3\times3$ Tic-tac-toe game has a discrete state-space, since there are 9 cells on the board and only so many different ways to arrange Os and Xs.

A state-space can be discrete regardless of whether integers or non-integers are used to describe it. For example, consider an environment where a state is represented with a single number. If the set of all possible states is $ \{0, 0.3, 0.5, 1\}$, your state-space is discrete, because there are only $4$ states. However, if all possible states is the set of real numbers from $0$ to $1$, than it's not discrete anymore - because there are infinitely many of them. State-space can still be discrete, even if possible states are represented with multiple numbers. For example, our environment could be $10\times10\times10$ cube, where the agent is only allowed to stand on integer coordinates. In this scenario, there are $1000$ different places where the agent can be and hence the state-space is discrete.

The Deep Q-Network can be designed to accept any type of input; just like a regular ANN, it's not restricted to only one integer. An input to DQN is the state of the environment, regardless of how it's represented. For the previous example, you can setup the DQN to have an input layer with $3$ neurons, each one accepting an integer that describes agent's position along the $x, y, z$ axes.
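
For example, a tiny sketch of such a network in Keras; the hidden sizes and the assumption of 6 actions (one step in either direction along each axis) are mine:

import tensorflow as tf

num_actions = 6   # assumption: +/- one step along each of the x, y, z axes

q_network = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(3,)),  # x, y, z
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(num_actions),   # one Q-value estimate per action
])
q_network.summary()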

One downfall of Q-learning is that, when the environment has a very large number of states and actions, representing each state-action pair becomes impractical in terms of memory. Think of chess, where there are so many different possible positions, and multiple available moves in each of them. Moreover, for an agent to learn properly, every state-action pair must be visited (the agent needs to determine the Q-values), which can be impractical in terms of training time.

The DQN algorithm takes care of these problems: it only needs to store the neural network (plus a few other things if you use a variation of DQN), and it doesn't need to visit every state-action pair to learn. The way it learns is by adjusting the weights and biases in the network in order to approximate the optimal policy. Given that the algorithm is implemented correctly, the agent should be able to pick up some useful patterns (or solve the environment).

I used this paper as a reference for one of my projects. It implements the DQN algorithm to learn to play Sungka (a game similar to Mancala), which has finite number of possible states and actions.

",38076,,,,,1/29/2021 0:33,,,,0,,,,CC BY-SA 4.0 26062,2,,22961,1/29/2021 2:07,,2,,"

The output of YOLO is (x, y, w, h, confidence, class). The confidence value indicates whether the rectangle holds an object; the rectangle is left unclassified when the confidence is low.

The class value is used only when the confidence is high.

",2844,,,,,1/29/2021 2:07,,,,0,,,,CC BY-SA 4.0 26063,1,,,1/29/2021 5:09,,2,47,"

I have created (well, not me exactly) a short film entirely made by AI. There are many short films (like Sunspring) 'written' by AI but acted out by humans. In my short film, the story is by the AI, the music is by the AI, the title art is by the AI, the visuals and acting are by the AI (yes, the AI can act) and the dialogue is by the AI. So, everything is AI. What I wanted to know is whether this is the first one like this. I can't seem to find others online.

",44211,,2444,,1/29/2021 20:59,1/29/2021 20:59,What is the first short film completely made by AI?,,0,3,,,,CC BY-SA 4.0 26065,1,,,1/29/2021 9:19,,1,538,"

Assuming we use an MSE cost function of the form

$$ \sum_s\mu(s)(V_{\pi}(S_t)-\hat{V}(S_t,\theta_t))^2 = E_{\mu(s)}[(V_{\pi}(S_t)-\hat{V}(S_t,\theta_t))^2])$$

The Stochastic Gradient Descent is used to approximate the true update algorithm, which looks like this

$$\theta_{t+1} = \theta_{t} - \frac{\eta_t}{2}\nabla_{\theta}(E_{\mu(s)}[(V_{\pi}(S_t)-\hat{V}(S_t,\theta_t))^2])$$

to this

$$\theta_{t+1} = \theta_{t} - \frac{\eta_t}{2}\nabla_{\theta}(U_t-\hat{V}(S_t,\theta_t))^2$$

where, for simplicity, $U_t$ represents an unbiased estimate of the true value function $V_{\pi}(s_t)$. This expression is the source of many learning algorithms used in reinforcement learning.

One of the conditions for SGD requires that the samples used for updating the parameters be i.i.d. according to the distribution $\mu(s)$. However, in both on-policy and off-policy learning methods, updates at each time-step are based on the trajectories generated. Since, along a trajectory, the state $s_{t+1}$ depends on the state $s_t$, the samples used to update $\theta_t$ and $\theta_{t+1}$ are not independent. Many, if not all, sample-based learning algorithms used in RL rely on SGD, such as the Gradient Monte Carlo algorithm, but I've not really seen anywhere that mentions these algorithms have the "issue" that I describe, so I feel like I'm missing something. More formally,


My Question: Does the fact that the samples used for parameter updates are not i.i.d. mean we can't really use stochastic gradient descent AT ALL in learning algorithms, and, if so, why then do these algorithms "work"?


As far as I know, this question applies equally to all forms of parameterised function approximation that are used with learning algorithms (tabular functions*, linear functions and non-linear functions). But, if anyone knows a special reason why these cases should be treated separately, could they please make that clear.

*I understand that, for learning algorithms with tabular functions, there exists theory beyond SGD that ensures convergence; however, I'm not entirely sure what this theory is and whether it makes them exempt, so if anyone knows whether or not it does make them exempt, could they also make this clear!


Edit:

It has been highlighted in the comments that replay buffers have been used to resolve the issue of correlated sampling in cases such as DQN and variants of it. This implies correlated sampling is an issue in these cases. Aside from this, I've not heard of replay buffers being used elsewhere (correct me if I'm wrong), so why are replay buffers needed with this off-policy NN approach but not in other learning algorithms given that they all suffer from the issue of correlated sampling.

",42514,,2444,,1/31/2021 0:14,2/2/2021 15:22,Can stochastic gradient descent be properly used in any sample based learning algorithm in Reinforcement Learning?,,2,9,,,,CC BY-SA 4.0 26066,1,,,1/29/2021 11:59,,3,126,"

I'm considering using GANs for medical image denoising, based on previous literature, like this and this. My input to the GAN would be a high-noise image and my ideal output would be a low-noise, high-quality image.

Is the GAN architecture better suited for applications where the inputs are just random noise? Is the discriminator necessary in this case or is it better to just use a Deep CNN/Autoencoder? How do I justify using a GAN for my application?

",44218,,2444,,1/30/2021 3:00,1/30/2021 3:00,Is the GAN architecture better suited for medical image denoising than the CNN?,,0,2,,,,CC BY-SA 4.0 26069,2,,26036,1/29/2021 19:46,,1,,"

The expectation of a sum is equal to the sum of the expectation this just follows from the linearity property of expectations

$$ \begin{aligned} E[\sum_{t} f(s_t,a_t)] &= \sum_{\tau} p(\tau)\left(\sum_t f(s_t,a_t)\right) \\ &= \sum_\tau\sum_{t}p(\tau)f(s_t,a_t) \\ &= \sum_t\sum_\tau p(\tau)f(s_{t,\tau},a_{t,\tau}) \\ &= \sum_tE[f(s_t,a_t)] \end{aligned} $$

note that in the penultimate line I swapped the sums around, which can be understood as looking at all trajectories for a single time-step instead of looking at time-steps within a single trajectory (the second expression). I append subscripts to the variables in the penultimate line for clarity, since they also depend on which trajectory they were drawn from.


As mentioned by nbro in the comments, the linearity property only holds for infinite sums if

$$\sum_{i=0}^\infty E[|X_i|]< \infty \quad \text{or} \quad E\left[\sum_{i=0}^{\infty}|X_i|\right] <\infty$$

More details can be found here. Using this, together with the fact that value functions in RL are always bounded (in continuing tasks a discount factor is introduced), we can say there exists an $M$ such that

$$ |f(s_{t,\tau},a_{t,\tau})| \leq M \qquad \forall t,\tau $$

Then

$$ \begin{aligned}\sum_t\sum_\tau \lambda^t p(\tau)|f(s_{t,\tau},a_{t,\tau})| &\leq \sum_t\sum_\tau \lambda^t p(\tau)M \\ &=M\sum_t\lambda^t\sum_\tau p(\tau) \\ &=M\sum_t\lambda^t \\ &< \infty \end{aligned}$$

As for the rest of the proof, is there anything else you need clarity on? It looks good to me. The step where you change the distribution from $p(\tau)$ to $p(s,a)$ is just marginalisation, but I think you got that.

",42514,,42514,,1/29/2021 22:09,1/29/2021 22:09,,,,1,,,,CC BY-SA 4.0 26074,2,,21135,1/29/2021 23:59,,2,,"

I am not aware of any empirical results regarding this question. But in theory, adding a regularization term shall make the learning task actually even harder, since there is suddenly a second loss term that the network has to be optimized for, which is not even directly related to achieving the original task of fitting the model to the data. It is true that the regularization term will try to drive as many of the weights towards low values as it can. But, at the same time, the other loss term (computed on the original optimization criterion) will try to drive many of the weights to larger values unequal to 0 in order to achieve the original training task. (If that wasn't the case, you would be good to go without regularization in the first place.) Having these competing interests shall then make achieving the original task harder, which I would in theory expect to be more time-consuming than, alternatively, allowing the model to overfit until a certain admissible error is reached.
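
To make the competing objectives concrete, here is a small sketch of what the optimizer actually minimizes with an L2 penalty (the names and the value of lam are just illustrative):

import numpy as np

def total_loss(task_loss, weights, lam=1e-4):
    """task_loss: scalar loss on the data; weights: list of weight arrays."""
    # The data term wants whatever weights fit the training set best,
    # while the penalty term wants every weight pushed towards zero.
    l2_penalty = sum(np.sum(w ** 2) for w in weights)
    return task_loss + lam * l2_penalty

print(total_loss(0.35, [np.ones((4, 4)), np.ones(4)]))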

After all, the goal of regularization is to improve generalization of a learned model (i.e. to prevent overfitting, as you said), which is about the quality of the outcome and not about how quickly you get to the desired outcome.

",37982,,,,,1/29/2021 23:59,,,,0,,,,CC BY-SA 4.0 26085,2,,26034,1/30/2021 12:25,,0,,"

Is this really how it works?

Yes and no. An evaluation function based on pure material advantage is a perfectly legal function. It's certainly better than nothing. However, it's too simple in practice.

The state-of-the-art methods involve neural networks. Google "Stockfish NNUE" for details.

",6014,,,,,1/30/2021 12:25,,,,0,,,,CC BY-SA 4.0 26087,1,,,1/30/2021 15:12,,1,43,"

I'm using genetic algorithms to train deep reinforcement learning (DRL) agents, similarly to what was done in this paper. DRL policies are therefore represented by deep neural networks, which map states into deterministic actions. My state space consists of three state variables $v_1, v_2$ and $v_3$. Variable $v_1$ is extremely noisy and seems to be degrading the performance (i.e. the return or discounted cumulative reward) of my RL agent but, for certain reasons, I have to include it. The return is precisely my fitness function.

Currently, my DNN looks like this:

There is only 1 output since the action space is 1-dimensional (one degree of freedom, which is used to control a system).

The DNN tends to overfit more quickly when $v_1$ is present. I'm considering creating a custom NN that looks like this:

By doing this I would reduce the complexity of the influence of the variable $v_1$ on the output, since the number of layers between $v_1$ and the output node would be reduced.

I have reasons to believe that the optimal control depends linearly or (something close to linearity) on $v_1$.

Does this make any practical sense and are there are reasons why one should avoid doing this?

",26195,,2444,,1/30/2021 20:02,1/30/2021 20:02,"If one of the inputs to a neural network (that represents a policy) is noisy and degrades the performance, would this architecture solve the issue?",,0,1,,,,CC BY-SA 4.0 26088,2,,25988,1/30/2021 16:48,,2,,"

If you know it is symmetric, then you could do a couple things.

  1. Zero out a half.

Don't bother learning both halves of the image. Just put a zero mask over the upper or lower half of the output matrix and just have the network regress the other half. Just don't make the network do more work than it needs to do.

  1. Learn both, but add symmetric loss

In your case, it looks like you could create two loss functions added together.

$focal(x, y) + focal(x^T, y)$

This will help the network learn both halves equally.

  1. L1 between $x$ and $x^T$

This might be silly, but adding a Huber loss between $x$ and $x^T$ might help promote symmetry; I'm not as much of a fan of it, though. Personally I'm more partial to (1).

  1. Combine (1) and (2)

You could take the loss function of (2) at train time but at test time just use whatever half had better metrics and copy that to the bottom half.

  1. Post Processing

Add a post processing custom layer that takes the average of the two halves so that you guarantee symmetry.

$x' = \frac{1}{2}(x + x^T)$

Then do your normal focal loss. Personally I like this the best since it always guarantees a symmetric output and is a pretty easy custom layer.
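
A sketch of option 5 as a custom Keras layer (purely illustrative; it assumes the network's output is a batch of square N x N maps):

import tensorflow as tf

class Symmetrize(tf.keras.layers.Layer):
    """Average a square output map with its transpose so the result is symmetric."""
    def call(self, x):
        # x is expected to have shape (batch, N, N): swap the last two axes
        return 0.5 * (x + tf.transpose(x, perm=[0, 2, 1]))

# Usage sketch: append it after the layer that produces the N x N map
x = tf.random.normal((2, 8, 8))
y = Symmetrize()(x)
print(tf.reduce_max(tf.abs(y - tf.transpose(y, perm=[0, 2, 1]))).numpy())  # ~0.0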

",17408,,17408,,2/4/2021 18:28,2/4/2021 18:28,,,,0,,,,CC BY-SA 4.0 26089,2,,25952,1/30/2021 17:51,,1,,"

According to Table S3 of the AlphaZero paper (p. 15)

AlphaZero was trained for 9 hours and, during these 9 hours, it played 44 million games of chess.

According to this Wikipedia article, the longest human lifespan is that of Jeanne Calment, who lived to age 122 years and 164 days.

Let's assume that humans cannot live more than 123 years (which is a reasonable assumption, although this record could eventually be broken). Let's also assume that a chess game lasts at least 10 minutes, which means that you can play at most 6 games in 1 hour, which means that you can play at most $6*24 = 144$ games in one day (assuming that you never sleep, which is, of course, impractical, but I'm just trying to show you an upper bound). Let's say that a year has 365 days. So, here is roughly the maximum number of games that a human could play $$ 6*24*365*123 = 6464880 \tag{1}\label{1}, $$ which is smaller than 44 million games by a factor of more than 6, i.e. any human could at most play 1/6 of the games that AlphaZero played, and \ref{1} is a very loose upper bound that doesn't take into account that humans need to sleep, eat, and do many other things.

So, humans learn to play chess a lot slower than AlphaZero. This is not surprising at all, given that computers can perform calculations a lot faster than us (that's why they are called computers), and this has been the case for many years. We (humans) just made computers make the right calculations for them to approximately play chess better than us. That's it.

",2444,,,,,1/30/2021 17:51,,,,2,,,,CC BY-SA 4.0 26091,2,,26065,1/30/2021 19:47,,3,,"

First I will address the issue of Tabular methods. These do not use SGD at all. Although the updates are very similar to an SGD update there is no gradient here and so we are not using SGD. Many Tabular methods are proven to converge, for instance the paper by Chris Watkins titled "Q-Learning" introduces and proves that Q-learning converges. Also you include tabular methods as being parameterised function approximators. This is not true. Tabular methods maintain an estimate of the value function in a look-up table for each state-action pair and there is no function approximator being used.

Now for non-tabular methods, i.e. Deep Reinforcement Learning. Here we are using SGD only to optimise the parameters of the networks (assuming of course that we are using NN's as our function approximators, but this is a fair assumption if you read the literature). This is why off-policy methods are typically preferred because they can use a replay buffer which allows the use of data from any past trajectory. When using a replay buffer we sample random past experiences which de-correlates the data and allows the i.i.d. assumption to hold when using SGD.
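
For reference, a replay buffer is conceptually very simple; here is a minimal sketch with uniform sampling (no prioritisation), just to illustrate how random sampling de-correlates the transitions:

import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)   # old experience is dropped automatically

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Sampling uniformly at random de-correlates consecutive transitions
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)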

If we were to use an on-policy algorithm, in theory SGD may not converge to any local optima because we are violating the i.i.d. assumption. However, all this means is that we are not guaranteed to converge. For instance, I have run an experiment using REINFORCE (an on-policy learning algorithm) using NN's as function approximators and they were able to obtain an optimal policy. However, this was for a very simple environment using modestly sized networks, so this is likely why they were able to be trained using non-i.i.d. data. A more in depth question/answer as to why NN's require i.i.d. data can be found here.

To address your edit, Replay Buffers are not required but if we can use them, then why would we not? It helps to maintain the i.i.d. assumption and so it helps us obtain a local optima. If we did not use them then there would not be the guarantee that it would converge. As an aside, Replay Buffers are typically used because they make the algorithm have a greater sample efficiency - this means we can obtain an optimal policy using much less data.

If you are wondering why don't on-policy methods use a replay buffer, the answer is because they are on-policy. The actions used in updates must be taken according to our current policy that we are learning the value functions for (or are optimising the policy of in Policy Gradient Methods). This is not the case in off-policy algorithms - e.g. in Q-learning we are learning the value functions of the greedy policy but we follow a different policy that allows for exploration.

",36821,,,,,1/30/2021 19:47,,,,10,,,,CC BY-SA 4.0 26093,2,,2723,1/30/2021 20:37,,0,,"

Yes, evolutionary algorithms (EAs) can be used to solve/play games too. For example, OpenAI has used evolution strategies (a subset of EAs that uses fixed-length real-valued vectors and self-adaptive mutation rates) to play Atari games. In this blog post, they write

We've discovered that evolution strategies (ES), an optimization technique that's been known for decades, rivals the performance of standard reinforcement learning (RL) techniques on modern RL benchmarks (e.g. Atari/MuJoCo), while overcoming many of RL's inconveniences.

There is also the related paper Evolution Strategies as a Scalable Alternative to Reinforcement Learning (2017) and code.

",2444,,2444,,1/30/2021 20:47,1/30/2021 20:47,,,,0,,,,CC BY-SA 4.0 26099,1,,,1/31/2021 7:53,,1,67,"

I was reading the ESRGAN whitepaper, where I came across this line:

Relativistic discriminator [2] is developed not only to increase the probability that generated data are real but also to simultaneously decrease the probability that real data are real.

Can somebody explain this?

",44251,,2444,,1/31/2021 14:39,1/31/2021 14:39,Why does the relativistic discriminator increase the probability that generated data are real and decrease the probability that real data are real?,,0,0,,,,CC BY-SA 4.0 26100,1,26107,,1/31/2021 8:57,,3,280,"

So far, I have not been able to find many papers that do not involve neural networks, so I was hoping I can gain some insight here. Any references would be greatly appreciated.

",41856,,2444,,1/31/2021 13:38,1/31/2021 13:38,Are there any active areas of research in machine learning that do not involve neural networks at all?,,1,1,,,,CC BY-SA 4.0 26104,1,,,1/31/2021 11:14,,3,54,"

So, YOLOv3 and RetinaNet both use feature pyramids, which look something like this (except b and e, which have one output):

I'm just confused about how the predictions and training are done. Do we have to give EACH feature map a different Y label? If yes, how is that possible? We would need N different ground truths, in my opinion. (Also, there'll be 3 different losses, I think?)

If not, then how are these done at once?

There is a lot of confusion around these networks because I am not able to get my head around How are y-labels provided, trained and predicted in YOLOv3 and RetinaNet. Everything about the loss, the multiple outputs, and so on will make sense if I know this one thing.

",36062,,,,,1/31/2021 11:14,How are Ground truth provided to each Pyramid map in RetinaNet or YOLOv3 Paper? How is the mapping of Feature Pyramids done to Ground Truth,,0,2,,,,CC BY-SA 4.0 26106,2,,26023,1/31/2021 12:00,,3,,"

What I want to know is whether I can add expert data to the replay buffer, given that DDPG is an off-policy algorithm?

You certainly can, that is indeed one of the advantages of off-policy learning algorithms; they're still "correct", regardless of which policy generated the data that you're learning from (and a human expert providing the experience to learn from can also be viewed as such a "policy").

There are potential issues to be aware of though. For example, if you just put some expert-generated data in there and don't allow your agent to explore by itself, the experiences that you can learn from may be quite limited in the parts of the state-action space that they explore. So if your expert does not sufficiently explore the entire space, you cannot expect the agent to learn how to act if for whatever reason it ever ends up in some unexplored space. This is no different from what would happen if you trained with an agent that had too little exploration (like a greedy agent).

Or I should go with the behavior cloning technique to train the actor network only, so that it converges rapidly?

I cannot confidently say which approach would work better, so I cannot really answer this... I imagine the answer may also be different for specific different problem domains. But the basic principle of learning from expert data with an off-policy algorithm is not inherently wrong.

",1641,,,,,1/31/2021 12:00,,,,1,,,,CC BY-SA 4.0 26107,2,,26100,1/31/2021 12:47,,3,,"

If you look into the top conferences on machine learning and neural networks, such as NeurIPS, ICLR, and ICML, you will find many papers related to neural networks and deep learning, given that these are still very hot/promising topics. However, occasionally, you will find accepted papers that do not involve neural networks. Here's a small list of them that I've found after a quick search.

So, yes, research in machine learning is not exclusively devoted to neural networks and deep learning. You can probably find more papers here or in the proceedings of similar conferences or journals.

",2444,,,,,1/31/2021 12:47,,,,0,,,,CC BY-SA 4.0 26110,2,,25949,1/31/2021 15:54,,1,,"

Assigning a value of $\infty$ to unvisited nodes is indeed the "default" or most basic choice, and it indeed ensures that the search never visits a node for a second time if it also still has siblings that have not had any visits. But many other kinds of values have been tried in the literature too.

Gelly and Wang, in "Exploration exploitation in Go: UCT for Monte-Carlo Go" referred to the parameter as "First-play Urgency" (FPU), and indeed really treated it as a hyperparameter; they simply tried various different values and some were found to work better than others.

Other more specific values that you may want to consider (but in most cases we can't really say much about which one will be better than the others without empirical evaluations) are listed below; a minimal sketch of how such a value plugs into the selection step follows the list:

  • The value of a loss; treating nodes that have not had any visits as losing nodes is probably the most "pessimistic" initialisation you could pick. It's not very clearly described in the papers, but there is some evidence out there that this was used by AlphaGo Zero and AlphaZero. Those are not simply vanilla UCTs though, those programs had very strong trained Deep Neural Networks to make additional recommendations for actions. If you don't have such strong neural networks to introduce additional biases in your selection, I would not recommend treating unvisited nodes as losing nodes.
  • The value of a draw; treating unvisited nodes as draws means that you may prioritise re-visiting nodes that look like winning nodes, but you won't prioritise re-visiting nodes that look like losing nodes.
  • The value of a win; this probably produces behaviour similar in practice to a value of $\infty$.
  • The average value backpropagated into the parent (correcting for differences in player-colour-to-move); you may prioritise re-visiting nodes that perform better than average in this subtree, but won't prioritise re-visiting nodes that perform worse than average in this subtree.
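
To make the role of such an initialisation value concrete, here is a minimal, hypothetical sketch of a UCB1-style selection step; fpu_value stands in for whichever of the above choices you pick, and all names (child.visits, child.total_value, etc.) are illustrative, not from a specific library:

import math

def select_child(children, parent_visits, c=1.414, fpu_value=float("inf")):
    """Pick the child maximising a UCB1-style score, using fpu_value
    (first-play urgency) as the value estimate for unvisited children."""
    best_child, best_score = None, -float("inf")
    for child in children:
        if child.visits == 0:
            value_estimate = fpu_value  # e.g. inf, loss, draw, win, or the parent's average
            exploration = c * math.sqrt(math.log(parent_visits + 1))
        else:
            value_estimate = child.total_value / child.visits
            exploration = c * math.sqrt(math.log(parent_visits + 1) / child.visits)
        score = value_estimate + exploration
        if score > best_score:
            best_child, best_score = child, score
    return best_child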
",1641,,1641,,1/31/2021 16:07,1/31/2021 16:07,,,,1,,,,CC BY-SA 4.0 26112,2,,26018,1/31/2021 16:47,,0,,"

As explained here, I can write $R(s,a) = R(s)\ \forall a$, since the reward of my specific MDP depends exclusively on the state $s$.

",30664,,2444,,1/31/2021 19:03,1/31/2021 19:03,,,,2,,,,CC BY-SA 4.0 26114,1,,,1/31/2021 18:25,,1,72,"

This is a question I posted here. I am asking it on this StackExchange branch as well, so that more people who could potentially answer get to see the question.

In the A3C algorithm from the original paper:

the gradient with respect to log policy involves the term

$$\log \pi(a_i|s_i;\theta')$$

where $s_i$ is the state of the environment at time step $i$, and $a_i$ is the action produced by the policy. If I understand correctly, the output of the policy is a softmax function, so that if there are $n$ different actions, then we get the $n$-dimensional vector output

$$\pi(s_i;\theta')=\left(\frac{e^{o_1(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}},\frac{e^{o_2(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}},...,\frac{e^{o_n(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}}\right),$$

where the $o_j(s_i)$ are the softmax layer activations obtained from forward propagation of state $s_i$ through the neural network.

Do I understand correctly that in the A3C algorithm above the term $\log \pi(a_i|s_i;\theta')$ refers to

$$\log \pi(a_i|s_i;\theta') = \log\left(\frac{e^{o_j(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}}\right)$$

with index $j$ referring to the position of the largest element in vector $\pi(s_i;\theta')$ above? Or maybe all action options should be contributing according to their probabilistic weights, like so

$$\log \pi(a_i|s_i;\theta') = \sum_{j=1}^n\log\left(\frac{e^{o_j(s_i)}}{\sum_{l=1}^n e^{o_l(s_i)}}\right)~~~?$$

Or perhaps neither of these expressions is correct? In that case, what is the correct explicit formula for the expression $\log \pi(a_i|s_i;\theta')$ in terms of softmax layer activations $o_j$?

",44259,,2444,,2/1/2021 12:11,2/1/2021 12:11,Understanding loss function gradient in asynchronous advantage actor-critic (A3C) algorithm,,0,0,,,,CC BY-SA 4.0 26119,2,,25860,1/31/2021 18:52,,1,,"

The upper bound used here is derived from Hoeffding's inequality, which provides a symmetric, two-sided confidence interval. A good pair of blog posts on how this bound used in UCB for bandits is derived can be found here:

  1. First steps: Explore-then-Commit
  2. The Upper Confidence Bound Algorithm

Indeed, in practice, when using this UCB for bandits, we do not actually care about the lower bound. We only need the upper bound for the exploration mechanism. But the lower bound does still exist, even if we don't use it.
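
As a small illustration (a minimal sketch, not taken from the linked posts; the names are made up), the index used for action selection only ever adds the Hoeffding half-width to the empirical mean; the symmetric lower bound would simply subtract it:

import math

def ucb_index(mean_reward, n_pulls, t, c=2.0):
    """Upper confidence bound for one arm: empirical mean plus the
    Hoeffding half-width. The lower bound (mean minus the half-width)
    exists too, but is never used for exploration."""
    half_width = math.sqrt(c * math.log(t) / n_pulls)
    return mean_reward + half_width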

",1641,,,,,1/31/2021 18:52,,,,1,,,,CC BY-SA 4.0 26120,2,,9510,1/31/2021 19:43,,2,,"

Yes, there is research on this topic. The field that studies it is known as affective computing (AC). Emotion recognition seems to be a specific problem in affective computing, i.e. the recognition of emotions, while AC is also concerned with giving machines the ability to convey emotions (in fact, this paper differentiates the two). There's also sentiment analysis, which refers to the analysis of the sentiment of people e.g. in social media (e.g. in comments) using natural language processing techniques, which doesn't seem exactly what you are looking for. These fields/tasks are all very related/similar, but I don't think they are exactly synonymous (although I'm not an expert in this topic).

Apart from the linked Wikipedia articles, you can find another list of related resources (including important papers, books, and software) here. If you're interested in an easy overview of the AC field, you should probably read the paper Affective computing: challenges, by Rosalind W. Picard (who is one of the leading researchers in this field). In this paper, she differentiates between emotions (what you observe/express with gestures, facial expressions, etc.) and feelings (internal states, which sometimes are not even clear to the person that is having them), and compares emotions to the weather. More precisely, when you say that the weather is windy, you're just giving a label to a fuzzy set of conditions: for example, there's the wind, it could start raining soon, etc., but these conditions are not always clear; for instance, how strong should the wind be in order for you to consider the weather windy? I find this a nice analogy that conveys what emotions are and why interpreting emotions seems to be a difficult task.

Additionally, you may also be interested in the loving AI project.

",2444,,2444,,1/31/2021 19:55,1/31/2021 19:55,,,,0,,,,CC BY-SA 4.0 26121,2,,17963,1/31/2021 19:48,,0,,"

You could rank every pixel in terms of brightness before normalization and after to verify that the ranks are preserved. Correct normalization would preserve the ranks. From pictures 1 and 2, it seems that the ranks were not preserved (e.g. the top right pixels went from grey to black).

The tutorial's normalization is done incorrectly: x_train is an array of 2D images, but the normalization was applied along the first axis. What happened is that each column of every image was normalized relative to itself (compare images 1 and 2 column-wise – the ranks were preserved). The normalization should have been applied along the second axis – that would normalize every pixel relative to the image it's in. It seems that the incorrect normalization wasn't enough to sabotage the learning, though! Your proposed normalization should work fine.
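
As a rough illustration (a minimal sketch assuming x_train has shape (num_images, height, width) and that per-image min-max scaling is what you want; the helper name is made up, not the tutorial's function):

import numpy as np

def normalize_per_image(x):
    """Min-max normalize each image relative to itself, so the pixel
    ranks within an image are preserved."""
    # x has shape (num_images, height, width)
    mins = x.min(axis=(1, 2), keepdims=True)
    maxs = x.max(axis=(1, 2), keepdims=True)
    return (x - mins) / (maxs - mins + 1e-8)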

",38076,,38076,,1/31/2021 22:22,1/31/2021 22:22,,,,0,,,,CC BY-SA 4.0 26122,1,,,1/31/2021 20:56,,4,119,"

As per these slides on page 35:

Sigmoids saturate and kill gradients.

when the neuron's activation saturates at either tail of 0 or 1, the gradient at these regions is almost zero.

if the local gradient is very small, it will effectively "kill" the gradient and almost no signal will flow through the neuron to its weights and recursively to its data.

So, if the gradient is close to zero, then the error correction would be minimal. But why would that cause no signal to flow through the neuron?

$$w(n+1) = w(n) - \text{gradient}$$

That would only cause the weights not to change.

",40052,,2444,,2/1/2021 0:19,2/1/2021 0:19,Why does sigmoid saturation prevent signal flow through the neuron?,,0,1,,,,CC BY-SA 4.0 26125,1,,,2/1/2021 6:42,,0,28,"

I have to create a model that can detect the user's emotion and stress level based on their mouse movement and keyboard typing activity. I haven't found any research work on this. Is there any research on this topic?

",44275,,2444,,2/1/2021 11:48,2/1/2021 11:48,Is there any research on the detection of the user's emotion and stress based on the mouse movement and keyboard?,,0,2,,,,CC BY-SA 4.0 26126,1,26127,,2/1/2021 8:12,,5,598,"

In Q-learning, all resources I've found seem to say that the algorithm to update the Q-table should start at some initial state, and pick actions (which are sometimes random) to explore the state space.

However, wouldn't it be better/faster/more thorough to simply iterate through all possible states? This would ensure that the entire table is updated, instead of just the states we happen to visit. Something like this (for each epoch):

for state in range(NUM_STATES):
  for action in range(NUM_ACTIONS):
    next_state, reward = env.step(state, action)
    update_q_table(state, action, next_state, reward)

Is this a viable option? The only drawback I can think of is that it wouldn't be efficient for huge state spaces.

",44278,,2444,,2/1/2021 11:46,2/7/2021 16:49,"In Q-learning, wouldn't it be better to simply iterate through all possible states?",,3,0,,,,CC BY-SA 4.0 26127,2,,26126,2/1/2021 10:09,,5,,"

If your algorithm is executed multiple (or enough) times using an outer loop, it would converge to similar results as Q-learning would with $\gamma = 0$ (as you don't look at the expected future reward).

In this case, the difference is that you would spend as much time exploring each possible (state, action) pair, while Q-learning would spend more time on the pairs that seem more promising, and, as you've said, this wouldn't be efficient for a problem with a huge number of (state, action) pairs.

If the algorithm is executed only once, then, even for a problem with few (state, action) pairs, you need to assume that an action performed in a state will always produce the same result for your method to work.

In most cases, this isn't true, either because there is some sort of randomness in the reward system or in the action (your agent can fail to perform an action), or because the state of your agent is limited to its knowledge and so doesn't represent the world perfectly (and so the consequence of its action can vary, just as if the reward had some randomness).

Finally, your algorithm doesn't look at the expected future reward, so it would be equivalent to having $\gamma = 0$. This could be fixed by adding a new loop that updates the table after your current loops if you execute your algorithm only once, or by adding the expected future reward directly to your Q-table update if there is an outer loop.
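
As a rough sketch of that fix (assuming the same hypothetical env.step and Q-table as in your question, a deterministic environment, and made-up names alpha and gamma), a full sweep over all pairs with the future value included would look something like:

# One full sweep over all (state, action) pairs, now including the
# discounted future value (gamma > 0) in the update target.
for state in range(NUM_STATES):
    for action in range(NUM_ACTIONS):
        next_state, reward = env.step(state, action)
        target = reward + gamma * max(q_table[next_state])
        q_table[state][action] += alpha * (target - q_table[state][action])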

So, in conclusion, without the outer loop, your idea would work for a system with few (state, action) pairs, where your agent has perfect and complete knowledge of its world, the reward doesn't vary, and the agent can't fail to accomplish an action.

While these kinds of systems indeed exist, I don't think that it's an environment where one should use Q-learning (or another form of reinforcement learning), except if it's for educational purposes.

With an outer loop, your idea would work if you are willing to spend more time training to have a more precise Q-table for the least promising (state, action) pairs.

",26961,,2444,,2/7/2021 16:49,2/7/2021 16:49,,,,4,,,,CC BY-SA 4.0 26128,1,26165,,2/1/2021 10:10,,-1,68,"
  • PROJECT: I am working on an e-commerce site where digital products can run out, so they need to be reordered 72h before they run out (reordering them sooner is not a problem, but getting the notification later would be, because if a product sells better than expected we cannot reorder it in time).
  • GOAL: to know at least 72h in advance whether a product will run out.
  • DATA COLUMNS: sales datetime, product id, current number of products, price of product, what currency it was purchased in, other data like profit, the currency used for the purchase…
  • SIZE: Before grouping I have a few million rows, after grouping hundreds of thousands, so it is a lot of data points, but Dask can handle them.
  • GROUPING COLUMNS: I have grouped the data by PURCHASEDATE & ID, so each day has the products that were sold with all their features. Features have been aggregated mostly by summing (profit, expenses) and by taking the mean (percentage features like margin %).
  • HOW FAR I HAVE GONE WITH THE PROJECT: I have looked up a couple of Kaggle projects online that are focused on this use case: https://www.kaggle.com/tejasrinivas/simple-xgb-starter
  • PROBLEMS: A.) Some products have been sold in the past but they sell out in 1-2 days, so it is hard to fit a trendline on them. B.) Some items have just 1-2 days of data because they only started selling a few days ago. C.) I also have data on products that have been sold a lot over a mid or long run (hundreds of days, thousands of times). So I could do time series modelling on the whole of the sales, but for each individual item I don't always have data on it.
  • CURRENT RESULTS: I have used XGBoost regression, like https://www.kaggle.com/tejasrinivas/simple-xgb-starter. It predicts the number of product sales well after the day is over, given all the features, but that is not the goal.
  • PROJECT RECOMMENDATIONS: I am trying to pick up ideas from the following competitions: https://www.kaggle.com/c/demand-forecasting-kernels-only/notebooks?competitionId=9999&sortBy=voteCount , https://www.kaggle.com/c/competitive-data-science-predict-future-sales/code
  • GOAL: a simple and easy solution, not an LSTM or something complicated, but something quick and easy to implement (like XGBoost regression, so if I have more data I can use rapids.ai to train it on GPU), because, as I said, it is not a problem if the prediction misses on the positive side and the item gets reordered 96h early instead of 72h early. I am guessing that somehow I should shift the dates, but, as I said, in many cases items don't have enough dates to shift their sales dates.
",36036,,1429,,2/7/2021 16:04,2/7/2021 16:04,What model to use to get a robust model to predict next 3 days of sales even for products that have just sold once ever?,,1,0,,,,CC BY-SA 4.0 26129,1,,,2/1/2021 10:23,,2,41,"

I am playing around with an idea of using Q-learning with a DQN (Deep Q-Network) to determine the optimal position of a number of 'units' on a grid of allowed locations, according to some reward-metric that is only calculated after placing all of the units. I am following much of Deep Convolutional Q-Learning with Python and TensorFlow 2.0, but I diverge with the grid output.

To do this, I use a CNN that takes in a one-hot encoded grid of the units (0=no units here, 1=one unit here), and outputs a grid with the same shape as the input with expected rewards of placing a unit at this location.

I can use the following TensorFlow/Keras code to get the expected rewards, where units can be placed in 3 dimensions and the channels determine the different unit styles:

import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential(name="DQN")
model.add(
    layers.Conv3D(
        filters=10,
        kernel_size=3,
        activation="relu",
        padding="same",
        input_shape=input_shape,
        bias_initializer="random_normal"
    )
)
model.add(layers.Conv3D(filters=10, kernel_size=3, activation="relu", padding="same"))
model.add(layers.Conv3D(filters=input_shape[-1], kernel_size=3, activation="relu", padding="same"))

model.compile(optimizer="adam", loss=tf.keras.losses.Huber(), metrics=["accuracy"])

Currently I am using a very simple training scheme, where Q-values are first generated from the current state. At the position where the agent earlier placed a unit, the calculated true reward is given and trained against. If the following state was a terminated state, the calculated reward is used directly, while the discounted reward is used in non-terminated states.

for state, action, next_state, reward, terminated in minibatch:
    # state: one-hot grid
    # action: index in the state where a unit is placed by the agent
    # terminated: True/False whether the 'next_state' is the terminated state

    q_target = model.predict(state)

    if terminated:
        q_target[0][action] = reward
    else:
        following_target = model.predict(next_state)
        q_target[0][action] = reward + gamma * np.amax(following_target)

    model.fit(state, q_target, epochs=1, verbose=0)

This means that only a single value in the entire training tensor is the true reward - all others are approximated by the CNN. However, all of the expected rewards are used in training, instead of only this single value. So I was considering whether it would be possible to train the CNN towards this single value only, and whether it would make any sense at all?

I thought of creating a custom loss function that would calculate the loss for this single action only, so that training is done against this. However, I can't really figure out how I would go about doing this. I've looked at something like Custom training with tf.distribute.Strategy, but I wasn't successful at it.

",44277,,,,,2/1/2021 10:23,Single-value loss/training in a CNN with a tensor output,,0,0,,,,CC BY-SA 4.0 26130,1,26859,,2/1/2021 12:16,,0,170,"

I need the Q-value for my RL training. There are some approaches:

  • Brute-force the action sequence (this won't work for long sequence)
  • Use a classic algorithm to optimise and estimate (this ain't much AI)
  • Create Monte Carlo samples and train an approximator network for calculating q-value

I find the Monte Carlo method above rather widely applicable to different problems, and the more computing power, the more precise it is. Are there any other methods for calculating the Q-value?

",2844,,2444,,3/17/2021 11:10,3/17/2021 11:10,What are the popular approaches to estimating the Q-function?,,1,7,,,,CC BY-SA 4.0 26131,1,26161,,2/1/2021 13:53,,1,78,"

I am currently experimenting with the U-Net. I am doing semantic segmentation on the 2018 Data Science Bowl dataset from Kaggle without any data augmentation.

In my experiments, I am trying different hyper-parameters, like using Adam, mini-batch GD (MBGD), and batch normalization. Interestingly, all models with BN and/or Adam improve, while models without BN and with MBGD do not.

How could this be explained? If it is due to the internal covariate shift, the Adam models without BN should not improve either, right?

In the image below is the binary CE (BCE) train loss of my three models where the basic U-Net is blue, the basic U-Net with BN after every convolution is green, and the basic U-Net with Adam instead of MBGD is orange. The learning rate used in all models is 0.0001. I have also used other learning rates with worse results.

",43632,,2444,,2/2/2021 22:59,2/2/2021 22:59,"Why does my model not improve when training with mini-batch gradient descent, while it does with Adam?",,1,0,,,,CC BY-SA 4.0 26136,1,,,2/1/2021 17:25,,3,124,"

Why are the weights of a neural net updated only considering the old values of the later layer, not the already updated values?

I use this example to explain my problem. When applying the backpropagation chain rule, the weights of the previous layer ($w_1, w_2, w_3, w_4$) are updated making use of the chain rule:

$$\frac{\partial E_{total}}{\partial w_1} = \frac{\partial E_{total}}{\partial out_{h1}} * \frac{\partial out_{h1}}{\partial net_{h1}}*\frac{\partial net_{h1}}{\partial w_1}$$

He then says:

$$\frac{\partial net_{o1}}{\partial out_{h1}}=w_5$$

Although he has already calculated the updated value for $w_5$, he uses the old value of $w_5$ to update $w_1$. Is that because the updated value of $w_1$ will have an impact on the outcome together with the updated value of $w_5$?

",21157,,2444,,2/1/2021 21:30,11/22/2022 20:05,"Why are the weights of the previous layers updated only considering the old values of the weights of the later layer, not the updated values?",,2,1,,,,CC BY-SA 4.0 26137,2,,26126,2/1/2021 19:43,,2,,"

In short, yes, provided that you have a small number of states.

In pretty much any real system, the number of states is much higher than you could ever hope to explore exhaustively in any reasonable time. This is why you need to set some sort of exploration/exploitation policy to make sure that you mostly visit promising states while also checking states that might look initially poor but may lead to better states as you explore further. As a few minutes' thought will convince you, determining the exact nature of that exploration/exploitation trade-off is probably the most important aspect of effective Q-Learning (and pretty much any other search algorithm, for that matter).

",12509,,,,,2/1/2021 19:43,,,,0,,,,CC BY-SA 4.0 26138,2,,26136,2/1/2021 20:57,,2,,"

The basic idea of gradient descent is:

  • Calculate the gradient of some score with respect to parameters that you can control

  • Take a step in the direction of that gradient that improves the score (subtract a multiple of the gradient - for gradient descent - if you want to minimise some cost function)

The backpropagation using chain rule in neural network layers is part of the first part, calculating the gradient.

If you interleaved these steps during a single calculation, by taking an update step before propagating the gradient over all layers, you would not propagate the true gradient, but some interim value. There is no guarantee this value would reflect the true gradient of the affected layer when compared to training data and loss functions.
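
A minimal sketch of the usual ordering (with made-up layer objects and method names, purely for illustration): all layer gradients are computed against the current, unchanged parameters before any of them is updated.

# 1) Backpropagate through ALL layers first, using the current weights.
grads = []
upstream = loss_gradient                              # dE/d(output of last layer)
for layer in reversed(layers):
    grads.append(layer.weight_gradient(upstream))     # dE/dW for this layer
    upstream = layer.input_gradient(upstream)         # gradient passed to the earlier layer

# 2) Only then take the gradient-descent step for every layer.
for layer, grad in zip(reversed(layers), grads):
    layer.weights -= learning_rate * grad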

There are some manipulations of the gradient that are acceptable and improve convergence. For instance momentum, and various forms of weighting based on previous gradient calculations. However, I am not aware of any successful attempt to perform updates with partially-calculated gradient then attempt to continue the interrupted gradient calculations over the mixed updated and not-yet-updated parameters.

It is also possible to calculate gradients and then only update some of the possible parameters. In some cases this is useful, for instance it is a good choice when performing transfer learning, by only updating a few layers close to the output. This constrains the network to keep a lot of its trained parameters as-is, which reduces the chances of over-fitting to the smaller dataset that the network is learning from when transferring.

",1847,,1847,,2/1/2021 22:17,2/1/2021 22:17,,,,0,,,,CC BY-SA 4.0 26144,2,,26126,2/2/2021 6:27,,-2,,"

The issue is trading off reward vs. learning: Q-learning attempts to learn while also doing things which produce reward (so, basically, it operates suboptimally).

I'm not sure if Q-learning is actually any different performance-wise; in the short term, Q-learning will (probably) produce more reward, but it will also miss states.

",32390,,,,,2/2/2021 6:27,,,,0,,,,CC BY-SA 4.0 26145,2,,26065,2/2/2021 7:29,,-1,,"

As David Ireland has mentioned in his answer, despite correlated sampling, learning algorithms can still converge to the optimal policy/value function.

The reason learning algorithms may still produce the correct results despite the use of correlated sampling is to do with another property that does not rely on i.i.d. sampling: the average regret of SGD tends to zero in the limit. A couple of sources I found that highlight this are

Minimising average regret implies that SGD will, in the limit, produce the set of parameters that performs best on the encountered samples; however, this does not indicate whether that set of parameters actually performs well in general.

Extra

This is in fact a general issue that extends beyond reinforcement learning algorithms. Programs that implement SGD don't typically sample i.i.d.! As mentioned in an MIT lecture on stochastic gradient descent, at around 40 minutes the lecturer discusses that it's more efficient to sample without replacement from the training set, making the samples non-i.i.d., for which there is not much theory.


caveat

I'm not 100% sure what minimising regret implies, but given that regret is of the form

$$\text{regret}(\theta_1,\ldots,\theta_T) = \sum_{t=0}^T\left(L(v_\pi(s_t),\hat{v}(s_t,\theta_t)) - L(v_\pi(s_t),\hat{v}(s_t,\theta_*))\right)$$

Then I understand the model that minimises regret at time $T$ as the one that has the minimum loss over all training samples encountered up to time $T$. But what I'm not sure about is how this relates to minimising the cost function; it probably requires another question!

",42514,,42514,,2/2/2021 15:22,2/2/2021 15:22,,,,2,,,,CC BY-SA 4.0 26149,2,,18204,2/2/2021 9:30,,0,,"

Well, no single event would confirm that we have implemented an AGI system. The G is short for "general". There would need to be many different sorts of tests in many different sorts of situations.

",17709,,,,,2/2/2021 9:30,,,,1,,,,CC BY-SA 4.0 26151,1,26155,,2/2/2021 12:57,,2,119,"

I have a basic question. I'm working towards developing a reward function for my DQN. I'd like to train an RL agent to edit pixels on an image. I understand that convolutions are ideal for working with images, but I'd like to observe the agent doing it in real-time. Just a fun side project.

Anyway, to encourage an RL agent to craft a specific image, I'm crafting a reward function that returns an $N \times N$ matrix, which represents the distance between the state of the target image (RGB values for each pixel location) and the image the agent crafted.

Generally speaking, is it better for rewards to be a scalar, or is using matrices okay?

",20271,,2444,,2/2/2021 22:38,2/3/2021 7:30,Can the rewards be matrices when using DQN?,,1,2,,,,CC BY-SA 4.0 26152,1,,,2/2/2021 14:25,,2,32,"

Problem description

I'm creating a clock with 4 seven-segment LED displays. In an effort to get more familiar with tensorflow, I figured I should try to drive this clock with use of a Neural Network.

The input of the network is the Unix time. Initially I wanted to make the network output a UINT32 value, which I can then bit-shift into the shift registers I use for driving the LEDs. Because this proved to be unsuccessful, I removed the last step, and instead went with 32 booleans, one for every LED segment, as output.

Current status

This last action was unfruitful as well; the best I can get my network to is a loss of about 0.48, indicating to me that its best effort is to guess what the output could be.

Input

print(x_validate[0])
print(y_validate[0])
3116360099
[0 1 1 0 0 1 1 0 1 1 1 1 0 0 1 0 1 1 1 1 1 1 0 0 1 1 1 1 1 1 0 0]

The input is the unix time code. I tried normalizing this by dividing by 2^32, but this didn't make any significant difference. Validation is an array with either 0 or 1 based on whether the LED needs to be on or off. The first 8 bits represent the first 7-segment display, etc. (The 8th bit is never used here because that's connected to the dot on the display.)

Data generation

# Number of sample datapoints
SAMPLES = 2000000

# Generate a uniformly distributed set of random numbers in the range from
# 0 to uint32 max
x_values = np.random.uniform(
    low=0, high=2**32, size=SAMPLES).astype(np.uint32)

# Shuffle the values to guarantee they're not in order
np.random.shuffle(x_values)

print(x_values)

# Time helper function 
def to_utc(x):
  return datetime.utcfromtimestamp(x).replace(tzinfo=timezone.utc).astimezone(pytz.timezone("Europe/Amsterdam")).strftime('%H%M')
to_utc_batch = np.vectorize(to_utc)

y_values = to_utc_batch(x_values)
print(y_values)

# translate to bitstream
def lookup(number):
  switch = {
      0: [1,1,1,1,1,1,0,0],
      1: [0,1,1,0,0,0,0,0],
      2: [1,1,0,1,1,0,1,0],
      3: [1,1,1,1,0,0,1,0],
      4: [0,1,1,0,0,1,1,0],
      5: [1,0,1,1,0,1,1,0],
      6: [1,0,1,1,1,1,1,0],
      7: [1,1,1,0,0,0,0,0],
      8: [1,1,1,1,1,1,1,0],
      9: [1,1,1,1,0,1,1,0]
  }
  return switch.get(number)

print(y_values)

def compile_output(value):
  f = []
  for i, c in reversed(list(enumerate(value))):
    f = f + lookup(int(c))
  return f

output_values = []

for y in y_values:
  output_values.append(compile_output(y))

y_values = output_values

Afterwards, the data is split into 3 sets:

Training: 60%
Validation: 20%
Testing: 20%

Model

model = tf.keras.Sequential()
model.add(keras.layers.Dense(32, activation='relu', input_shape=(1,)))
model.add(keras.layers.Dense(64, activation='relu'))
model.add(keras.layers.Dense(64, activation='relu'))

# Final layer
model.add(keras.layers.Dense(32, activation='sigmoid'))

model.compile( optimizer='adam', loss='binary_crossentropy', metrics=["accuracy"])

I went with binary_crossentropy and sigmoid as I figured the case is essentially a multi-label setup.

I have tried the following already, but did not succeed:

  • Add more layers
  • Make dense layers wider (to max of 512)
  • Add a Dropout layer -> so to have it try out more things to find the relation
  • Use Softmax activation
  • Normalize input data by dividing Unix time by 2^32
  • enlarge sample data to 20.000.000
  • do anything between 10 and 50 epochs ( I usually quit after 15 epochs, when absolutely no change was observed between the last 5 epochs)

Question

  • Why is this model not successful in finding a relation between the data?
  • How can I improve this model or the data so it will be able to succeed?

Bonus

When successful, ideally the output of the model would be a UINT32 number instead of this array of booleans. Any tips on how to get there would be appreciated as well.

Edit: Really sorry, left out this particular line:

# Train the model
history = model.fit(x_train, y_train, epochs=10, batch_size=64,
                    validation_data=(x_validate, y_validate))
",44316,,44316,,2/3/2021 12:09,2/3/2021 12:09,NN to find arbitrary transformation,,0,3,,,,CC BY-SA 4.0 26154,1,26163,,2/2/2021 15:43,,2,270,"

I'm trying to implement Deep Q-Learning for a pet problem having a continuous state space and discretized action space.

The algorithm for table-based Q-Learning updates a single entry of the Q table - i.e. a single $Q(s, a)$. However, a neural network outputs an entire row of the table - i.e. the Q-values for every possible action for a given state. So, what should the target output vector be for the network?

I've been trying to get it to work with something like the following:

q_values = model(state)
action = argmax(q_values)
next_state = env.step(state, action)
next_q_values = model(next_state)
max_next_q = max(next_q_values)

target_q_values = q_values
target_q_values[action] = reward(next_state) + gamma * max_next_q

The result is that my model tends to converge on some set of fixed values for every possible action - in other words, I get the same Q-values no matter what the input state is. (My guess is that this is because, since only 1 Q-value is updated, the training is teaching my model that most of its output is already fine.)

What should I be using for the target output vector for training? Should I calculate the target Q value for every action, instead of just one?

",44278,,2444,,2/2/2021 22:23,2/3/2021 10:51,What is the target output for updating a Deep Q Network,,2,0,,,,CC BY-SA 4.0 26155,2,,26151,2/2/2021 17:16,,3,,"

Generally speaking, is it better for rewards to be a scalar, or is using matrices okay?

Rewards need to be scalar, real values to match to standard theory of Markov decision processes (MDPs) and reinforcement learning (RL) methods.

Although it is possible to accumulate matrices in various ways, by e.g. simple matrix addition, and come up with an analog for expected return which would be a weighted sum of matrices, you then get stuck. There is no fixed way to rank matrices and decide whether one is a better result than another. This is a requirement for any learning process that aims to improve at a task - it needs feedback that changes it makes are better or worse related to some reference. As a result, most objective functions and metrics in optimisation use real-valued scalars, which can always be placed into order to decide a highest or lowest value.

This does not prevent you using a matrix representation for your project, if it is a natural fit. To turn it into a usable reward, you will need to convert that matrix into a real-valued metric. Perhaps the L2 norm or other standard measure that summarises the matrix will be good for your task.
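
A minimal sketch of that conversion, assuming your $N \times N$ distance matrix is a NumPy array (the negation, which makes smaller distances correspond to larger rewards, is just one possible design choice; the function name is made up):

import numpy as np

def reward_from_distance_matrix(distance_matrix):
    """Collapse the per-pixel distance matrix into a single scalar reward
    using the Frobenius (L2) norm; smaller distance => larger reward."""
    return -float(np.linalg.norm(distance_matrix))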

It is possible to process multiple scalar rewards at once with single learner, using multi-objective reinforcement learning. Applied to your problem, this would give you access to a matrix of policies, each of which maximised the reward value of one cell within the matrix. It also allows for switching between objectives in a hierarchical manner using a "policy of policies" if some preference for what to achieve changes. It is not 100% clear, but I do not think this is what you want to do.

",1847,,1847,,2/3/2021 7:30,2/3/2021 7:30,,,,2,,,,CC BY-SA 4.0 26157,2,,26038,2/2/2021 18:41,,0,,"

I will try to formulate my understanding of the ideas in this paper, mention my own concerns that I see as relevant to your question, and see if we can identify any confusions along the way that might clarify the issue.

In eq. (6) of the relevant blog post, they identify the weight matrix of a discrete, binary Hopfield network as

$$ \boldsymbol{W} = \sum_i^N x_i x_i^T $$

with $N$ raw stored entries, which are retrieved by iterating the initial guess $\xi$ with the following update rule

$$ \xi_{t+1} = \text{sgn}( \boldsymbol{W} \xi_{t} - b ) $$

Now to the paper in question, the update rule for the generalization they propose for continuous states that would be used is (eq 22 in OP):

$$ \xi_{t+1} = \boldsymbol{X} \text{softmax}( \beta \boldsymbol{X}^T \xi_{t} ) $$

where $\boldsymbol{X} = (x_1, x_2, \dots, x_N)$.

The first substantial difference I see is that, while in the case of binary entries all the weights of the network are encoded in the matrix $\boldsymbol{W}$ (hence the storage is constant regardless of how many actual patterns are stored in it), in this continuous-case generalized rule the $\boldsymbol{X}$ matrix grows linearly with the number of entries; in fact, it keeps all the stored entries directly. With their update rule, there doesn't seem to be a way around storing and keeping around the entire entries, and the update rule seems to only find a "best fit" among the entries, using the scalar dot product of the attention mechanism. I still think I might be missing something important here.
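
For reference, here is a minimal NumPy sketch of that continuous update rule (my own illustration, not code from the paper), which also makes the storage point visible: the full matrix X of raw patterns has to be kept around to perform retrieval.

import numpy as np

def hopfield_update(X, xi, beta=1.0):
    """One retrieval step xi_{t+1} = X softmax(beta * X^T xi_t),
    where X has shape (d, N) and stores the N raw patterns as columns."""
    scores = beta * (X.T @ xi)                 # similarity to every stored pattern
    weights = np.exp(scores - scores.max())    # numerically stable softmax
    weights /= weights.sum()
    return X @ weights                         # convex combination of stored patterns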

",18642,,18642,,2/2/2021 18:46,2/2/2021 18:46,,,,0,,,,CC BY-SA 4.0 26159,1,26160,,2/2/2021 20:20,,1,71,"

In Barto and Sutton's book, it's written that we have two types of updates in dynamic programming

  1. Update out-of-place
  2. Update in-place

The in-place update is the faster one. Why is that the case?

This is the pseudocode that I used to test it.

if in_place:
    state_values = new_state_values
else:
    state_values = new_state_values.copy()
old_state_values = state_values.copy()

for i in range(WORLD_SIZE):
    for j in range(WORLD_SIZE):
        value = 0
        for action in ACTIONS:
            (next_i, next_j), reward = step([i, j], action)
            value += ACTION_PROB * (reward + discount * state_values[next_i, next_j])
        new_state_values[i, j] = value

max_delta_value = abs(old_state_values - new_state_values).max()
if max_delta_value < 1e-4:
    break

Why is the in-place version faster, and what is the difference? What I think is that it is only better in terms of storage usage; I don't understand the speed increase part.

",43094,,2444,,2/2/2021 22:21,2/2/2021 22:21,Why is the update in-place faster than the out-of-place one in dynamic programming?,,1,0,,,,CC BY-SA 4.0 26160,2,,26159,2/2/2021 21:01,,0,,"

When you make updates in-place, then some of the entries in state_values[next_i, next_j] that you are referencing here

value += ACTION_PROB * (reward + discount * state_values[next_i, next_j])

will already be updated earlier in the same loop. Which means you get to use the latest and likely more accurate values earlier.

The strength of this effect varies depending on the order that states and actions are visited. If you can manage to visit them in reverse order that they would appear in natural trajectories, then the speed improvement will be very noticeable.
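
A minimal sketch of the difference (using hypothetical names such as states, values and bellman_backup, purely for illustration):

# Out-of-place: every backup in a sweep reads only last sweep's values.
new_values = old_values.copy()
for s in states:
    new_values[s] = bellman_backup(s, old_values)
old_values = new_values

# In-place: later backups in the same sweep already see the fresher
# values written earlier in the sweep, so information propagates faster.
for s in states:
    values[s] = bellman_backup(s, values)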

",1847,,,,,2/2/2021 21:01,,,,0,,,,CC BY-SA 4.0 26161,2,,26131,2/2/2021 21:11,,1,,"

Well, some time ago I also faced the same issue in a semantic segmentation task. Batch normalization is expected to improve convergence, because the normalization of activations prevents the explosion of the gradient magnitude and leads to steadier convergence.

Adam is an adaptive optimizer with momentum and division by the weighted sum of the squared gradients from previous iterations. https://towardsdatascience.com/adam-latest-trends-in-deep-learning-optimization-6be9a291375c.

The loss surfaces of neural networks are a difficult and poorly understood topic at present. I suppose that the poor convergence of SGD is caused by the roughness of the loss surface, where the gradient makes big leaps and jumps over the minima. The adaptive learning strategy of Adam, on the other hand, allows it to dive into the valleys.

",38846,,,,,2/2/2021 21:11,,,,0,,,,CC BY-SA 4.0 26163,2,,26154,2/2/2021 22:09,,3,,"

As you say, the output of a $Q$ network is typically a value for all actions of the given state. Let us call this output $\mathbf{x} \in \mathbb{R}^{|\mathcal{A}|}$. To train your network using the squared bellman error you need first calculate the scalar target $y = r(s, a) + \max_a Q(s', a)$. Then, to train the network we take a vector $\mathbf{x'} = \mathbf{x}$ and change the $a$th element of it to be equal to $y$, where $a$ is the action you took in state $s$; call this modified vector $\mathbf{x'}_a$. We calculate the loss $\mathcal{L}(\mathbf{x}, \mathbf{x'}_a)$ and back propagate through this to update the parameters of our network.

Note that when we use $Q$ to calculate $y$ we typically use some form of target network; this can be a copy of $Q$ whose parameters are only updated every $i$th update, or a network whose weights are updated using a Polyak average with the main network's weights after every update.
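
A minimal sketch of that target construction (all names are hypothetical; q_net and target_net are Keras-style models with one output per action, and state/next_state are single-element batches):

import numpy as np

def build_target_vector(q_net, target_net, state, action, reward, next_state, gamma):
    """Copy the network's own predictions and overwrite only the entry of
    the action actually taken with the bootstrapped target."""
    x = q_net.predict(state)[0]                                   # current Q-values for all actions
    y = reward + gamma * np.max(target_net.predict(next_state)[0])
    x_target = x.copy()
    x_target[action] = y                                          # only this entry carries a learning signal
    return x_target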

Judging by your code it looks as though your action selection is what might be causing you some problems. As far as I can tell you're always acting greedily with respect to your $Q$-function. You should be looking to act $\epsilon$-greedily, i.e. with probability $\epsilon$ take a random action and act greedily otherwise. Typically you start with $\epsilon=1$ and decay it each time a random action is taken down to some small value such as 0.05.

",36821,,36821,,2/2/2021 22:15,2/2/2021 22:15,,,,1,,,,CC BY-SA 4.0 26165,2,,26128,2/3/2021 0:50,,0,,"

If you have sold an item only once or only very few items, you will need some prior input (domain knowledge). One term to search for is intermittent time series. Here is a stored search.

When you have many related time series and an interest in both totals and single series, that is called hierarchical forecasting. One expert is here (the author of that blog was the founder of the sister site Cross Validated).

With time series forecasting it is often difficult to beat simple methods, see https://stats.stackexchange.com/questions/135061/best-method-for-short-time-series/135146#135146

",1429,,,,,2/3/2021 0:50,,,,0,,,,CC BY-SA 4.0 26166,1,,,2/3/2021 3:54,,0,93,"

Currently, I am looking at how Mask R-CNN works. It has a backbone, an RPN, heads, etc. The backbone is used to create the feature maps, which are then passed to the RPN to create proposals. Those proposals are then aligned with the feature maps and rescaled to some $n \times n$ pixels before entering the box head, mask head, or keypoint head.

Since conv2D is not scale-invariant, I think this scaling to $n \times n$ would introduce scale-invariant characteristics.

For an object that is occluded or truncated, I think scaling to $n \times n$ is not really appropriate.

Is it possible to predict the visibility of the object inside the box head (outputting not only xyxy [bounding box output], but also xyxy + x_size, y_size [bounding box output + width/height scale of the object])? These x_size and y_size values would then be used to rescale the $n \times n$ input.

So, if only half of the object is seen (occluded or truncated), inputs inside the keypoint head or mask head would be 0.5x by 0.5x.

Is this a good approach to counter occlusion and truncation?

",44305,,44305,,2/8/2021 0:38,6/26/2021 15:27,Improving Mask RCNN by arbitrary scaling head input,,0,2,,,,CC BY-SA 4.0 26167,2,,26154,2/3/2021 9:42,,1,,"

There are a couple of ways you can define the architecture of a DQN. The most common way of doing it is by taking in the state and outputting the value function of all possible actions - this leads to a DQN with multiple outputs. The other, less efficient, way is taking in a state-action pair as input and outputting a single real value - this approach is typically avoided since we need to run the model multiple times to get estimates for different actions.

The replay buffer is used to store $(S,A,R,S')$ transitions as encountered using your $\epsilon$-soft policy. We sample one of these transitions from the replay buffer, calculate an estimate of the value function for $(S,A)$, i.e. $\hat Q(S,A,\theta)$, and then calculate a target as follows: $$target = R+\max\limits_{a'}\hat Q(S',a',\theta^-)$$

Assuming you use the first model, you can then use a Squared error loss function, defined as follows, and modify your parameter as a function of that

$$L(\theta) = (target-\hat Q(S,A,\theta))^2$$

Assuming for now the target is fixed (I'll explain this in a minute), only $Q(S,A,\theta)$ is a function of $\theta$ in the loss function. $Q(S,A,\theta)$ corresponds to one output node of your DQN and therefore, as you've already highlighted, when carrying out EBP the parameters are updated such that we make the value of this one node tend to the specified target.

This is just how Q-learning works, we use samples generated by the behaviour policy to create $L(\theta)$ and then tweak the parameters to minimise the cost. As we do this for more and more samples the network hopefully figures out a way that accommodates for every sample it's been trained on so far (with more emphasis on the most recent samples).

As to your issue, are you sure you're training on multiple different samples and not just a specific one? it may just be a bug you've overseen.


Explaining $\theta^-$

I used a slightly different notation, $\theta^-$, for the parameters used to generate the bootstrapped estimate, $\max\limits_{a'}\hat Q(S',a',\theta^-)$. $\theta^-$ is only matched to $\theta$ every $n$th step because we want to keep the target as constant as possible. The reason for this is that Q-learning does not necessarily converge when using neural networks, partly due to bootstrapping, which can cause the optimisation to diverge because of state generalisation. By using this $\theta^-$ we help prevent things like this from happening.
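
A rough sketch of the two common ways $\theta^-$ is kept in sync with $\theta$ (illustrative pseudo-Python over NumPy-like parameter arrays, not a specific library API; pick one of the two schemes):

# Option 1 - hard update: copy the online parameters into the target
# network only every n-th training step, keeping the target fixed in between.
if step % n == 0:
    theta_minus = theta.copy()

# Option 2 - Polyak (soft) update: after every step, move the target
# parameters a small fraction tau towards the online parameters.
theta_minus = tau * theta + (1 - tau) * theta_minus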

Ultimately, the ideas of the replay buffer and the fixed parameters for bootstrapping are attempts to convert the RL problem into a supervised learning problem, because we know much more about how to deal with supervised learning problems when using DNNs.

",42514,,42514,,2/3/2021 10:51,2/3/2021 10:51,,,,0,,,,CC BY-SA 4.0 26172,1,,,2/3/2021 12:47,,2,95,"

Suppose I trained a neural network with 4 outputs (one for each action: move down, up, left, and right) to move an agent through a grid (a deterministic problem). The output of the neural network is a probability distribution over the 4 actions, due to the softmax activation function.

Is the policy (based on the neural network) a stochastic policy, even if the action space is discrete?

",44346,,2444,,2/3/2021 14:34,4/9/2021 21:33,"Is a learned policy, for a deterministic problem, trained in a supervised process, a stochastic policy?",,1,6,,,,CC BY-SA 4.0 26173,2,,26172,2/3/2021 13:25,,1,,"

Is the policy (based in the neural network) a stochastic policy? even if the action space is discrete?

Yes. A discrete action space does not require a deterministic policy - it is possible to assign arbitrary probabilities to each action in each state provided each probability is in the range $[0,1]$ and the sum across all allowed actions is $1$. The two concepts of determinism and discrete actions are entirely separate.

The optimal policy in many situations can be deterministic. If there is only one deterministic optimal policy, your learned policy should be also close to deterministic if the learning process has been successful. That is, the probabilities of optimal actions should all be close to $1$, all the rest close to $0$.

If there is more than one possible optimal policy, your learning agent may have learned a stochastic "mix" of them where in some states it is equally good to take more than one action and the probabilities may be split between those good actions. This will still be optimal and not a problem. If that is the case you should expect to see many action choices close to $0$ and in each state a select few (maybe one) that sum to close to $1$ between them.

In the case of discrete actions, you can derive a deterministic policy from your neural network function by taking the argmax of the action probabilities. This is worth trying. It will round away bad actions that are close to 0 probability due to approximation in the neural network.
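
As a small illustration (assuming action_probs is the softmax output of the network for one state; the values are made up):

import numpy as np

action_probs = np.array([0.02, 0.93, 0.04, 0.01])  # e.g. down, up, left, right

stochastic_action = np.random.choice(len(action_probs), p=action_probs)  # sample from the policy
deterministic_action = int(np.argmax(action_probs))                      # greedy, rounds away near-zero noise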

In practice sometimes a little randomness in a policy works better for real-world problems with imprecise measurements or other unknowns. It may even be necessary for adversarial environments or where there is key information missing. The only way to find out though is to try both stochastic and deterministic interpretations of the neural network output for your policy.

",1847,,36737,,4/9/2021 21:33,4/9/2021 21:33,,,,5,,,,CC BY-SA 4.0 26174,1,,,2/3/2021 14:33,,2,487,"

I am trying to find a research paper with theory (and preferably an implementation) about classifying 1000 (or more) classes. I have heard of an implementation where clustering needs to be done first, then classification with something like softmax. Does anyone know of any research paper that implements 1000+ class classification?

",38060,,,,,2/3/2021 15:24,How to go about classifying 1000 classes?,,1,0,,,,CC BY-SA 4.0 26175,1,,,2/3/2021 14:43,,3,388,"

In every computer vision project, I struggle with labeling guidelines for border cases. Benchmark datasets don't have this problem, because they are 'cleaned', but in real life unsure cases often constitute the majority of data.

Is 15% of a cat's tail a cat? Is a very blurred image of a cat still a cat? Are 4 legs of a horse, with the rest of its body out of the frame, still a horse?

Would it be easier or harder to learn a regression problem instead of classification? I.e. by taking 5 subclasses of class confidence (0.2, 0.4, 0.6, 0.8, 1.0) and using them as soft targets?

Or is it better to just drop every unsure case from training or/and testing set?

I experimented a lot with different options, but wasn't able to reach any definitive conclusion. This problem is so common that I wonder if it has already been solved for good by someone.

",27994,,2444,,2/4/2021 11:29,2/5/2021 23:11,How to treat (label and process) edge case inputs in machine learning?,,1,0,,,,CC BY-SA 4.0 26176,2,,26174,2/3/2021 15:24,,4,,"

If you are asking about an arbitrary ML task dealing with 1000+ classes, the most straightforward thing that comes to mind is ImageNet - https://en.wikipedia.org/wiki/ImageNet#cite_note-nytimes_2012-2. It has more than 20k categories at present.

In order to perform such a classification task, you need

  • A large enough dataset, such that each class occurs frequently enough or efficient data augmentation
  • A powerful model, such that can adapt to variety of different problems

The first neural network to tackle this problem was AlexNet, and since then a plethora of architectures - VGGs, ResNets, EfficientNets - have achieved better and better quality.

They have a lot of filters that react to different patterns.

",38846,,,,,2/3/2021 15:24,,,,0,,,,CC BY-SA 4.0 26179,1,26228,,2/3/2021 18:17,,6,606,"

In cases where the reward is delayed, this can negatively impact a model's ability to do proper credit assignment. In the case of a sparse reward, are there ways in which this can be mitigated?

In a chess example, there are certain moves that you can take that correlate strongly with winning the game (taking the opponent's queen) but typically agents only receive a reward at the end of the game, so as to not introduce bias. The downside is that training in this sparse reward environment requires lots of data and training episodes to converge to something good.

Are there existing ways to improve the agent's performance without introducing too much bias to the policy?

",7858,,2444,,2/8/2021 14:00,2/8/2021 14:00,How to improve the reward signal when the rewards are sparse?,,1,0,,,,CC BY-SA 4.0 26180,2,,9812,2/3/2021 18:59,,0,,"

In addition to this answer, here's a more formula-based answer, which attempts to clarify the difference between the dimensionality of a state and the size of the state space.

Let's denote our state space, i.e. the space of states, by $\mathcal{S}$. Let's say that $\mathcal{S}$ is a subset of $\mathbb{R}^N$, i.e. $\mathcal{S} \subseteq \mathbb{R}^N$. So, in this case, a state $s \in \mathcal{S}$ is a vector of $N$ real numbers. Depending on $N \in \mathbb{N}$, the dimensionality of the states can be big or not. If $N = 1$, then a state is a real number, so the dimensionality of the state is small. If $N = 10^{40}$, the dimensionality of the state is huge.

To be more concrete, let $\mathcal{S} = \{a, b \}$, $a, b \in \mathbb{R}^N$ and $N= 10^{40}$, then the state space is small, i.e. it contains only 2 states ($a$ and $b$), but the dimensionality of $a$ and $b$ is huge.

You could also have $\mathcal{S} = \{a, b \}$, but $a, b \in \mathbb{R}$. In that case, both the state space and the dimensionality of the states is small.

In machine learning (and in the case of DQN), images, unless they are very small (e.g. $5 \times 5$), are typically considered high dimensional feature vectors (or observations in the case of the DQN). In the case of DQN (figure 1), the input was a $84 \times 84 \times 4$ multi-dimensional array, so each state was relatively high dimensional (i.e. you need $28224$ real numbers to represent each state).

",2444,,2444,,2/4/2021 11:00,2/4/2021 11:00,,,,0,,,,CC BY-SA 4.0 26181,1,,,2/3/2021 20:26,,0,336,"

Below is the Python code for making an ensemble model. All the inputs are the same for all three models. But what if the models have different input shapes due to different window sizes, as in LSTM models? Then the input shape for Model A would be (window_size_A, features) and for Model B it would be (window_size_B, features). The window sizes are different, but the number of features is the same. As such, due to the different window sizes, the training data from the same dataset is split differently for each model, such that X_train.shape for Model A is (train_data_A, window_size_A, output) and for Model B it is (train_data_B, window_size_B, output). Note the training data is from the same dataset, but the length is different due to the different window sizes. How would you make an ensemble of these models?

from tensorflow import keras
from tensorflow.keras import layers

def get_model():
    inputs = keras.Input(shape=(128,))
    outputs = layers.Dense(1)(inputs)
    return keras.Model(inputs, outputs)


model1 = get_model()
model2 = get_model()
model3 = get_model()

inputs = keras.Input(shape=(128,))
y1 = model1(inputs)
y2 = model2(inputs)
y3 = model3(inputs)
outputs = layers.average([y1, y2, y3])
ensemble_model = keras.Model(inputs=inputs, outputs=outputs)
",44361,,1641,,2/7/2021 16:04,2/7/2021 16:04,How to make an ensemble model of two LSTM models with different window sizes i.e. different data shapes,,0,5,,,,CC BY-SA 4.0 26182,2,,18143,2/3/2021 22:06,,2,,"

@The Pointer, the $2^n$ comes from the question: how many functions do we need if each of the $n$ inputs can be missing? For example, $f_1(\text{missing}, x_2, x_3, \dots, x_n)$ for $x_1$ missing, and $f_2(x_1, x_2, \text{missing}, x_4, \text{missing}, \dots, x_n)$ for $x_3$ and $x_5$ missing.

So this is a combinatorial problem, and the event for each $x_i$ is "missing" or "not missing". Each function corresponds one-to-one with a possible pattern such as $(x_1, \text{missing}, x_3, \dots, x_n)$. So how many such patterns can you form? $2^n$. Why?

This formula comes from developing a tree of choices. The first variable $x_1$ is missing or not (2 possible events). Then, for EACH of these events, there are 2 possible events for $x_2$, so in total $2 \times 2$ (2 for $x_1$, 2 for $x_2$ for each event of $x_1$), and so on: $2 \times 2 \times 2 \times \dots$ repeated $n$ times $= 2^n$.
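
To make the counting concrete, here is a tiny illustrative snippet (it is not from the original question; it just enumerates the missing/not-missing patterns):

from itertools import product

n = 3  # number of input variables
# each pattern says, for x_1..x_n, whether that input is missing (True) or present (False)
patterns = list(product([False, True], repeat=n))
print(len(patterns))  # 2**n = 8, i.e. one function per pattern
for p in patterns:
    print(["missing" if m else f"x{i+1}" for i, m in enumerate(p)])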

",34420,,16521,,2/4/2021 10:57,2/4/2021 10:57,,,,1,,,,CC BY-SA 4.0 26184,1,,,2/4/2021 4:31,,1,278,"

I've been reading about transformers & have been having some difficulty understanding the concept of alignment.

Based on this article

Alignment means matching segments of original text with their corresponding segments of the translation.

Does this mean that, with transformers, we're adding the fully translated sentences as inputs too? What's the purpose of alignment? How exactly do these models figure out how to match the different segments together? I'm pretty sure there's some underlying assumption/knowledge that I'm not fully getting -- but I'm not entirely sure what.

",44369,,2444,,2/6/2021 15:09,6/25/2022 15:02,"What is the purpose of ""alignment"" in the self-attention mechanism of transformers?",,1,1,,,,CC BY-SA 4.0 26187,1,,,2/4/2021 8:32,,1,125,"

My question concerns Stochastic Combinatorial Multiarmed Bandits. More specifically, the algorithm called CombUCB1 presented in this paper. It is a UCB-like algorithm.

Essentially, in each round of the sequential game the learner chooses a super-arm to play. A super-arm is a $d$-dimensional vector $a \in \mathcal{A}$ where $\mathcal{A} \subset \{0,1\}^d$. In each super-arm $a$, when the $i$-th element equals $1$ (i.e. $a(i)=1$ for $i \in \{1, \dots, d\}$), the basic action $i$ is active. Basically, in each round the learner plays the basic actions that are active in the chosen super-arm. The rewards of the basic actions are stochastic, and a super-arm receives as its reward the sum of the rewards of the active basic actions.

The paper mentioned above presents a UCB-like algorithm, where a UCB index is associated with each basic action, and in each round the learner plays the super-arm that maximises that index. My question concerns the confidence interval around the mean of the rewards of the basic actions, presented in equation $2$ of the mentioned paper. Here, the exploration bonus is

$c_{t,s} = \sqrt{\frac{1.5 \log t}{s}}$

I don't understand where that $1.5$ is coming from. I've always known that one needs to use the Chernoff-Hoeffding inequality to derive the exploration bonus in a UCB algorithm. Am I wrong, and does it need to be computed in another way? I've always seen the same coefficient but with $2$ instead of $1.5$ (reference). Could someone please explain where the $1.5$ comes from?

I know there is a similar question here, but I cannot really understand how that works here.

Thank you in advance in case you have time to read and answer my questions.

",44190,,44190,,2/5/2021 12:53,2/5/2021 12:53,UCB-like algorithms: how do you compute the exploration bonus?,,0,2,,,,CC BY-SA 4.0 26188,1,26189,,2/4/2021 9:02,,2,333,"

I'm new to AI but would still like to try and get a project off the ground. I've read a lot about ML/DL the past few days but I just can't figure out if my problem can be solved with ML/DL. What I'm trying to do looks like a classification job to me but maybe isn't.

I have 100s of images of compacted soil samples; on these images there may be multiple layers visible. I will include a picture below; this sample had a sticker on it, but normally they don't. On the image there are 3 layers, separated above and under the sticker. With every image there is data (an xml file) available on the size of the layer(s) and the type of soil in each layer, which costs a lot of time to produce, so I want to automate this classification in the future. The data files contain info like:

layer0:
    type 004
    2cm
    12cm
layer1:
    type 003
    12cm
    25cm

If there were just one layer, the AI could learn what these layers look like and sort them into the right soil class. But I don't know if my problem can be solved with AI, as there could be 1, 2, 3 or 4 different layers (classes) in one image, and I haven't seen any examples of classification where there can be multiple classes in one image. As AI has quite a steep learning curve, I would like to know whether my problem is suited for ML/DL before I spend more of my nights reading about something that might not work. I've read numerous websites and a few short books but can't find an answer to my questions.

Can ML/DL solve my multi-class single-image classification problem and which strategy should I read into?

",44299,,4709,,2/4/2021 21:06,2/4/2021 21:06,Can ML/DL solve my classification problem?,,1,2,,,,CC BY-SA 4.0 26189,2,,26188,2/4/2021 9:47,,6,,"

A simple sanity-check on whether an image classifier can perform a task in theory is:

Can a human expert, using the same image plus a list of categories that they are familiar with, perform the same task?

It is important you only consider the contents of the image (or in general the data you are prepared to supply to the classifier) and the expert's general knowledge. The expert is not allowed to collect more data for instance, or interact with the sample other than maybe take a few measurements on the pixels.

This sanity check doesn't tell you how hard the problem is. It also rules out some problems that are very hard or impossible for humans but actually quite easy for computers. However, it is a good start, because nowadays single-purpose computer vision classifiers often rate similarly to or better than humans performing the same task. You are effectively checking "is the data I need for the inference actually in the image?"

Multi-class classifiers are possible in several ways. One way would be to have separate heads on the neural network to classify each layer, plus maybe a "layer present" binary flag to allow for varying numbers of layers. This is similar to an architecture called YOLO, which classifies 0 or 1 objects and their locations over multiple grid squares within an image. Your architecture would need to be different from YOLO, but you could use a lot of the ideas from it, such as having an output with multiple multi-class classifiers, one for each soil layer.
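
As a rough sketch of that multi-head idea (all layer sizes, the tiny backbone and the constants here are invented purely for illustration, not a recommended architecture):

from tensorflow import keras
from tensorflow.keras import layers

NUM_SOIL_TYPES = 10   # hypothetical number of soil classes
MAX_LAYERS = 4        # at most 4 soil layers per image

inputs = keras.Input(shape=(224, 224, 3))
x = layers.Conv2D(32, 3, activation="relu")(inputs)
x = layers.GlobalAveragePooling2D()(x)

heads = []
for i in range(MAX_LAYERS):
    # soil type of layer i (softmax over soil classes)
    heads.append(layers.Dense(NUM_SOIL_TYPES, activation="softmax", name=f"type_{i}")(x))
    # whether layer i is present at all (binary flag)
    heads.append(layers.Dense(1, activation="sigmoid", name=f"present_{i}")(x))

model = keras.Model(inputs, heads)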

I have 100s of images of compacted soil samples

One issue you will face is that you would need a very large number of labelled images in order to train a neural network with cutting-edge performance for this task from scratch. So you will want to look into transfer learning, which involves taking an existing image classifier trained on e.g. ImageNet and adapting it to your problem before training with your smaller dataset.

The small amount of sample image data you have will be a major limiting factor in your case. Sadly no-one can tell you before you attempt the project whether you have enough for a deep learning approach. That is probably where your largest risk of failure is.

",1847,,1847,,2/4/2021 9:52,2/4/2021 9:52,,,,1,,,,CC BY-SA 4.0 26197,1,,,2/4/2021 15:54,,1,43,"

In Decision Support System (DSS), we rank items based on predetermined weighted criteria. For example, we want to rank prospective programmers based on their working experience, required salary, set of skills, age, etc. We rank using weights for each criterion that we have previously defined. The simplest method is using Simple Additive Weighting (SAW).
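
To make SAW concrete, here is a minimal sketch (the criteria, weights and numbers are invented for illustration):

import numpy as np

# rows = candidate programmers, columns = criteria (experience, skills, interview score),
# all treated as "benefit" criteria (higher is better) to keep the example simple
scores = np.array([[5.0, 3.0, 8.0],
                   [7.0, 6.0, 4.0],
                   [6.0, 9.0, 5.0]])
weights = np.array([0.5, 0.2, 0.3])        # predefined criterion weights

normalised = scores / scores.max(axis=0)   # normalise each criterion to [0, 1]
saw = normalised @ weights                 # weighted sum per candidate
print(np.argsort(-saw))                    # ranking, best candidate first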

As far as I know, DSS is included in knowledge-based AI (it's a mandatory subject in AI specialization in most universities in my country).

My question:

With the development of AI/ML/DL today, is there another modern approach that can be used to solve similar problems?

At first, I thought it was similar to a content-based recommender system, but it looks different, as we don't have a "user" in DSS.

",16565,,16565,,2/5/2021 23:58,2/6/2021 0:15,Recent methods for Decision Support System (DSS),,1,0,,,,CC BY-SA 4.0 26199,1,26200,,2/4/2021 16:26,,6,884,"

Why does a vanilla feedforward neural network only accept a fixed input size, while RNNs are capable of taking a series of inputs with no predetermined limit on the size? Can anyone elaborate on this with an example?

",44386,,2444,,2/4/2021 22:38,4/7/2021 9:44,"Why do feedforward neural networks require the inputs to be of a fixed size, while RNNs can process variable-size inputs?",,1,0,,,,CC BY-SA 4.0 26200,2,,26199,2/4/2021 16:52,,7,,"

You are talking about two different types of 'size'. The size of the input for a FFNN and a RNN must always remain fixed for the same network architecture, i.e. they take in a vector $x \in \mathbb{R}^d$ and could not take as input for instance a vector $y \in \mathbb{R}^b$ where $b \neq d$. The size you refer to in the context of the RNN is the length of the input sequence.

What you are getting confused by is that RNNs can make predictions relating to sequences: imagine that, rather than one $x$, we have a sequence of related (i.e. not i.i.d.) data points $\{x_i\}_{i=1}^n$, such as time-series data. Assuming we are given some initial $h_0$ (a hidden state), an RNN will take as input $x_1$ and $h_0$ and output a prediction $y_1$ and a new hidden state $h_1$. In general an RNN will take as input $x_n$ and $h_{n-1}$ and output $y_n$ and $h_n$, where the hidden state is passed as input to the RNN at the next time step. However, the dimensionality of all the $x$'s and $h$'s will be the same, i.e. $x_i \in \mathbb{R}^d$ and $h_i \in \mathbb{R}^c$ for all $i$, where $d$ does not necessarily have to be equal to $c$ (and in my experience rarely is).
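
A minimal sketch of this point (PyTorch, with invented sizes): the cell's input and hidden dimensionalities $d$ and $c$ are fixed, yet the same cell can be unrolled over sequences of any length:

import torch
import torch.nn as nn

d, c = 4, 8                              # fixed input and hidden dimensionality
cell = nn.RNNCell(input_size=d, hidden_size=c)

for seq_len in (3, 10, 50):              # sequences of different lengths
    x = torch.randn(seq_len, 1, d)       # (time, batch, features)
    h = torch.zeros(1, c)
    for t in range(seq_len):
        h = cell(x[t], h)                # every x[t] must still be d-dimensional
    print(seq_len, h.shape)              # the hidden state is always of shape (1, c)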

Note that RNNs can also perform sequence-to-sequence prediction (such as language translation), where the predicted sequence can be a different length from the input sequence, as is the case in translation (the input sentence is not necessarily the same length in the translated language). They do this by having an encoder and a decoder, which are two separate RNNs. The encoder is fed the input sequence, and we keep all the hidden states output by the encoder, $\{e_i\}_{i=1}^n$. The decoder is then given a token that represents the start of a sequence, together with the last hidden state of the encoder $e_n$, as input, from which it predicts what the first word should be (I believe, but am not 100% certain, that the decoder outputs a probability distribution over the dictionary of words it can predict from); the chosen word is then passed as input to the decoder at the next step, along with the decoder's hidden state from the previous step, and another word is predicted. This continues until the system predicts an End-of-Sequence token (or until it is forced to stop by some time limit). More details can be found in this paper, but I believe that in NLP it is much more common to use attention models now rather than the methods introduced in this paper.

Now, with this data $\{x_i\}_{i=1}^n$ you could in theory pass each point individually to a FFNN, but a) it would not capture the sequential nature of the data, as the assumption is that each data point is independent of the others; you can see this from the architecture of a FFNN: they are directed acyclic graphs, and the acyclicity is what causes the issue, as there is no recursion, which prevents any sequential information from being passed from one time step to another; and b) training using SGD would likely cause issues, as we have violated the i.i.d. assumption needed for SGD to converge to local optima. More info on the i.i.d. requirement can be found here.

",36821,,36821,,4/7/2021 9:44,4/7/2021 9:44,,,,2,,,,CC BY-SA 4.0 26201,2,,8791,2/4/2021 17:43,,1,,"

Machine learning techniques, such as neural networks, usually rely on a lot of statistical approaches: a book like this one (Understanding Machine Learning: From Theory to Algorithms, ISBN 978-1-107-05713-5) is full of mathematical equations.

Symbolic artificial intelligence, e.g. classical expert system approaches (with some knowledge base), is more related to logic: a book like that one (Artificial Beings: the Conscience of a Conscious Machine, ISBN 978-1848211018) contains mostly simple equations rather than complex ones.

An AI software can combine both approaches (in particular with meta-programming: programs that generate other programs; or even meta-rules): the RefPerSys project tries to do so.

Artificial neural networks do somehow "work", but it is hard to understand why.

Knowledge-based systems are more explainable artificial intelligence.

Classical AI systems could generate code (in C, C++, Common Lisp, machine code) using metaprogramming techniques, and could be mixed with machine learning or deep learning approaches, and/or use existing machine learning libraries.

Notice that the difference between statistical AI and classical AI is not a matter of programming languages or of operating systems. For example, garbage collection may be (and perhaps is not) relevant to both approaches.

",3335,,2444,,12/6/2021 8:32,12/6/2021 8:32,,,,0,,,,CC BY-SA 4.0 26202,1,26220,,2/4/2021 17:51,,2,860,"

I recently started looking into implementations of the DQN algorithm (e.g. TensorFlow) in some more detail. All the implementations that I found use a network that gives an output for each possible action (e.g. if you have three possible actions you will have three output units in your network). This makes a lot of sense from a computational standpoint and seems to work fine if you are dealing with categorical action spaces (e.g "left" or "right").

However, I am currently working with an action space that I discretized and the actions have an ordinal meaning (e.g. you can drive left or right in 5-degree increments). I assume that the action-value function has some monotonicity in the action component (think driving 45 degrees to the left instead of 40 will have a similar value).

Am I losing information on the similarity of actions, if I use a network that has an output unit for each possible action?

Are there implementations of the DQN available in which actions are used as network inputs?

",44389,,2444,,2/5/2021 14:38,2/5/2021 14:38,What should the input and output of the Q-network be in the case of an ordinal action space?,,1,1,,,,CC BY-SA 4.0 26203,1,,,2/4/2021 18:13,,1,42,"

I'm currently trying to build a semantic scraper that can extract product information from different company websites of suppliers in the packaging industry (with as little manual customization per supplier/website as possible).

The current approach that I'm thinking of is the following:

  1. Get all the text data via scrapy (so basically a HTML-tag search). This data would hopefully be already semi-structured with for example: name, description, product image, etc.
  2. Fine-tune a pre-trained NLP model (such as BERT) on a domain specific dataset for packaging to extract more information about the product. For example: weight and size of the product

What do you think about the approach? What would you do differently?

One challenge I already encountered is the following:

  • Not all supplier websites are as well structured as, for example, e-commerce sites, so small customisations of the XPath are needed for each website. How can you scale this?

Also does anyone know an open-source project as a good starting point for this?

",44379,,,,,2/4/2021 18:13,How to scrape product data on supplier websites?,,0,0,,,,CC BY-SA 4.0 26204,2,,26175,2/4/2021 18:24,,2,,"

Unfortunately, the answer here is that "it depends". People have taken different approaches to this problem and I'll describe a few here. None of which however is the "right" answer.

Labeling

When generating benchmark datasets, we actually do have this problem. To be honest, most of the time the labeling is done to the best ability of the human. Sometimes ambiguous or difficult cases are separated out and cleaned, but, usually, labelers are given a set of concrete guidelines to say whether or not something is a cat. In the case where a human is unsure, usually, that data is thrown out or moved over to the "difficult" pile. Unfortunately, this difficult pile isn't publicly available for the most part in many public datasets. However, if you look at most public datasets, even with significant cleaning these cases exist.

Bayesian Deep Learning

One common theme that I've seen is that people add proper probabilistic uncertainties into their models. This is different than the output of a softmax at the end of an object detection network. The output of a softmax is just some number $\in [0, 1]$ that represents the regressed output of a classification within the model. For example, in SSD the softmax is just the classification of a specific anchor box. There is no real "certainty" information associated with it. In most standalone models (and without some pretty hard assumptions) it doesn't have any rigorous probabilistic meaning.

So how would we add a probability? How can we say that "there is an 80% chance that this image is a cat"? The paper "What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?" does a pretty good job of looking at this problem.

Basically, what they do is build Bayesian models that explicitly output two types of uncertainties.

Aleatoric uncertainty captures noise inherent in the observations. On the other hand, epistemic uncertainty accounts for uncertainty in the model – uncertainty which can be explained away given enough data.

You can go through the paper to get a better understanding of what's going on, but, basically, they fit models to regress both the uncertainty associated with the model, and the uncertainty associated with the data.

Zero-Shot Learning

What about the case where you have a very well defined object, but you've never seen it before? Let's say you've seen a bunch of horses but you've never seen a zebra. As a human, you would look at it for a while and basically just assume that it's a horse with black and white stripes. There is an entire field of machine learning dedicated to this topic. I'm not an expert in it personally, but there are plenty of resources online if you're interested.

A Practical Note

In industry, we usually try to scope the problem as best we can, so that we don't have to deal with this as much. Now, there are times when this is clearly inevitable. What I've seen is that, if an object isn't clearly detectable, the algorithm might fall back to just saying that there's "something" there. Consider the case of self-driving cars. It's good to detect if there are pedestrians in the road, but if you don't know whether something is a pedestrian, it's still useful to know that there's something in the road. For this, you can fall back to unsupervised methods to help distinguish objects. From a labeling perspective, you could imagine an ontology of objects for this purpose. At the root node of this ontology would be just "something in the road", which branches off to "car", "pedestrian" or "bike", for example. If a labeler is not sure whether something is a pedestrian, but it's definitely something in the road that shouldn't be hit, then it would be labeled as "something in the road". Again though, this is highly dependent upon the application.

",17408,,17408,,2/5/2021 23:11,2/5/2021 23:11,,,,6,,,,CC BY-SA 4.0 26207,2,,23773,2/4/2021 20:00,,1,,"

What you define as regret is the case of stochastic MABs, i.e. MABs with a fixed distribution. First of all, the idea of regret in an online setting is the loss incurred compared to the best agent (NOTE: I have used the term "best agent" as agents can have differing strategies, resulting in different best agents; in general we deal with a static agent, i.e. one whose policy/strategy is fixed over the entire horizon).

When we are talking about MABs, we always talk about what happens in 'expectation' rather than what 'actually' happens. This is because we are dealing with incomplete information, i.e. at each time step we only observe the loss of the arm we pulled, not the losses of all arms, and thus the algorithms designed to handle such problems are probabilistic in nature.

Compared to this, there are things like Online Convex Optimization where complete information about the loss function is available (i.e we are given how the loss was calculated) and we actually use the following regret formulation.

$$\sum_{t=1}^T(f_t(w_t) - f_t(u))$$ where $u$ is the minimizer of $\sum_{t=1}^Tf_t(w)$ and $f_t$ are a sequence of loss functions which are fully revealed to a learner.

Now, compared to this, in MABs you don't get the loss function revealed to you. You only get to know the reward of the arm you pulled (you don't get to know what the best arm was). Hence, you deal in probabilities, i.e. you want to maintain a probability distribution over the arms rather than always pulling a fixed arm (NOTE: the losses may be stochastic, adversarial, etc.). This will ensure that the arm which produces the maximum reward gets the maximum probability (if the algorithm works, or in technical terms is 'consistent').

Herein comes the principle of importance sampling: to have a good estimate of the loss incurred, in expectation, without knowing the actual loss vector. In general, $f_t$ is assumed to be a linear function (as it can be shown that linear functions always give the worst-case regret), and hence parametrized by $z_t$ (a vector).

Now consider defining $$\tilde{z} = \left[0,0,\dots,\frac{z(I_t)}{p(I_t)},0,0,\dots,0\right],$$ where $I_t$ is the arm pulled at time $t$ and $p$ is the probability distribution, or strategy, used to pull your arms. You can check that $\mathbb E[\tilde{z}] = z$, i.e. the actual loss vector parametrization in the first place! This $z$ in MABs is nothing but the vector of rewards obtained by pulling each arm, i.e. a $K$-dimensional vector of rewards, hence you want to pull an arm with maximum reward. Thus we see that, via importance sampling, we were able to recover $z$ in 'expectation'.

Thus now regret can be defined as (due to the involvement of probabilistic strategies):

$$R_T=\mathbb E\left[\sum_{t=1}^T\langle z_t,w_t\rangle - \min_u \sum_{t=1}^T\langle z_t,u\rangle\right],$$ where $w_t$ is nothing but $[0,0,0,\dots,1,0,\dots,0]$, i.e. the arm you played after sampling from the probability distribution you use as a strategy. Actually, one uses importance sampling and a pretty involved derivation to get a bound on the aforementioned expectation in the famous EXP3 algorithm. Thus, the bottom line is: due to incomplete information, we use a probability distribution to pull the arms, and then, using an update rule based on $\tilde{z}$, we can derive bounds for the aforementioned expression.

Now that we have understood the motivation, in Stochastic MAB our goal is to maximize rewards (we have $K$ arms, also I have used standard notations, so it might differ from your notation) i.e

$$\mathbb E\left[\sum_{t=1}^T X_{I_t}\right],$$ i.e. the total reward of the arms played over the horizon, which can be written as $$\mathbb E\left[\sum_{i=1}^K \mu_i N_i(T)\right]$$ (NOTE: the earlier expectation was with respect to both your probabilistic strategy of arm plays and the probabilistic rewards, as we are dealing with stochastic MABs; thus, taking the expectation w.r.t. $X_{I_t}$ gives $\mu_{I_t}$, which is written as $\mu_i$ multiplied by the number of times arm $i$ is played, $N_i(T)$, which is itself a random variable).

Thus this can be further simplified to:

$$\sum_{i=1}^K \mu_i\,\mathbb E[N_i(T)].$$

Now, if the highest mean is $\mu^*$, it is clear that the above expression is maximized when $\mathbb E[N_{i^*}(T)] = T$, where $i^*$ is the arm corresponding to $\mu^*$, and thus finally we get the regret as $$R_T = \sum_{i=1}^K (\mu^* - \mu_i)\,\mathbb E[N_i(T)].$$

The bottom line is that, due to incomplete information, we use probabilistic strategies, resulting in an expectation in the regret; in stochastic MABs the rewards are also probabilistic, but in the regret formulation the expectation w.r.t. the rewards can be evaluated to $\mu_i$ (if the distribution is stationary).

A useful reference may be found here (the first part of the video).

",,user9947,,user9947,2/10/2021 7:39,2/10/2021 7:39,,,,0,,,,CC BY-SA 4.0 26208,2,,25963,2/4/2021 20:39,,1,,"

One LSTM layer should be enough unless you have lots of data. The same goes for the number of nodes in the layer. Start small, say 5 to 10 nodes, and increase it until the performance is reasonable.

Once you have a model working, you can apply regularization if you think it will improve performance by reducing overfitting of the training data. You can check this by looking at the learning curves or by comparing the error on the validation and test sets.

In my experiments I've used the L1 and L2 regularizers along with dropout. These can all be mixed together; in fact, using both L1 and L2 at the same time is called ElasticNet.

I tend to apply the regularizers via the kernel_regularizer argument, because this affects the weights for the inputs: basically feature selection.

The values for L1 and L2 can start at the (TensorFlow) default of 0.01; change them as you see fit, or look at what other research papers have done.

Dropout can start at 0.1; increase it until there is no further performance gain. It is basically a fraction, so 0.1 would drop about 10% of your nodes.
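
Putting that together, a minimal Keras sketch (the layer sizes, window size and rates are just the starting values mentioned above, not recommendations):

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.LSTM(
        10,
        input_shape=(30, 5),  # (window size, number of features), placeholder values
        kernel_regularizer=regularizers.l1_l2(l1=0.01, l2=0.01),  # ElasticNet-style penalty on input weights
    ),
    layers.Dropout(0.1),      # start small and increase while it helps
    layers.Dense(1),
])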

Finding the best regularizer is the same as any other hyperparameter optimization which is mostly trial and error.

",32265,,,,,2/4/2021 20:39,,,,0,,,,CC BY-SA 4.0 26209,1,,,2/4/2021 20:48,,2,82,"

I have implemented the DQN algorithm and wonder why, during testing, the best performance is achieved by a policy from about episode 300, when the mean Q-values only converge at about episode 800.

  • Mean Q-values are calculated on a fixed set of states by taking the mean of the max Q-values for each state.
  • By convergence I mean that the plot of mean Q-values converges to some level (those values do not increase to infinity).

It can be seen here (page 7) that the mean Q-values converge while the average-reward plot is quite noisy. I get similar results, and in tests the best policy comes from where the peaks are during training (in the average-reward plot). I don't understand why I don't get better average scores over time (and better policies) as the Q-values converge.

",,user43110,,user43110,2/5/2021 17:04,2/6/2021 12:25,Why do I get the best policy before Q values converge using DQN?,,1,0,0,,,CC BY-SA 4.0 26215,2,,8487,2/5/2021 3:18,,1,,"

I cannot answer your question but I am stuck in a similar rabbit hole so hopefully these references can help you.

The loss function you are describing would be the 0-1 loss: the loss is 0 if our output matches the target and 1 if it does not. This function is not smooth and not convex. Thus, we often replace it with a surrogate loss function, such as the log-likelihood.

You can read more about surrogate loss function on pg 269 of Deep Learning by Ian Goodfellow available here: https://www.deeplearningbook.org/

or here : https://en.wikipedia.org/wiki/Loss_functions_for_classification

I did not have time to read the article, but the reason that you are seeing a probability is that they are using a Bayesian framework for neural networks. This was first described in A Practical Bayesian Framework for Backpropagation Networks by MacKay (1992).

It is explained very well in a video series by Hinton on youtube here: https://www.youtube.com/watch?v=YcwZFNd3UvI

Hopefully these references help you; I wish I could explain in greater detail.

",44245,,,,,2/5/2021 3:18,,,,0,,,,CC BY-SA 4.0 26216,1,,,2/5/2021 3:33,,3,637,"

Assuming the input photo is focused on a person's face, if the person is wearing a surgical mask, most face recognition software fail to identify the subject's face.

Most facial landmark models are trained to identify at least the eyes and the tip of the nose (for example, dlib's 5 point landmark).

Is it possible to construct a model that is trained to identify a face based on only the eyes?

Edit: Sorry for my broken English, but by "eyes" I mean the periocular area. I am terribly sorry, because English isn't my first language.

",44400,,44400,,2/6/2021 10:32,2/6/2021 10:32,Is it possible to do face recognition with just the eyes?,,2,3,,,,CC BY-SA 4.0 26217,2,,26216,2/5/2021 6:43,,3,,"

Yes, it must be possible as retina scanners have been used as a method of personal identification for some time. The difference is, if you have a retinal scanner you are probably controlling the focal distance of the picture you are taking and using suitably high resolution. Your mileage may vary as these things decrease in quality.

",44402,,,,,2/5/2021 6:43,,,,0,,,,CC BY-SA 4.0 26218,1,,,2/5/2021 7:05,,0,45,"

Could you combine word embeddings with the median per dimension to get a document embedding? In my case I have a huge number of words building up one document, which in turn should describe a topic. I feel like using the median is the right thing to do, as I get the most common parameter value per dimension. However, I cannot find anyone who has tried it before. This is why I'm wondering: is there anything speaking against it?
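
For what it's worth, the operation you describe is just an element-wise median over the word vectors, e.g. (shapes are purely illustrative):

import numpy as np

word_vectors = np.random.randn(10000, 300)              # one 300-d embedding per word in the document
doc_embedding_mean = word_vectors.mean(axis=0)          # the usual averaging baseline
doc_embedding_median = np.median(word_vectors, axis=0)  # the proposal: per-dimension median
print(doc_embedding_median.shape)                       # (300,)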

",44404,,44404,,2/5/2021 7:22,2/5/2021 7:22,Is there a reason why no one combines word embeddings with the median?,,0,2,,,,CC BY-SA 4.0 26220,2,,26202,2/5/2021 8:48,,1,,"

Yes, it is possible to use the action as an input to the neural network in DQN. For discrete actions represented as one-hot encoded features, the difference is minor:

  • If all actions are in the output, your neural network function is $f(s): \mathcal{S} \rightarrow \mathbb{R}^{|\mathcal{A}|} = [\hat{q}(s,a_1), \hat{q}(s,a_2), \hat{q}(s,a_3) ...]$, and you take the maximum value from the output vector as the greedy action.

  • If the action is provided as an input argument, your neural network function is $f(s,a): \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R} = \hat{q}(s,a)$, and to find the maximum value you construct and run a mini-batch over all possible values of $a$ for the given state.

Also see the answer to this question: Why does Deep Q Network outputs multiple Q values?

In your case, you would like to take advantage of similar values of $a$ because you expect that to work well with the approximation. As you correctly suggested, this will only work with the second approach using action as an input. So, use the steering angle, normalised into a suitable range for input to a neural network, as an input. Every time that you need to find $\text{max}_a Q(s,a)$ for the Q-learning algorithm, you must construct a mini-batch of the current state concatenated with each of the discrete steering angles that you want to consider as actions in the DQN, and run the current (or target) neural network forward.
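
A minimal sketch of that second option (PyTorch; the network sizes and the action discretisation are placeholders): the steering angle is appended to the state, and the greedy action is found by running a mini-batch over all candidate angles:

import torch
import torch.nn as nn

state_dim, n_angles = 8, 19                        # e.g. -45..+45 degrees in 5-degree steps
angles = torch.linspace(-1.0, 1.0, n_angles)       # steering angles normalised to [-1, 1]

q_net = nn.Sequential(nn.Linear(state_dim + 1, 64), nn.ReLU(), nn.Linear(64, 1))

def greedy_action(state):                                     # state: tensor of shape (state_dim,)
    states = state.unsqueeze(0).expand(n_angles, -1)          # repeat the state for every candidate action
    batch = torch.cat([states, angles.unsqueeze(1)], dim=1)   # concatenate state and action
    q_values = q_net(batch).squeeze(1)                        # one Q(s, a) per candidate angle
    return angles[q_values.argmax()]

print(greedy_action(torch.randn(state_dim)))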

If you want to go further and use a continuous action space, you will need to change which reinforcement learning method you are using. The various policy gradient and actor-critic approaches, such as REINFORCE, A3C, DDPG etc can cope with continuous actions, because they drop the need to find $\text{max}_a Q(s,a)$, which becomes impractical for very large action spaces.

",1847,,1847,,2/5/2021 8:55,2/5/2021 8:55,,,,2,,,,CC BY-SA 4.0 26221,1,26224,,2/5/2021 11:35,,2,411,"

Consider linear regression. The mean squared error (MSE) is 120.5 for the training dataset. We've reached the minimum for the training data.

Is it possible that by applying Lasso (L1 regularization) we would get a lower MSE for the training data? Would it get lower for the test data? Would this also hold for ridge regression (L2 regularization)?

",44412,,2444,,2/6/2021 2:00,2/6/2021 2:00,Would either $L_1$ or $L_2$ regularisation lower the MSE on the training and test data?,,1,0,0,,,CC BY-SA 4.0 26222,1,,,2/5/2021 11:47,,4,80,"

I found myself scratching my head when I read the following phrase in the paper Visualizing the Loss Landscape of Neural Nets:

To remove this scaling effect, we plot loss functions using filter-wise normalized directions. To obtain such directions for a network with parameters $\theta$, we begin by producing a random Gaussian direction vector $d$ with dimensions compatible with $\theta$. Then, we normalize each filter in $d$ to have the same norm of the corresponding filter in $\theta$. In other words, we make the replacement $d_{i,j} \leftarrow \frac{d_{i,j}}{\| d_{i,j}\|} \| \theta_{i,j}\|$

I'm completely unclear what the authors are referring to when they refer to the filters of the vector $d$ in weight space. As far as I can tell, the vector $d$ is a standard vector in weight space ($W$) with a number of components equal to the number of changeable weights in the network. In my opinion, it could be said that each layer in the network can be visualized as a vector in weight space ($\theta_{i}$) with:

$$\theta = \sum_{i}\theta_{i}$$

and then maybe these vectors $\theta_{i}$ are called filters? But how this would have anything to do with the random vector $d$, generated in this space, remains a complete mystery to me.

",44411,,2444,,2/6/2021 14:29,2/6/2021 14:29,Visualizing the Loss Landscape of Neural Nets: Meaning of the word 'filter'?,,0,0,,,,CC BY-SA 4.0 26223,2,,16233,2/5/2021 13:01,,0,,"

If someone is looking for a dataset for maritime SAR (Search and Rescue) purposes in the future: we have created the first dataset of this type that is publicly free to use for academic research: http://afo-dataset.pl/en/download/

",30992,,,,,2/5/2021 13:01,,,,0,,,,CC BY-SA 4.0 26224,2,,26221,2/5/2021 13:02,,1,,"

The answer is largely the same whether we consider $\ell_1$ or $\ell_2$ regularisation, so I will just speak generally about regularisation.

Mean square error for training data

Given some training data $\{(x_i, y_i)\}_{i = 1}^n$, a linear regression line $Y = aX + b$ fit using the least squares method looks for coefficients that minimise the sum of squares, i.e. they are the minimisers given by

$$ \mathrm{arg\,min}_{a, b} \sum_{i = 1}^n \left(y_i - (ax_i + b)\right)^2.$$

This gives the same coefficients as minimising the mean square error

$$ \mathrm{MSE}\left((x_1, y_1), \dots, (x_n, y_n)\right) = \frac{1}{n} \sum_{i = 1}^n \left(y_i - (ax_i + b)\right)^2.$$

So, by definition, the coefficients $(a, b)$ minimise the MSE on the training data. Any regularisation can only increase (or, at best, leave unchanged) the MSE on the training data.

Generalisation performance

The main point of regularisation is to prevent overfitting on the data and improve the generalisation performance (i.e. on the test set).

With an appropriate parameter for regularisation, you may obtain a smaller MSE on the test set. This depends on your dataset and the parameters you choose: strong regularisation may lead to underfitting, whereas weak regularisation might not make much difference to the coefficients that you fit.
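
This is easy to check empirically, for example with scikit-learn (the data below is random and purely for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.5, size=100)

for model in (LinearRegression(), Ridge(alpha=10.0), Lasso(alpha=0.5)):
    model.fit(X, y)
    print(type(model).__name__, mean_squared_error(y, model.predict(X)))
# The unregularised fit always has the lowest *training* MSE.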

",44413,,,,,2/5/2021 13:02,,,,0,,,,CC BY-SA 4.0 26227,2,,26216,2/5/2021 16:41,,1,,"

The two main eye biometrics are iris recognition and retina recognition (aka retinal scan). These are not going to work from an ordinary photo of someone's face. I have used iris recognition at about ten feet away and this article claims it can be done at 40 feet!

Eye recognition, or identification of a person from an image of their eyes alone (i.e., without seeing their iris or retina), has such a high error rate that it is not done. You may find the following paper of interest:

Nawaz Ripon, K. S., Ershad Ali, L., Siddique, N., & Ma, J. (2019). Convolutional Neural Network based Eye Recognition from Distantly Acquired Face Images for Human Identification. 2019 International Joint Conference on Neural Networks (IJCNN), Neural Networks (IJCNN), 2019 International Joint Conference On, 1–8. doi:10.1109/IJCNN.2019.8852190

For more information on iris recognition see pp 10-11 of this tutorial, and pp 12-13 for retinal scan.

An excerpt from Retinal vs. Iris Recognition: Did You Know Your Eyes Can Get You Identified? by Danny Thakkar:

Retina recognition The posterior portion of human eye forms retina. It is made of a light sensitive tissue. When light passing through cornea and lens reaches retina, neural signals are generated and transferred to the brain via the optic nerve. Retina is a thin layer of tissue formed by neural cells. Capillaries responsible for blood supply of this layer forms a pattern that can be used for personal identification. This pattern of blood capillaries is believed to be unique in each individual due to huge possibility of variation how these capillaries run on the surface of retina. Since retina is located at the posterior portion inside the human eye, special equipment is required to scan this pattern. Retina recognition is one of the least deployed biometric methods because of high cost of the implementation and its highly invasive nature that may cause some user discomfort. Still, it is used is very high security applications like military and high level government access due to its accuracy and high level of security.

Retina recognition systems make use of low energy infra-red light to scan the retinal pattern. Blood vessels absorb infrared light while surrounding tissues reflect it. This reflection is detected by the retina recognition system and image of this pattern is captured. This image is further enhanced to make is usable for the recognition algorithm. Retina template is generated once the image is taken through recognition algorithm; this template is associate with a subject’s demographic data and stored. The process so far is called enrolment. The subject’s identity can be verified anytime by scanning a new retinal sample and matching it against the stored template.

Iris recognition Iris is the ring shaped colored portion in a human eye and is visible from outside with naked eye. It is made of muscle tissue that adjusts the size of pupil and controls how much light can enter the eye. Amount of melatonin pigment in iris is responsible for different colors that human eyes take. Folds in iris muscles throughout the ring create a pattern with great amount of details. Formation of this pattern is completely random and there is no rule how it will turn out in an individual’s eye. However, once this pattern is created during the foetal development, it stays the same throughout the life. An individual’s irises are unique and structurally distinct, even iris of same individual does not match. All these attributes make them good enough for personal recognition.

Details of iris can be captured with any high quality digital camera, however, modern recognition systems make use of near infrared (NIR: 700–900 nm) instead of visible light to capture details. Since iris recognition can be established with high quality camera and recognition software, it can be setup on any computing device; however, dedicated recognition systems are more common due to performance and security reasons. Iris recognition systems use a camera to capture details of the iris and this image is enhanced by the image enhancement algorithms. Once the image is usable enough, it is processed by the recognition algorithms, which extracts unique features to generate a biometric template. Associating identity data with this template establishes identity of the subject in question, which can be used for identity verification in future.

",5763,,5763,,2/5/2021 17:16,2/5/2021 17:16,,,,2,,,,CC BY-SA 4.0 26228,2,,26179,2/5/2021 16:49,,5,,"

Andrew Y. Ng (yes, that famous guy!) et al. proved, in the seminal paper Policy invariance under reward transformations: Theory and application to reward shaping (ICML, 1999), which was then part of his PhD thesis, that potential-based reward shaping (PBRS) is the way to shape the natural/correct sparse reward function (RF) without changing the optimal policy, i.e. if you apply PBRS to your sparse RF, the optimal policy associated with the shaped, denser RF is equal to the optimal policy associated with the original unshaped and sparse RF. This means that PBRS creates an equivalence class of RFs associated with the same optimal policy, i.e. there are multiple RFs associated with the same optimal policy. So, PBRS is the first technique that you can use to deal with sparse reward functions.

To give you more details, let $R\left(s, a, s^{\prime}\right)$ be your original (possibly sparse) RF for your MDP $M$, then

$$R\left(s, a, s^{\prime}\right)+F\left(s, a, s^{\prime}\right)\tag{1}\label{1}$$

is the shaped, denser RF for a new MDP $M'$.

In PBRS, we then define $F$ (the shaping function) as follows

$$ F\left(s, a, s^{\prime}\right)=\gamma \Phi\left(s^{\prime}\right)-\Phi(s), \tag{2}\label{2} $$ where $\Phi: S \mapsto \mathbb{R}$ is a real-valued function that indicates the desirability of being in a specific state. So, if we define $F$ as defined in \ref{2}, we call it a potential-based shaping function.
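In code, the shaping in equations \ref{1} and \ref{2} amounts to a small wrapper around the environment's reward (a minimal sketch; the potential function here is a made-up placeholder, and in practice $\Phi$ encodes your knowledge of how desirable each state is):

GAMMA = 0.99
GOAL_STATE = 10  # placeholder

def potential(state):
    # placeholder potential: e.g. negative distance to the goal state
    return -abs(state - GOAL_STATE)

def shaped_reward(reward, state, next_state):
    # R + F, with F = gamma * Phi(s') - Phi(s)
    return reward + GAMMA * potential(next_state) - potential(state)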

The intuition of $F$ in \ref{2} is that it avoids the agent to "go in circles" to get more and more reward. To be more precise, let's say that there's a state $s^* = s_t$ that is desirable but not the goal state. If you shaped the reward function by adding a positive reward (e.g. 5) to the agent whenever it got to that state $s^*$, it could just go back and forth to that state in order to get the reward without ever reaching the goal state (i.e. reward hacking). That's, of course, not desirable. So, if your current state is $s^*$ and your previous state was $s_{t-1}$, and whenever you get to $s_{t-1}$ you just shaped the reward function by adding a zero reward, to avoid going back to $s_{t-1}$ (i.e. to avoid that the next state $s_{t+1} = s_{t-1}$, and then go back to $s^*$ again, $s_{t+2} = s^*$, i.e. going in circles), if you use \ref{2}, you will add to your original sparse reward function the following

$$ F\left(s, a, s^{\prime}\right)=\gamma \cdot 0 - 5 = -5, \tag{3}\label{3} $$

In other words, going back to $s_{t+1}$ (to later try to go back to $s^*$) is punished because this could again lead you to go back to $s^*$. So, if you define $F$ as in equation \ref{2}, you avoid reward hacking, which can arise if you shape your RF in an ad-hoc manner (i.e. "as it seems good to you").

One of the first papers that reported this "going in circles" behaviour was Learning to Drive a Bicycle using Reinforcement Learning and Shaping

We agree with Mataric [Mataric, 1994] that these heterogeneous reinforcement functions have to be designed with great care. In our first experiments we rewarded the agent for driving towards the goal but did not punish it for driving away from it. Consequently the agent drove in circles with a radius of 20–50 meters around the starting point. Such behavior was actually rewarded by the reinforcement function

You probably also want to watch this video.

Another approach to solving the sparse rewards problem is to learn a reward function from an expert/optimal policy or from demonstrations (inverse reinforcement learning) or to completely avoid using the reward function and simply learn the policy directly from demonstrations (imitation learning).

Note that I'm not saying that any of these solutions fully solves all your problems. They have advantages (e.g. if done correctly, PBRS can speed the learning process) but also disadvantages (e.g., in the case of IRL, if your expert demonstrations are scarce, then your learned RF may also not be good), which I will not discuss further here (also because I don't currently have much practical experience with none of these techniques).

",2444,,2444,,2/5/2021 17:19,2/5/2021 17:19,,,,2,,,,CC BY-SA 4.0 26231,1,26248,,2/5/2021 18:08,,0,74,"

I'm relatively new to machine learning, and I don't know what error I should use for an RNN.

I want to use a simple Elman RNN to predict the number of Covid-19 cases there will be in a hospital over the next 15 days. I modeled this as a regression problem, treating the input like a bunch of dots in a graph and predicting the tendency the data is going to take (only showing whether there will be more cases or fewer).
By that bunch of dots I am in fact referring to this:

Then I would treat this problem as a regression.

I actually don't have anything programmed yet. First, I want to write it all down on paper and then get down to work. I am also considering refocusing the problem on predicting the actual plot of the time-series input, but right now I want to try the regression.

I've come to the conclusion that I can use these four different errors:

  • MSE
  • RMSE
  • Entropy
  • Cross-entropy

What are the different characteristics of these errors? Which to use? Where and when to use them?

",44235,,44235,,2/6/2021 18:09,2/6/2021 18:09,What error should I use for RNN?,,1,3,,,,CC BY-SA 4.0 26232,1,26249,,2/5/2021 18:24,,0,97,"

I am new to AI.

I have a series of numbers ranging from x to y and I have a lot of data to train with

What I am trying to do is the following: say the values range from 0 to 1, I train the model on data collected over time and predict what may happen next, i.e. train it with my data, then feed it the last few days and have it continue the pattern.

I have been thinking about using char-rnn, but from what I understand the data it exports is arbitrary and not a continuation of a series. I often see videos on YouTube like "AI continues this song", so I'm wondering what I can use and where I can get started to do this myself.

Thank you and have a nice day ☺

",44391,,,,,2/6/2021 18:27,Number Series Continuation?,,1,0,,2/7/2021 2:10,,CC BY-SA 4.0 26235,1,26246,,2/5/2021 18:51,,11,4964,"

I am currently trying to understand transformers.

To start, I read Attention Is All You Need and also this tutorial.

What makes me wonder is the word embedding used in the model. Is word2vec or GloVe being used? Are the word embeddings trained from scratch?

In the tutorial linked above, the transformer is implemented from scratch and nn.Embedding from pytorch is used for the embeddings. I looked up this function and didn't understand it well, but I tend to think that the embeddings are trained from scratch, right?

",43632,,,,,5/1/2021 12:43,What kind of word embedding is used in the original transformer?,,3,0,,,,CC BY-SA 4.0 26236,2,,26235,2/5/2021 20:25,,4,,"

No, neither Word2Vec nor GloVe is used as Transformers are a newer class of algorithms. Word2Vec and GloVe are based on static word embeddings while Transformers are based on dynamic word embeddings.

The embeddings are trained from scratch.

",5763,,5763,,2/5/2021 21:06,2/5/2021 21:06,,,,4,,,,CC BY-SA 4.0 26237,1,,,2/5/2021 23:09,,1,58,"

I am pondering the question in the title. As a human being, I could somehow sort a random sequence of numbers from 1 to $2^{2^{512}}$ in our universe given infinite time (but I am not sure). Can an ML model do that in our universe if it is provided with infinite time? There is no restriction on how the learning algorithm is supposed to learn how to sort. (Be careful: even $2^{512}$ is bigger than the number of atoms in the universe, therefore you will have limited memory.)

",19102,,19102,,2/6/2021 14:05,2/6/2021 14:05,Can an ML model sort a random sequence of numbers from 1 to $ 2^{2^{512}} $ in our universe in infinite time?,,0,4,,,,CC BY-SA 4.0 26238,1,26239,,2/5/2021 23:41,,1,531,"

If I were to have a dataset of 9 attributes of different types that describe current weather, such as temperature, humidity, etc., and want to classify the current weather by use of a k-NN algorithm, is this possible?

From what I understand, in k-NN two different attributes are plotted, and, wherever a new point is drawn, its nearest neighbours classify it.

Could I do the same thing, but with each data point placed based on its 9 attributes?

",44421,,2444,,2/6/2021 14:10,2/6/2021 14:10,Is it possible to use k-nearest neighbour for classification with more than two attributes?,,1,0,,,,CC BY-SA 4.0 26239,2,,26238,2/5/2021 23:55,,3,,"

The number of features does not matter for using the k-NN algorithm. You just have to decide on a distance measure to detect the neighbours. I share with you some links that you can check to see which kinds of distance measures you can use. Just choose a measure and apply it to your feature vectors.

https://www.kdnuggets.com/2020/11/most-popular-distance-metrics-knn.html

https://medium.com/@luigi.fiori.lf0303/distance-metrics-and-k-nearest-neighbor-knn-1b840969c0f4
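
For example, with scikit-learn the 9 weather attributes simply become a 9-dimensional feature vector (the data below is random, just to show the shapes):

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.random.rand(200, 9)                # 200 past observations, 9 weather attributes each
y = np.random.randint(0, 3, size=200)     # weather class labels, e.g. 0=sunny, 1=rainy, 2=snowy

knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
knn.fit(X, y)
print(knn.predict(np.random.rand(1, 9)))  # classify the current 9-attribute reading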

",19102,,,,,2/5/2021 23:55,,,,4,,,,CC BY-SA 4.0 26240,2,,26197,2/6/2021 0:07,,-1,,"

In my opinion, the current success of deep learning models makes them well suited for DSS. For example, if you have data, you can build a good deep reinforcement learning model to rank programmers. In general, AI models, specifically deep models, are not mature enough to be used without human supervision, but they are very suitable for building support systems.

",19102,,19102,,2/6/2021 0:15,2/6/2021 0:15,,,,1,,,,CC BY-SA 4.0 26242,2,,26209,2/6/2021 1:35,,1,,"

Even if the mean of the maximum Q-value increases from episode 300 onwards, it doesn't mean that the relative order of the Q-values of the actions that you can take in the states change, which means that the policy may not change, even though the value function changes, assuming you're acting greedily with respect to the value function.

More concretely, suppose that you can take one of two actions $\{ a_1, a_2\} = \mathcal{A}$ in each state $s \in \mathcal{S}$. Let's say that you pick $s_1, s_2$ to calculate the average of the maximum Q-value. Without loss of generality, suppose that the action associated with the highest Q-value in these states is $a_2$. If your policy is the greedy policy with respect to the state-action value function, then your policy will choose $a_2$ in $s_1$ and $s_2$. If all Q-values $\hat{q}(s_1, a_1)$, $\hat{q}(s_1, a_2)$, $\hat{q}(s_2, a_1)$ and $\hat{q}(s_2, a_2)$ increase (or even decrease) but their relative order remains the same (i.e. $\hat{q}(s_1, a_2) > \hat{q}(s_1, a_1)$ and $\hat{q}(s_2, a_2) > \hat{q}(s_2, a_1)$, from episode 300 onwards), the greedy policy also remains the same.

So, I think that what you observed is theoretically possible, although I cannot guarantee that you don't have another issue.

",2444,,2444,,2/6/2021 12:25,2/6/2021 12:25,,,,0,,,,CC BY-SA 4.0 26245,1,,,2/6/2021 12:55,,0,144,"

I have a problem where the goal is for the agent to draw a single line between two points on a $500 \times 500$ white image.

I have built my DQN. For now, the size of the network's output layer is $[1, 500 * 500]$. This way, the index of the Q-value chosen within a single step can be mapped to a single coordinate within the space of the image.
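
To make that mapping explicit, this is the kind of index-to-coordinate conversion such an output layer implies (a sketch, assuming row-major ordering):

import numpy as np

q_values = np.random.rand(500 * 500)  # stand-in for the network's output
index = int(q_values.argmax())
row, col = divmod(index, 500)         # row-major mapping from flat index to pixel
print(row, col)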

So, with that, I'm able to get a starting point for the line. However, what that doesn't give me is the location of the second point for completion of the line.

One thing I have tried is drawing a line for every two steps. However, this means that the state of the environment does not change for each step.

Should the goal be to change the environment/state for each step or does this not matter? Maybe I have not found the ideal way of modelling the state and action spaces for this problem.

",20271,,2444,,3/10/2021 13:32,3/10/2021 13:32,How should I model the state and action spaces for a problem where the goal is to draw a line between two points?,,0,2,,,,CC BY-SA 4.0 26246,2,,26235,2/6/2021 14:09,,10,,"

I have found a good answer in this blog post The Transformer: Attention Is All You Need:

we learn a “word embedding” which is a smaller real-valued vector representation of the word that carries some information about the word. We can do this using nn.Embedding in Pytorch, or, more generally speaking, by multiplying our one-hot vector with a learned weight matrix W.

There are two options for dealing with the Pytorch nn.Embedding weight matrix. One option is to initialize it with pre-trained embeddings and keep it fixed, in which case it’s really just a lookup table. Another option is to initialize it randomly, or with pre-trained embeddings, but keep it trainable. In that case the word representations will get refined and modified throughout training because the weight matrix will get refined and modified throughout training.

The Transformer uses a random initialization of the weight matrix and refines these weights during training – i.e. it learns its own word embeddings.

",43632,,2444,,2/6/2021 15:14,2/6/2021 15:14,,,,1,,,,CC BY-SA 4.0 26247,1,,,2/6/2021 15:16,,1,66,"

I'm trying to think of how I can embed a game's state into a unique key value. The game I'm specifically working with is Isolation: https://en.wikipedia.org/wiki/Isolation_(board_game). The game state has the coordinates of player 1's pawn, coordinates of player 2's pawn, coordinates of free spaces and coordinates of already used spaces. Is there a way to embed this into a unique key value? My plan is to generate a dict and use that for value iteration with RL to learn the optimal value function for every state.

",30885,,,,,2/6/2021 21:15,Embedding Isolation game states into key values for RL,,1,0,,,,CC BY-SA 4.0 26248,2,,26231,2/6/2021 16:21,,1,,"

Providing a thorough answer would fill several pages. To keep it very simple: try many different loss functions on your model. Your goal is to have the highest performance based on some desired prediction metric (e.g. RMSE, MAE, MAPE, etc.). You almost always have plenty of time to try many loss functions, so you don't need a full understanding (and few people have one) to start your project.

I recommend you read the following to learn more:

",5763,,5763,,2/6/2021 16:30,2/6/2021 16:30,,,,3,,,,CC BY-SA 4.0 26249,2,,26232,2/6/2021 18:27,,0,,"

I don't actually understand your question, but if your data is completely arbitrary, there is nothing to predict: it has no patterns to recognize or anything like that.

But if you are working with time-series data and it has some patterns, then you could start by trying to implement just the forward propagation of a simple RNN. I am really new to AI as well; the first RNN I coded was an Elman RNN, and its equations were very simple to implement. I recommend trying that, and then implementing backpropagation for that RNN.
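
For reference, a minimal numpy sketch of the Elman forward pass (sizes are arbitrary and biases are omitted for brevity):

import numpy as np

d_in, d_h, d_out = 3, 5, 1
W_xh = np.random.randn(d_h, d_in) * 0.1   # input-to-hidden weights
W_hh = np.random.randn(d_h, d_h) * 0.1    # hidden-to-hidden (recurrent) weights
W_hy = np.random.randn(d_out, d_h) * 0.1  # hidden-to-output weights

h = np.zeros(d_h)
for x in np.random.randn(20, d_in):       # a toy input sequence of length 20
    h = np.tanh(W_xh @ x + W_hh @ h)      # Elman hidden-state update
    y = W_hy @ h                          # output at this time step
print(y)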

Something that really helped me get started was searching for simple GitHub scripts for RNNs (around 100 lines of code), so that you can see their architecture.

About the example you give, the one about songs: they are actually really predictable, because you have the rhyme, rhythm and tempo, and they also tend to repeat the same sequence of notes every stanza.

Hope it helps :)

",44235,,,,,2/6/2021 18:27,,,,0,,,,CC BY-SA 4.0 26252,2,,26247,2/6/2021 21:04,,1,,"

I think that there are too many game states in that game for you to use value iteration. The upper bound for a simple concise representation would be $49^2 \times 2^{49}$. That is:

  • $49^2$ covering all possible locations of the two players.

  • $2^{49}$ covering whether each square exists or has been removed.

In this scheme, many combinations are not feasible, as pieces do have to be on existing squares and cannot share a space. However, this doesn't drop the number of allowed states by a significant amount (several or more orders of magnitude) that would make it worth a more complex representation, or put the game into reach of dynamic programming solutions.

Potentially, a version on a 4x5 grid would be small enough to fit in memory and be solvable with dynamic programming. That would have a few million valid states.

In terms of an id code, you could use a 64-bit unsigned integer, reserving 49 bits for the existence of the squares, and for simplicity giving 6 bits each to the locations. This would also be a valid state representation for actually playing the game efficiently, most programming languages support the bit manipulations that you would need in order to maintain the representation. However, it would need a separate expanded representation if you were to create neural network features for approximate value functions.
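
A minimal sketch of that packing in Python (bit layout as just described: the low 49 bits for square existence, then 6 bits per player position):

def encode(squares, p1, p2):
    # squares: 49-bit int, bit i set if square i still exists
    # p1, p2: square indices (0..48) of the two players
    return squares | (p1 << 49) | (p2 << 55)

def decode(state):
    squares = state & ((1 << 49) - 1)
    p1 = (state >> 49) & 0x3F
    p2 = (state >> 55) & 0x3F
    return squares, p1, p2

s = encode((1 << 49) - 1, 24, 0)   # full board, players on squares 24 and 0
print(decode(s))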

In terms of writing a solver for the 7x7 version, I would recommend combining a look-ahead planner such as negamax, with an approximate neural network-based reinforcement method, perhaps DQN for simplicity. The neural network would provide a backup "best guess" solution when you could not look far enough ahead in the early stages of the game.

On a quick search for the true number of states in this game, I found an introduction to using minimax on a smaller version that you may find useful.

",1847,,1847,,2/6/2021 21:15,2/6/2021 21:15,,,,7,,,,CC BY-SA 4.0 26255,1,,,2/7/2021 4:58,,1,79,"

What is the preferred order of data augmentation and normalization? Is it the former followed by the latter?

",44327,,44327,,2/7/2021 19:53,2/7/2021 19:53,Does the order of data augmentation and normalization matter?,,0,1,,,,CC BY-SA 4.0 26256,2,,17468,2/7/2021 8:58,,0,,"

In semantic segmentation, one solves the problem of deciding whether a particular pixel in the image belongs to a certain class. For example, in a photo taken from the street, one may be interested in whether a pixel belongs to the road, a traffic light, a sign, a person, or background of no interest.

Superficially, in this setting it doesn't matter whether there is a single instance of an object or multiple instances; what matters more is the overall abundance of a particular class in an image.

However, CNNs adapt to the particular patterns of the distribution from which the data comes. It can be the case that a network has captured some property, or picked up a pattern, that is relevant only for the single-instance case. Images with multiple instances of the class of interest are then like outliers in the data, and may be treated incorrectly because of this.

I am not sure whether it makes sense in your particular problem, but maybe it is worth performing mosaic augmentation (https://towardsdatascience.com/data-augmentation-in-yolov4-c16bd22b2617), so that some samples of the augmented data contain multiple instances.
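To illustrate the idea only (the actual YOLOv4 mosaic augmentation also rescales, crops randomly and remaps the annotations), a very rough sketch of stitching four samples into one multi-instance sample could look like this:

    import numpy as np

    def simple_mosaic(img_a, img_b, img_c, img_d):
        """Stitch four equally sized H x W x C images into one 2H x 2W mosaic.
        For segmentation, the same stitching must be applied to the label masks."""
        top = np.concatenate([img_a, img_b], axis=1)
        bottom = np.concatenate([img_c, img_d], axis=1)
        return np.concatenate([top, bottom], axis=0)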

",38846,,38846,,2/7/2021 14:11,2/7/2021 14:11,,,,3,,,,CC BY-SA 4.0 26257,1,,,2/7/2021 9:12,,0,162,"

I'm having trouble beating an AI in Isolation game: https://en.wikipedia.org/wiki/Isolation_(board_game) with 3 queens on a 7x7 board. I tried applying alpha beta iteration with a scoring function on the state. I tried 3 scoring functions:

  1. (0.75/# moves taken) * number of legal moves - (# moves taken/0.75) * number of opponent legal moves
  2. number of legal moves - 3 * number of opponent legal moves
  3. 3 * number of legal moves - number of opponent legal moves

The first is an annealed aggressive strategy so the agent gets more aggressive as the game goes longer. The 2nd is a pure aggression strat and the last is a pure defensive strat. None of them consistently beat standard alpha beta iteration with the state scoring function: number of legal moves - number of opponent legal moves. They all broke roughly even.

Any suggestions of scoring state functions or search algorithms are appreciated. The search has a limit of 6000 seconds per turn though.

",30885,,,,,2/7/2021 9:12,Beating iterative alpha beta search in Isolation Game,,0,3,,,,CC BY-SA 4.0 26258,1,,,2/7/2021 9:33,,1,159,"

I have a dataset of texts, where each text is identified by an ID number. I would like to make predictions by finding the best-matching ID number for upcoming new texts. I am not sure whether multi-class text classification is the right approach, since there is only one text for most ID numbers; in that case I wouldn't have any test set. Can up-sampling help? Or is there another approach than classification for such a problem?

The data set looks like this:

id1 'text1'
id2 'text2'
id3 'text3'
id3 'text4'
id3 'text5'
id4 'text6'
...
id200 'text170'

I would appreciate any guidance to find the best approach for this problem.

",44444,,,,,10/31/2022 15:00,Multi class text classification when having only one sample for classes,,1,3,,,,CC BY-SA 4.0 26259,1,,,2/7/2021 10:19,,1,177,"

Is there a convention on how the input data and the weights are multiplied? The input data can be anything, including the result from the previous layers.

There are two options:

Option 1:

$$\begin{bmatrix}i_1 & i_2\end{bmatrix} \times \begin{bmatrix} w_1 & w_2 & w_3\\w_4 & w_5 & w_6\end{bmatrix} = \begin{bmatrix}i_1*w_1 + i_2*w_4 & i_1*w_2+i_2*w_5 &i_1*w_3+i_2*w_6\end{bmatrix}$$

Option 2:

$$\begin{bmatrix} w_1 & w_4\\ w_2 & w_5\\ w_3 & w_6\end{bmatrix} \times \begin{bmatrix}i_1 \\ i_2\end{bmatrix} = \begin{bmatrix}i_1*w_1 + i_2*w_4 \\ i_1*w_2+i_2*w_5 \\ i_1*w_3+i_2*w_6\end{bmatrix}$$

",21157,,2444,,2/7/2021 12:24,2/7/2021 13:26,Is there a convention on the order of multiplication of the weights with the inputs in neural nets?,,1,0,,,,CC BY-SA 4.0 26260,1,,,2/7/2021 10:33,,1,445,"

I'm curious how you would apply Monte Carlo Tree Search to a game that has a random initial state. You generate a tree where the root node is the initial state, then you expand if the options from that state are not explored yet.

I'm also wondering how this works in 2 player games. After your opponent moves, does each state in the tree have a key to look up in a dictionary? Otherwise, the algorithm won't know what to do when there's a jump in a state between choosing your action on your turn and when your opponent moves, unless you also store your opponent's move in the tree.

",30885,,2444,,2/7/2021 12:28,2/7/2021 12:28,Does Monte Carlo Tree Search not work on games without the same initial state?,,1,0,,,,CC BY-SA 4.0 26261,2,,26259,2/7/2021 10:43,,2,,"

The conventions I have seen tend to post-multiply rather than pre-multiply, although there are examples in the literature which adopt the opposite convention.

Some examples include:

  1. In Deep Learning: An Introduction for Applied Mathematicians, a layer with input $x \in \mathbb R^n$ and output $f(x) \in \mathbb R^m$ is computed by $$ f(x) = \sigma(Wx + b)$$ for a matrix $W \in \mathbb R^{m \times n}$ of weights, a vector $b \in \mathbb R^m$ of biases and an activation function $\sigma \colon \mathbb R^m \to \mathbb R^m$.

  2. In Deep Learning by Goodfellow, Bengio and Courville, they compute a layer in chapter 6 (see p. 171) using

$$ f(x) = \max \{ 0, W^T x + c \},$$

where the $\max$ function is applied componentwise and gives the ReLU activation.

  3. In the paper Federated Learning with Matched Averaging, the authors describe a fully-connected layer using pre-multiplication,

$$f(x) = \sigma(xW),$$

where they omit the bias to simplify notation and as before apply $\sigma$ entrywise. In this case of course we need to view $x$ as a row vector and will obtain a row vector as the result.

Ultimately, as long as you are clear it doesn't matter a great deal: in the end, all that changes is the shape and values of the matrix.
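A quick NumPy check makes the equivalence concrete: the two conventions produce the same numbers, only the layout of the weight matrix changes.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=2)           # input with 2 features
    W = rng.normal(size=(3, 2))      # 3 output units, 2 input units

    wx = W @ x                       # weight matrix applied to a column-vector input
    xw = x @ W.T                     # row-vector input times the transposed weights

    assert np.allclose(wx, xw)       # identical results, transposed weight layout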

",44413,,2444,,2/7/2021 13:26,2/7/2021 13:26,,,,0,,,,CC BY-SA 4.0 26262,1,,,2/7/2021 11:03,,1,26,"

Consider a game like Pig (https://en.wikipedia.org/wiki/Pig_(dice_game)), but with a few additions: namely functions of both player's score and turn number that have unique impacts on scoring.
What machine learning model should I use to try and find the optimal number of dice rolls per turn (say the number of dice rolls is bounded between 1 and 10)?
I was reading this tutorial: https://towardsdatascience.com/playing-cards-with-reinforcement-learning-1-3-c2dbabcf1df0, and they suggested reinforcement learning with a Q-value function. I don't know how this would work though, because the turn number isn't bounded, but also needs to be a parameter of the Q-value function. Multiplying the ranges of all parameters suggests this Q-value function needs 2,000,000 states. Is this too many? I have no idea how to judge this.

Is there a better model I should use to try and solve this problem, which at its core takes the parameters (my_score, opponent_score, turn_number) and should return a number 0-10 representing how many dice to roll?

",44446,,,,,2/7/2021 11:03,What machine learning model should I use for a random dice-based game?,,0,1,,,,CC BY-SA 4.0 26263,2,,20983,2/7/2021 11:50,,0,,"

Intuitively:

  1. A larger Q means a larger probability that the node (s'|s,a) will be chosen, so when we select the most visited node, we are usually selecting a node with a good Q.
  2. A higher visit count means a more accurate estimate, and the chosen node has proved itself a good choice over more trials than the other nodes.
  3. Less computation in some cases (integer/long vs float/double).
",44445,,,,,2/7/2021 11:50,,,,0,,,,CC BY-SA 4.0 26264,2,,26260,2/7/2021 12:04,,2,,"

If the initial state is not always the same, but if your agent is allowed to observe what the initial state is before it has to start running the search algorithm, there's basically no problem; it has all the information it needs when it starts running the tree search. This is how we typically use MCTS (or any other tree searches): we first observe what the current state looks like, and then start running the tree search for this state.

If for whatever reason you already have to start running your tree search without being allowed to observe which initial state has been randomly drawn, you can easily just pretend that you actually do have a "dummy", deterministic initial state which you can observe right before the real initial state, and pretend that the random sampling of an initial state is actually a random event / random transition resulting from a "dummy action". Then you can handle this scenario in exactly the same way that you would any normal game that has non-deterministic transitions (such as games that involve dice): if you have access to explicit knowledge of which stochastic events are possible, and what their probabilities are, you can encode them as chance nodes. If you do not have such explicit knowledge available, you can use an "open-loop" MCTS. See also: https://ai.stackexchange.com/a/13919/1641


Presence of opponents / other players in the game is a different issue. MCTS and other game tree search algorithms were designed specifically for this, they have no problems handling that. In a tree search, you do not just have nodes for your own agent and the states in which it is allowed to act; you also have nodes for any opponents and any states in which they are allowed to act, and the tree search enables you to reason about what actions your opponents are likely to take.

",1641,,,,,2/7/2021 12:04,,,,3,,,,CC BY-SA 4.0 26272,2,,25983,2/8/2021 0:14,,0,,"

I figured this out by going to the author's publicly available github code. It turned out the authors were just generating the transition probability $p$ from $\mathcal{N}(\mu,\sigma^2)$ at the beginning of each episode for some reason. Answering it myself for the sake of not leaving this question unanswered.

",38234,,,,,2/8/2021 0:14,,,,2,,,,CC BY-SA 4.0 26273,1,,,2/8/2021 0:36,,0,52,"

I am new to AI and NN. I've started learning using Geron's book on Tensorflow.

My first project ("Smart Shelf") is to determine which items in a store have been purchased and need to be refilled. The store camera periodically takes pictures of the tops of items on the store shelves. To start, we have only 5 distinct products.

We have created ~250 handwritten images of product-labels that cover these 5 distinct products. So far, the training results are way below our expectation.

I am thinking to augment the training data and see whether it would make any difference.

I have thought about the following strategies:

  1. Train the model again using grayscale images. https://stackoverflow.com/questions/45320545/impact-of-converting-image-to-grayscale/45321001
  2. Invert images, translate them horizontally or vertically https://www.tensorflow.org/tutorials/images/data_augmentation, https://nanonets.com/blog/data-augmentation-how-to-use-deep-learning-when-you-have-limited-data-part-2/
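To make option 2 concrete, here is a small sketch of how I imagine wiring up such flips and translations with Keras' ImageDataGenerator (the parameter values below are arbitrary placeholders, not recommendations):

    import tensorflow as tf

    datagen = tf.keras.preprocessing.image.ImageDataGenerator(
        horizontal_flip=True,
        vertical_flip=True,
        width_shift_range=0.1,    # shift by up to 10% of the image width
        height_shift_range=0.1,   # shift by up to 10% of the image height
    )

    # x_train / y_train would be the ~250 label images and their class labels
    # model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=50)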

Which of the above will yield better results and why? I am curious. Thanks for any help. I feel that I know various data augmentation techniques, but not sure how and why to apply them.


It seems this is a popular question, as learned from Choosing Data Augmentation smartly for different application etc.

",44463,,44463,,2/9/2021 20:24,2/9/2021 20:24,Data Augmentation of store images using handwritten labels,,0,2,,,,CC BY-SA 4.0 26277,1,,,2/8/2021 9:16,,1,22,"

The problem

I have a multi-channel 1D signal that I want to auto-encode.
I am unable to reconstruct the input when the number of channels increases.


Code

I am using a convolutional encoder, and a convolutional decoder:

latent_dim: 512, frames_per_sample: 128

    self._encoder = nn.Sequential(
        nn.Conv1d(in_channels=self._n_in_features, out_channels=50, kernel_size=15, stride=1, padding=7),
        nn.LeakyReLU(inplace=True),
        nn.Conv1d(in_channels=50, out_channels=50, kernel_size=7, stride=1, padding=3),
        nn.LeakyReLU(inplace=True),
        nn.Conv1d(in_channels=50, out_channels=50, kernel_size=3, stride=1, padding=1),
        nn.LeakyReLU(inplace=True),
        # nn.Flatten(start_dim=1, end_dim=-1)
        nn.Conv1d(in_channels=50, out_channels=1, kernel_size=1, stride=1, padding=0),
        nn.Flatten(start_dim=1, end_dim=-1),
        nn.Linear(frames_per_sample, self._config.case.latent_dim)
    )

and

    start_channels = 256
    start_dim = frames_per_sample // (2 ** 4)

    start_volume = start_dim * start_channels
    self._decoder = nn.Sequential(
            nn.Linear(self._config.case.latent_dim, start_volume),
            nn.LeakyReLU(inplace=True),
            # b, latent
            nn.Unflatten(dim=1, unflattened_size=(start_channels, start_dim)),
            # b, start_channels, start_dim
            nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
            # b, start_channels, start_dim*2
            nn.Conv1d(in_channels=start_channels, out_channels=128, kernel_size=3, stride=1, padding=1),
            # b, 128, start_dim*2
            nn.LeakyReLU(inplace=True),
            # b, 128, start_dim*2
            nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
            # b, 128, start_dim*4
            nn.Conv1d(in_channels=128, out_channels=64, kernel_size=7, stride=1, padding=3),
            # b, 64, start_dim*4
            nn.LeakyReLU(inplace=True),
            # b,64, start_dim*4
            nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
            # b, 64, start_dim*8
            nn.Conv1d(in_channels=64, out_channels=32, kernel_size=11, stride=1, padding=5),
            # b, 32, start_dim*8
            nn.LeakyReLU(inplace=True),
            # b, 32, start_dim*8
            nn.Upsample(scale_factor=2, mode='linear', align_corners=False),
            # b, 32, start_dim*16
            nn.Conv1d(in_channels=32, out_channels=16, kernel_size=21, stride=1, padding=10),
            # b, 16, start_dim*16
            nn.LeakyReLU(inplace=True),
            # b, 16, start_dim*16
            nn.Conv1d(in_channels=16, out_channels=self._n_features, kernel_size=3, stride=1, padding=1),
    )

I am not putting the entire code/data here because this is a theoretical question, and I don't expect anyone to go and run this.


Results

The result (orange) has artifacts on the edges, relative to the input data (blue):

This is easy to see on training examples:

Worse - for unseen examples (validation), reconstruction misses on the bias


Observation

The above only starts to happen when adding more channels, which have different biases.

I am normalizing the entire dataset to sit between -1 and 1, but still each channel has its own typical boundary.

Here is a (nice) result, for a single channel:


What I think

My guess: multiple channels force the filters to share a single bias, which doesn't fit all of them.
The edge problems are due to the bias plus zero padding, and the validation error is due to a bias that doesn't agree with all channels.


Questions:

  1. Does my analysis make sense?
  2. What is a possible way to solve this?

My thoughts:

  1. A distinct bias per channel, at least on the last layer. How can this be done in PyTorch? (A rough sketch is given below.)
  2. Normalizing per sample (and not per channel) just before passing it to the model, then de-normalizing the reconstructed sample.

I don't know how to correctly implement either of those, nor if they make sense, or how to check.
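To make the first idea concrete, here is a rough sketch of the kind of per-channel offset layer I have in mind (the names are mine and I am not sure this is the right approach; note that nn.Conv1d already learns one bias per output channel by default):

    import torch
    import torch.nn as nn

    class PerChannelBias(nn.Module):
        """Adds a separately learnable offset to each channel of a (B, C, L) tensor."""
        def __init__(self, n_channels):
            super().__init__()
            self.bias = nn.Parameter(torch.zeros(n_channels))

        def forward(self, x):
            # broadcast the per-channel bias over the batch and time dimensions
            return x + self.bias.view(1, -1, 1)

    # e.g. appended after the decoder's final Conv1d:
    # nn.Sequential(..., nn.Conv1d(16, n_features, kernel_size=3, padding=1),
    #               PerChannelBias(n_features))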

Also posted here, but I think this also belongs on ai.stackexchange

",21645,,21645,,2/8/2021 12:36,2/8/2021 12:36,Dealing with bias in multi-channel auto encoders,,0,0,,,,CC BY-SA 4.0 26280,2,,26258,2/8/2021 12:33,,0,,"

Siamese networks may be useful in your case.

http://www.cs.cmu.edu/~rsalakhu/papers/oneshot1.pdf

https://en.wikipedia.org/wiki/Siamese_neural_network

https://link.springer.com/protocol/10.1007%2F978-1-0716-0826-5_3

",5852,,,,,2/8/2021 12:33,,,,1,,,,CC BY-SA 4.0 26281,1,,,2/8/2021 13:57,,2,133,"

Planning problems have been among the first problems studied at the dawn of AI (Shakey the robot). Graph search (e.g. A*) and planning (e.g. GraphPlan) algorithms can be very efficient at generating a plan. As for problem formulation, PDDL is preferred for planning problems. Although planning problems in most cases only have discrete states and actions, the PDDL+ extension covers continuous dimensions of the planning problem as well.

If a planning problem has non-deterministic state transitions, classical planning algorithms are not a well-suited solution method; (some form of) reinforcement learning is considered a well-suited solution in this case. From this point of view, if a planning problem has a state transition probability less than 1, classical planning methods (A*, GraphPlan, FF planner, etc.) are not the right tool to solve it.

Looking at reinforcement learning examples, in some cases environments are used to showcase reinforcement learning algorithms which could be very well solved by search/planning algorithms. They are fully deterministic, fully observable and sometimes they even have discrete action and state spaces.

Given an arbitrary fully deterministic planning problem, what is/are the characteristic(s) that make reinforcement learning, and not "classical planning" methods, better suited to solve the problem?

",2585,,2585,,2/8/2021 19:15,2/8/2021 19:15,What trait of a planning problem makes reinforcement learning a well suited solution?,,0,7,,,,CC BY-SA 4.0 26282,1,26286,,2/8/2021 14:57,,1,358,"

I am optimising a function that can have both positive and negative values in essentially unknown ranges (it might be -100, 30, 0.001, 4000, or -0.4), and I wonder how I can transform these results so that they can be used as a fitness function in evolutionary algorithms. The fitness should be able to go from negative to positive over the course of the optimisation (e.g. the best chromosome of the first generation might have -4.3, while the best at generation 1000 might have 5.9). The main goal is always to maximise the function.

Adding a constant value like 100 and then treating the result simply as positive is not possible because, like I said, the function might cover different ranges of values in different runs (for example, -10000 to +400 in one run and -0.002 to -0.5 in another).

Is there a way to solve this?

",22659,,,,,2/8/2021 23:10,How to deal with evolutionary/genetic fitness function that can have both negative and positive values?,,1,3,,,,CC BY-SA 4.0 26284,1,,,2/8/2021 19:16,,1,310,"

In the tutorial BERT – State of the Art Language Model for NLP the masked language modeling pre-training steps are described as follows:

In technical terms, the prediction of the output words requires:

  1. Adding a classification layer on top of the encoder output.

  2. Multiplying the output vectors by the embedding matrix, transforming them into the vocabulary dimension.

  3. Calculating the probability of each word in the vocabulary with softmax.

This process is visualized in the figure below, which is also taken from the tutorial.

I am confused about what exactly is done. Does it mean that each output vector O is fed into a fully connected layer with embedding_size neurons and then multiplied by the embedding matrix from the input layer?

Update:

In the tutorial The Illustrated GPT-2 (Visualizing Transformer Language Models) I found an explanation for GPT-2 which seems to be similar to my question.

In the tutorial is said that each output vector is multiplied by the input embedding matrix to get the final output.

Does the same mechanic apply to BERT?

",43632,,43632,,2/10/2021 14:25,2/10/2021 14:25,What does the outputlayer of BERT for masked language modelling look like?,,0,0,,,,CC BY-SA 4.0 26286,2,,26282,2/8/2021 23:10,,0,,"

If I understand correctly your problem, you always want to maximize some function $h(x)$, which is defined as follows $h: \Gamma \rightarrow \mathbb{R}$, where $\Gamma$ is the space of genotypes. However, at every generation $g$, you don't know exactly $h(i)$ of each individual $i$, i.e., maybe in one generation $g$, all $h(i)$, for $i=1, \dots, N$ (where $N$ is the size of the population), are negative, but in the next generation $g+1$, due to the mutations and crossovers, that may not be the case anymore, so you don't know how to shift $h(i), \forall i$, so that they are all positive and you can compute the probability of being selected, assuming you are using the fitness proportionate selection.

If my interpretation is correct, then, at every generation $g$, you just need to find $w = \operatorname{min}_i h(i)$, then shift all $h(i)$ by adding $|w| + \epsilon$ (where $\epsilon$ can be zero or a very small number) to all $h(i)$, so you would compute the fitness as follows: $f(i) = h(i) + |w| + \epsilon$, for all $i$. Here are all the steps to compute the probability of individual $i$ being selected, $p(i)$

  1. $w = \operatorname{min}_i h(i)$
  2. $f(i) \leftarrow h(i) + |w| + \epsilon, \forall i$
  3. $p(i) \leftarrow \frac{f(i)}{\sum_{j=1}^{N} f(j)}, \forall i$

This technique is known as windowing, which I have used in the past to solve the exact same problem that you seem to be trying to solve (check it here).
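As a minimal sketch of these three steps (variable names are mine):

    import numpy as np

    def selection_probabilities(h_values, eps=1e-8):
        """Windowing: shift the raw objective values so they are non-negative,
        then normalise them into fitness-proportionate selection probabilities."""
        h = np.asarray(h_values, dtype=float)
        w = h.min()                    # step 1: worst value in this generation
        f = h + abs(w) + eps           # step 2: shifted (non-negative) fitness
        return f / f.sum()             # step 3: selection probabilities

    # works whether the generation is all-negative, mixed, or all-positive
    print(selection_probabilities([-10000.0, 400.0, -3.0]))
    print(selection_probabilities([-0.5, -0.002, -0.3]))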

",2444,,,,,2/8/2021 23:10,,,,3,,,,CC BY-SA 4.0 26287,1,,,2/9/2021 0:10,,0,84,"

I understand there are multiple versions used in AlphaFold. What kind of deep learning model does the more advanced version use? CNN, RNN, or something else?

(Additionally, is there an open-source reference model for the protein folding problem?)

",16839,,2444,,2/9/2021 12:12,2/10/2021 0:26,What kind of deep learning model does latest version of AlphaFold use for protein folding problem?,,1,0,,,,CC BY-SA 4.0 26289,1,26296,,2/9/2021 7:03,,9,1847,"

In the context of Artificial Intelligence, sometimes people use the word "agent" and sometimes use the word "model" to refer to the output of the whole "AI-process". For examples: "RL agents" and "deep learning models".

Are the two words interchangeable? If not, in what case should I use "agents" instead of "models" and vice versa?

",16565,,2444,,2/9/2021 14:22,2/9/2021 22:43,What are the differences between an agent and a model?,,3,0,,,,CC BY-SA 4.0 26290,2,,26289,2/9/2021 8:19,,5,,"

In game AI context:

  • An Agent is a player that plays the game. Basically, it's a function that gets the current state of the game and returns the next action.
  • A Model is a representation of the game.

For example, I have made a Gin-Rummy game + AI agents. One aspect of the model was the representation of the deck as a $4 \times 13$ matrix, where each entry in the matrix is the card's status (location, whether the card has been seen by the opponent).

One can model the same game in different ways; most of the time there is a tradeoff between representability and simplicity.

",43351,,43351,,2/9/2021 15:41,2/9/2021 15:41,,,,1,,,,CC BY-SA 4.0 26292,2,,25984,2/9/2021 9:59,,3,,"

Because it is possible to fool many different models at once. See table 2 in this paper, for an example using adversarial perturbations: https://arxiv.org/pdf/1610.08401.pdf

That being said, there is no reason to think that using two detectors at once would not increase the chance of detecting deepfakes. It just will not resolve the problem completely.

",44507,,,,,2/9/2021 9:59,,,,0,,,,CC BY-SA 4.0 26293,2,,2738,2/9/2021 10:22,,1,,"

If you want to understand in one line how it's used in AI: Bayes' theorem tells you how to update your beliefs according to new data/information.

Bayes' theorem calculates the probability of something happening, given that some other thing has already happened. In this scenario, we already have some prior (a prior belief, i.e. the probability of an event occurring without any new information). Now, we observe the event again and again and keep updating our probability of the event occurring with the information we collect.
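Written out, for a hypothesis $H$ and new evidence $E$, Bayes' theorem is

$$P(H \mid E) = \frac{P(E \mid H) \, P(H)}{P(E)},$$

where $P(H)$ is the prior belief, $P(E \mid H)$ is the likelihood of the evidence under that belief, and $P(H \mid E)$ is the updated (posterior) belief, which then becomes the prior for the next piece of evidence.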

",44508,,2444,,2/9/2021 11:12,2/9/2021 11:12,,,,0,,,,CC BY-SA 4.0 26294,1,,,2/9/2021 10:43,,4,62,"

This is a topic I have been arguing about with my colleagues for some time now; maybe you could also voice your opinion about it.

Artificial neural networks use random weight initialization within a certain value range. These random parameters are derived from a pseudorandom number generator (Gaussian etc.) and they have been sufficient so far.

With a proper sample size, pseudorandom numbers can be statistically shown to be not truly random. With a huge neural network like GPT-3, with roughly 175 billion trainable parameters, I guess that if you applied the same statistical testing to the initial weights of GPT-3, you would also get a clear result that these parameters are pseudorandom.

With a model of this size, could in theory at least the repeatable structures of initial weights caused by their pseudorandomness affect the model fitting procedure in a way that the completed model would be affected (generalization or performance-wise)? In other words, could the quality of randomness affect the fitting of huge neural networks?

",44509,,2444,,2/9/2021 12:15,2/9/2021 12:15,Can the quality of randomness in neural network initialization affect model fitting?,,0,1,,,,CC BY-SA 4.0 26295,1,,,2/9/2021 11:02,,1,22,"

I'm new to machine learning and I'm trying to apply it to fault detection. An idea came to mind: use only anomaly detection first, and then, if the results come up as positive for a while, use a multi-class classification algorithm (with 7 different classes) to classify the fault type. Would that be efficient and save on computing resources?

",44510,,,,,2/9/2021 11:02,Using one-class classification first to find anomalies then apply multi-class classification,,0,0,,,,CC BY-SA 4.0 26296,2,,26289,2/9/2021 11:51,,9,,"

Agent

The other answer defines an agent as a policy (as it's defined in reinforcement learning). However, although this definition is fine for most current purposes, given that currently agents are mainly used to solve video games, in the real world, an intelligent agent will also need to have a body, which Russell and Norvig call an architecture (section 2.4 of the 3rd edition of Artificial Intelligence: A Modern Approach, page 46), which should not be confused with an architecture of a model or neural network, but it's the computing device that contains the physical sensors and actuators for the agent to sense and act on the environment, respectively. So, to be more general, the agent is defined as follows

agent = body + policy (brain)

where the policy is what Russell and Norvig call the agent program, which is an implementation of the agent function.

Alternatively, it can be defined as follows

An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.

This is just another definition given by Russell and Norvig, which I also report in this answer, where I describe different types of agents. Note that these definitions are equivalent. However, in the first one, we just emphasize that we need some means to "think" (brain) and some means to "behave" (body).

These definitions are quite general, so I think people should use them, although, as I said above, sometimes people refer to an agent as just the policy.

Model

In this answer, I describe what a model is or what I like to think a model is, and how it is different from a function.

In AI, a model can refer to different but somehow related concepts.

  • For example, in reinforcement learning, a model typically refers to $p(s', r \mid s, a)$, i.e. the joint probability distribution over the next state $s'$ and reward $r$, given the current state $s$ and action $a$ taken in $s$.

  • In deep learning, a model typically refers to a neural network, which can be used to compute (or model) different functions. For example, a neural network can be used to compute/represent/model a policy, so, in this case, there would be no actual difference between a model and an agent (if defined as a policy, without a body). However, conceptually, at a higher-level, these would still be different (in the same way that biological neural networks are different from the brain).

  • More generally, in machine learning, a model typically refers to a system that can be changed to compute some function. Examples of models are decision trees, neural networks, linear regression models, etc. So, as I also state in the other answer, I like to think of a model as a set of functions, so, in this sense, a model would be a hypothesis class in computational learning theory. This definition is roughly consistent with $p(s', r \mid s, a)$, which can also be thought of as a (possibly infinite) set of functions, but note that a probability distribution is not exactly a set of functions.

  • In the context of knowledge bases, a model is an assignment to the variables, which represents a "possible world". See section 7.3, page 240, of the cited book.

There are possible other uses of the word model (both in the context of AI, e.g. in the context of planning, there's often the idea of a conceptual model, which is similar to an MDP in RL, and in other areas), but the definitions given above should be more or less widely applicable in their contexts.

What is the difference between an agent and a model?

Given that there are different possible definitions of a model depending on the context, it's not easy to briefly state what the difference between the two is.

So, here's the difference in the context of RL (and you can now find out the differences in other contexts by using the different definitions): an agent can have a model of the world, which allows it to predict e.g. the reward it will receive given its current state and some action that it decides to take. The model can allow the agent to plan. In this same context, a model could also refer to the specific system (e.g. a neural network) used to compute/represent the policy of the agent, but note that people usually refer to $p(s', r \mid s, a)$ when they use the word model in RL. See this post for more details.

",2444,,2444,,2/9/2021 15:12,2/9/2021 15:12,,,,2,,,,CC BY-SA 4.0 26298,2,,6267,2/9/2021 16:24,,3,,"

Before proceeding and answering the actual question, it's worth noting that AI and AGI are not the same thing, as was the case at the beginning in 1956, as suggested in the official proposal for the Dartmouth workshop.

Nowadays, people that consider themselves "AI researchers" or "AI practitioners" (e.g. myself) typically are not trying to directly build an AGI, but are focusing on a specific AI approach, such as reinforcement learning, which could, one day, be used to build an AGI. The reason is that we have noticed that directly tackling the "AGI problem" (i.e. creating an AGI) is a lot more complex than was originally thought and some do not think that this is even possible. AGI is a sub-branch of AI that studies how to create an AGI (or human-like AI). Only a few people are still working on AGI.

Ben Goertzel, who's one of the people that is still interested in and attempting to directly create an AGI, wrote a blog post about this topic: AGI Curriculum. If he had to design a curriculum, it would be divided into 6 courses

  1. History of AI
  2. AI Algorithms, Structures and Methods
  3. Neuroscience & Cognitive Psychology
  4. Philosophy of Mind
  5. AGI Theories & Architectures
  6. Future of AGI

He then suggests multiple readings (books) for each of these courses/topics. Below, I will list one book for each of the courses (also based on their free availability online as pdfs). You can find more books in the blog post.

  1. The book What Computers Still Can't Do (1992) by Hubert Dreyfus
  2. The book Artificial Intelligence A Modern Approach (AIMA) by Russell and Norvig, but Goertzel notes that this is not an AGI book, but gives an introduction to multiple AI topics that have been used in many cognitive architectures for AGI
  3. The book Neuroscience: Exploring the Brain by Bear, Connors and Paradiso
  4. The book Being No One: The Self-model Theory of Subjectivity (2003) by Thomas Metzinger
  5. The paper Artificial General Intelligence: Concept, State of the Art, and Future Prospects (2014) by Ben Goertzel
  6. The book Singularity is Near (2005) by Kurzweil

So, to conclude, if you want to study artificial general intelligence, it's not sufficient to just read the typical machine learning or deep learning books, but you also need to have a more solid understanding of other aspects of artificial intelligence and even neuroscience in order to study and do research on AGI. Moreover, it's probably a good idea to also have a good background in all the traditional approaches, what they can or not do, the history of AI (why some approaches have failed or not), and understand the philosophical problems and, last but of course not least, read about the current approaches to AGI, such as universalist (e.g. AIXI) or symbolic ones (all the cognitive architectures such as OpenCog).

To answer your question more directly, if you can read and understand the AIMA book, then you probably have if not all most of the mathematical prerequisites, which will probably include

  • logic
  • discrete mathematics
  • calculus
  • optimization
  • linear algebra
  • probability theory
  • theory of computation (this will definitely be needed if e.g. you want to learn about AIXI, but you will also need a nice dose of measure theory and algorithmic information theory to understand all the mathematical details of the theory)

Note that, although these subjects (logic, probability theory, or theory of computation) are necessary to understand the current approaches to AGI, they may not be sufficient to develop a full AGI, but this is a different story. Moreover, note that these mathematical subjects are not just required to understand the current approaches to AGI, but they would also be useful to understand any other AI sub-branch, such as machine learning (and that's probably why people may think that this answer is misleading, but it's not: if you have ever tried to learn something about AIXI, you will know that all the subjects above are more than required!)

In the future, if you also want to do serious research on AGI, having a degree in Computer Science, Cognitive Science, Neuroscience, Mathematics, and/or, of course, Artificial Intelligence, may be a good thing. By the way, Ben Goertzel has a Ph.D. in math. Marcus Hutter, the inventor of AIXI, did his bachelor's and master's in CS with minors in mathematics, and one Ph.D. in theoretical particle physics and another Ph.D. in CS during basically the time that he developed AIXI.

",2444,,2444,,2/12/2021 17:07,2/12/2021 17:07,,,,0,,,,CC BY-SA 4.0 26299,1,26300,,2/9/2021 18:39,,3,214,"

I'm trying to implement a variational auto-encoder (as seen in Section 3.1 here: https://arxiv.org/pdf/2004.06271.pdf).

It differs from a traditional VAE because it encodes its input images to three-dimensional latent feature maps. In other words, the latent feature maps have a width, height and channel dimension rather than just a channel dimension like a traditional VAE.

When calculating the Kullback-Liebler divergence as part of the loss function, I need the mean and covariance that is the output of the encoder. However, if the latent feature maps are three-dimensional, this means that the output of the encoder is three-dimensional, and therefore each latent feature is a 2D matrix.

How can I derive a mean and covariance from a 2D matrix to calculate the KL divergence?

",44524,,,,,2/10/2021 12:29,How do you calculate KL divergence on a three-dimensional space for a Variational Autoencoder?,,1,0,,,,CC BY-SA 4.0 26300,2,,26299,2/9/2021 21:00,,1,,"

Your three-dimensional latent representation consists of two images, one of mean pixels and one of covariance pixels, as shown in Fig. 3. Together they represent a Gaussian distribution with a mean and covariance for each pixel in the latent representation. Each pixel value is a random variable.

Now, have a close look at KL-loss Eq. 3 and it's corresponding description in the paper:

$$\mathcal{L}_{KL} = \frac{1}{2 \times (\frac{W}{16} \times \frac{H}{16}) } \sum^M_{m = 1}[\mu^2_m + \sigma^2_m - \log(\sigma^2_m) - 1]$$

Finally, $M$ is the dimensionality of the latent features $\theta \in \mathbb{R}^M$ with mean $\mu = [\mu_1,...,\mu_M]$ and covariance matrix $\Sigma = \text{diag}(\sigma_1^2,...,\sigma_M^2)$, [...].

The covariance matrix is diagonal, thus all pixel values are independent of each other. That is the reason why we have this nice analytical form for the KL divergence given by Eq. 3. Therefore, you can treat your 2D random matrix simply as a random vector of size $M = \frac{W}{16} \times \frac{H}{16}$ ($\times 3$ if you want to include the colour dimension). The third dimension (the RGB channels) can be considered independent as well, so it can also be flattened to a vector and appended. Indeed, this is what is done in the paper, as indicated by the second half of the sentence quoted above:

that are reparameterized via sampling from a standard multivariate Gaussian $\epsilon \sim \mathcal{N}(0,I_M)$, i.e. $\theta = \mu + \Sigma^{\frac{1}{2}}\epsilon$.
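In code, because the covariance is diagonal, the mean and variance maps can simply be flattened and Eq. 3 becomes roughly the following (a sketch only; I am assuming the encoder outputs $\mu$ and $\log \sigma^2$ maps, which is the usual reparameterization, not necessarily the exact implementation of the paper):

    import torch

    def kl_loss(mu, log_var):
        """KL divergence of Eq. 3 for a diagonal Gaussian latent.
        mu, log_var: encoder outputs of shape (B, C, W/16, H/16)."""
        mu = mu.flatten(start_dim=1)              # (B, M)
        log_var = log_var.flatten(start_dim=1)    # (B, M)
        # 0.5 * mean over the latent dimensions of [mu^2 + sigma^2 - log(sigma^2) - 1];
        # this matches the normalisation of Eq. 3 when M = W/16 * H/16
        kl_per_sample = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1).mean(dim=1)
        return kl_per_sample.mean()               # average over the batch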

",37120,,37120,,2/10/2021 12:29,2/10/2021 12:29,,,,2,,,,CC BY-SA 4.0 26301,1,,,2/9/2021 21:06,,4,56,"

I have a model that outputs a latent N-dimensional embedding for all data points, trained in a way that clusters data-points from the same class together, while being separated from other clusters belonging to other different classes.

The N-dimensional embedding is projected down to 2D using UMAP. At each epoch, I wish to test the clustering capability of the model on these 2D projections for use as validation accuracy. I have the labels for each class.

How should I proceed?

",33314,,2444,,5/13/2022 8:27,5/13/2022 8:27,Which metric should I use to assess the quality of the clusters?,,2,0,,,,CC BY-SA 4.0 26302,2,,26301,2/9/2021 21:28,,2,,"

You can compute the Silhouette Coefficient for this purpose. Its values mean:

1: Means clusters are well apart from each other and clearly distinguished.

0: Means clusters are indifferent, or we can say that the distance between clusters is not significant.

-1: Means clusters are assigned in the wrong way.

Other measures, such as purity and mutual information, are also possible by computing

an external criterion that evaluates how well the clustering matches the gold standard classes
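For example, with scikit-learn this could look like the following (the arrays here are toy stand-ins for your 2D UMAP projections and known labels):

    import numpy as np
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(0)
    embedding_2d = np.concatenate([rng.normal(0, 1, (50, 2)),
                                   rng.normal(5, 1, (50, 2))])   # (n_samples, 2)
    labels = np.array([0] * 50 + [1] * 50)                       # known class labels

    score = silhouette_score(embedding_2d, labels)   # close to 1 => well-separated clusters
    print(score)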

",4446,,2444,,5/13/2022 8:26,5/13/2022 8:26,,,,0,,,,CC BY-SA 4.0 26303,1,,,2/9/2021 21:47,,0,325,"

I have scanned images, and they need to be classified into one of the pre-defined image classes so that they can be sorted. However, the problem is the open nature of the classes. At testing time, new classes of scanned images can be added, and the model should not only classify them as unseen (open-set image recognition), but it should also be able to tell to which new class they should belong (I am not able to figure out the implementation for this).

So, I am thinking that the options below could work for the classification of unseen classes:

  1. Zero-shot learning: Once the image is classified as unseen, we can then apply zero-shot learning to find its respective class for sorting.

  2. Template matching: Match the test image of unseen classes with all available class images, and, once we have a match, we can do sorting of images.

  3. Meta learning-based approach: I am not sure how to implement this, suggestions are much appreciated.

Note: I already tried the classical computer vision approach, but it's not working out, so I am more open to a neural-network-based approach.

Is my approach to solving the problem correct? If possible, suggest some alternative to find the corresponding match/classification of the unseen-class image, as these are the only alternative solutions I could think of.

",44197,,44197,,2/17/2021 18:12,12/19/2022 10:09,classification of unseen classes of image in open set classification,,2,0,,,,CC BY-SA 4.0 26305,2,,26289,2/9/2021 22:43,,2,,"

Agents act

The key property of agents is that they act. Quoting one possible definition (Russel&Norvig, AI: A modern approach), "An agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators." The environment can be the physical world around us, or some computer system, or a simulation, however, the key part of an agent that distinguishes it from non-agents is that it acts to affect that environment - as opposed to merely observing it and/or making some calculations.

Within the field of artificial intelligence there are many contexts where various models are used to decide upon a course of action. For example, reinforcement learning, or world models that are used in planning systems. In this case the whole system would be an agent, but the model would be just a part of that system.

However, that does not apply for all contexts - there are systems that are used for making decisions not related to any action (e.g. an OCR system recognizing particular letters), and there are systems that make a decision about some action, but are not intended to actually act on that decision (e.g. they show the proposed decision to a human which might act on that information). Such systems may contain models but in this case also the whole system should not be called an agent, because the actual act of acting is out of scope for that system.

",1675,,,,,2/9/2021 22:43,,,,0,,,,CC BY-SA 4.0 26307,2,,26287,2/10/2021 0:26,,1,,"

The accurate answer is 'we don't know for sure'. But that said, the Deepmind article about AlphaFold 2, which won the CASP 14 competition, says this:

A folded protein can be thought of as a “spatial graph”, where residues are the nodes and edges connect the residues in close proximity. This graph is important for understanding the physical interactions within proteins, as well as their evolutionary history. For the latest version of AlphaFold, used at CASP14, we created an attention-based neural network system, trained end-to-end, that attempts to interpret the structure of this graph, while reasoning over the implicit graph that it’s building. It uses evolutionarily related sequences, multiple sequence alignment (MSA), and a representation of amino acid residue pairs to refine this graph.

A reasonable assumption that can be formed from this paragraph is that it is using some kind of a Transformer-based architecture. Yannic Kilcher's review video on AlphaFold discusses this assumption, along with a detailed explanation of the previous version of AlphaFold.

The source code for this previous version of AlphaFold, the one that was used in CASP 13, seems to have been open-sourced by Deepmind in their Github repository:

https://github.com/deepmind/deepmind-research/tree/master/alphafold_casp13

",3712,,,,,2/10/2021 0:26,,,,0,,,,CC BY-SA 4.0 26309,1,,,2/10/2021 3:22,,3,73,"

A Turing test is a method of inquiry for determining whether or not a computer is capable of thinking like a human being. In an ideal Turing test, it would be possible to clearly differentiate between a real human being and a robot or AI with human characteristics.

However, it is also possible in a Turing test that a human tries to mimic the behaviour of a computer so that the person applying the test cannot distinguish between a human being and a robot/AI.

Is this a concept that is explored much in computer science? As in research into the variations of Turing tests that can be used to identify whether a human is trying to mimic or impersonate as a robot or AI.

",1745,,,,,2/10/2021 3:22,What is the reverse of passing a Turing test by a human pretending to be a robot that can't be identified?,,0,2,,,,CC BY-SA 4.0 26311,1,,,2/10/2021 7:37,,0,82,"

I am relatively new to machine learning, and I am trying to use a deep neural network to extract some information from sequences of RNA.

A quick overview of RNA: there is both sequence and structure. I am currently expressing the sequence with one-hot encoding (so a sequence of length $60$ would be expressed as a $60 \times 4$ matrix, with one row for each letter of the sequence, and one column for each possible value of that letter). I am also feeding the 2D structure of the RNA into the network, which is expressed as $60 \times 60$ matrix for a sequence of length $60$.

I am trying to use these inputs to predict a single continuous value for a given sequence.

Currently, I am using pretty much the exact setup from this tutorial. I chose this architecture because it allows me to separate the inputs (sequence and structure) and have individual layers for them before merging them into a single model. I think this makes more sense than trying to glue the two separate pieces of data together into a single input.

However, the model doesn't seem to be learning anything - validation loss decreases very slightly then plateaus.

If anyone has suggestions, especially someone who has worked with RNA, DNA, or proteins before, I would really, really appreciate it. I am new to this, and I am not sure how to improve my model from here.

from tensorflow.keras.layers import (Input, Conv1D, Conv2D, MaxPooling1D, MaxPooling2D,
                                     Activation, BatchNormalization, Flatten, Dense,
                                     Dropout, concatenate)
from tensorflow.keras.models import Model

def create_mlp(height, width, filters=(16, 16, 32, 32, 64), regress=False):
  # initialize the input shape and channel dimension, assuming
  # TensorFlow/channels-last ordering
  inputShape = (height, width)
  chanDim = -1
  # define the model input
  inputs = Input(shape=inputShape)
  # loop over the number of filters
  for (i, f) in enumerate(filters):
    # if this is the first CONV layer then set the input
    # appropriately
    if i == 0:
      x = inputs
    # CONV => RELU => BN => POOL
    x = Conv1D(f, 3, padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(axis=chanDim)(x)
    x = MaxPooling1D(pool_size=2)(x)
  # flatten the volume, then FC => RELU => BN => DROPOUT
  print(x.shape)
  x = Flatten()(x)
  x = Dense(16)(x)
  x = Activation("relu")(x)
  x = BatchNormalization(axis=chanDim)(x)
  x = Dropout(0.5)(x)
  # apply another FC layer, this one to match the number of nodes
  # coming out of the MLP
  x = Dense(4)(x)
  x = Activation("relu")(x)
  # check to see if the regression node should be added
  if regress:
    x = Dense(1, activation="linear")(x)
  # construct the CNN
  model = Model(inputs, x)
  # return the CNN
  return model

def create_cnn(width, height, depth, filters=(16, 16, 32, 32, 64), regress=False):
  # initialize the input shape and channel dimension, assuming
  # TensorFlow/channels-last ordering
  inputShape = (height, width, depth)
  chanDim = -1
  # define the model input
  inputs = Input(shape=inputShape)
  # loop over the number of filters
  for (i, f) in enumerate(filters):
    # if this is the first CONV layer then set the input
    # appropriately
    if i == 0:
      x = inputs
    # CONV => RELU => BN => POOL
    x = Conv2D(f, (3, 3), padding="same")(x)
    x = Activation("relu")(x)
    x = BatchNormalization(axis=chanDim)(x)
    x = MaxPooling2D(pool_size=(2, 2))(x)
  # flatten the volume, then FC => RELU => BN => DROPOUT
  print(x.shape)
  x = Flatten()(x)
  x = Dense(16)(x)
  x = Activation("relu")(x)
  x = BatchNormalization(axis=chanDim)(x)
  x = Dropout(0.5)(x)
  # apply another FC layer, this one to match the number of nodes
  # coming out of the MLP
  x = Dense(4)(x)
  x = Activation("relu")(x)
  # check to see if the regression node should be added
  if regress:
    x = Dense(1, activation="linear")(x)
  # construct the CNN
  model = Model(inputs, x)
  # return the CNN
  return model

# l is the sequence length (60 in the example above)
mlp = create_mlp(l, 4, regress=False)
cnn = create_cnn(l, l, 1, regress=False)
# create the input to our final set of layers as the *output* of both
# the MLP and CNN
combinedInput = concatenate([mlp.output, cnn.output])
",44539,,34371,,2/16/2021 8:13,2/16/2021 8:13,Extracting information from RNA sequence,,1,3,,,,CC BY-SA 4.0 26314,1,26315,,2/10/2021 12:05,,3,282,"

The first step of MCTS is to keep choosing nodes based on Upper Confidence Bound applied to trees (UCT) until it reaches a leaf node where UCT is defined as

$$\frac{w_i}{n_i}+c\sqrt{\frac{ln(t)}{n_i}},$$

where

  • $w_i$= number of wins after i-th move
  • $n_i$ = number of simulations after the i-th move
  • $c$ = exploration parameter (theoretically equal to $\sqrt{2}$)
  • $t$ = total number of simulations for the parent node

I don't really understand how this equation avoids sibling nodes being starved, i.e. never explored. Say you have 3 nodes, and one of them, call it node A, is chosen randomly to be explored and happens to simulate a win. So node A's UCT is $1+\sqrt{2}\sqrt{\frac{\ln(1)}{1}}$, while the other 2 nodes have UCT = 0, because they are unexplored and the game just started. So, by UCT, the other 2 nodes will never be explored, no? After this, the search goes into the expansion phase, and expansion only happens when it reaches a leaf node in the tree. So, because node A is the only one with a UCT $> 0$, it'll choose a child of node A and keep going down that branch, since all the siblings of node A have a UCT of 0 and never get explored.

",30885,,2444,,2/10/2021 13:13,2/10/2021 13:48,How UCT in MCTS selection phase avoids starvation?,,1,1,,,,CC BY-SA 4.0 26315,2,,26314,2/10/2021 13:17,,2,,"

First explore the nodes A,B,C once.

For reference see this paper by David Silver and Sylvain Gelly, Combining Online and Offline Knowledge in UCT

If any action from the current state $s$ is not represented in the tree, $\exists a \in \mathcal{A}(s),(s, a) \notin \mathcal{T},$ then the uniform random policy $\pi_{\text {random }}$ is used to select an action from all unrepresented actions, $\tilde{\mathcal{A}}(s)=\{a \mid(s, a) \notin \mathcal{T}\}$.
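In other words, the selection rule treats unvisited children specially; a small illustrative sketch:

    import math
    import random

    def select_child(children, c=math.sqrt(2)):
        """children: list of dicts with 'wins' and 'visits' counters.
        Unvisited children are tried first (uniformly at random); only once every
        child has been visited at least once is the UCT formula applied."""
        unvisited = [ch for ch in children if ch["visits"] == 0]
        if unvisited:
            return random.choice(unvisited)
        t = sum(ch["visits"] for ch in children)   # total simulations through the parent
        return max(
            children,
            key=lambda ch: ch["wins"] / ch["visits"]
                           + c * math.sqrt(math.log(t) / ch["visits"]),
        )

So in your example, after node A is visited once, B and C are still unvisited and are therefore selected before the UCT values are ever compared, which is what prevents starvation.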

",43351,,2444,,2/10/2021 13:48,2/10/2021 13:48,,,,0,,,,CC BY-SA 4.0 26318,1,,,2/10/2021 20:27,,3,110,"

The boy lifted the bat and hit the ball.

In the above sentence, the noun "bat" means the wooden stick. It does not mean bat, the flying mammal, which is also a noun. Using NLP libraries to find the noun version of the definition would still be ambiguous.

How would one go about writing an algorithm that gets the exact definition, given a word, and the sentence it is used in?

I was thinking you could use word2vec, then use autoextend https://arxiv.org/pdf/1507.01127.pdf to differentiate between 2 different lexemes e.g. bat (animal) and bat (wooden stick).

Then the smallest cosine distance between the dictionary definition and any of the words of the sentence might indicate the correct definition.

Does this sound correct?

",44550,,2444,,2/10/2021 23:39,2/23/2021 17:01,How would one disambiguate between two meanings of the same word in a sentence?,,1,0,,,,CC BY-SA 4.0 26320,2,,26311,2/10/2021 23:06,,1,,"

The link you have mentioned uses Dense layers. One thing to start with would be to use 1D CNNs (they will capture some local information). Also, since the order of the sequence matters in your case, you can replace the one-hot encoding with a simple integer encoding (just 1, 2, 3, 4). For the 2D matrix, use a 2D CNN. Then flatten the outputs of both the 1D CNN and the 2D CNN and finally combine them. This should give you some improvement.

This is for your network your are already using.

Here is an alternate suggestion for you:

Look at GCNs (Graph Convolutional Networks). You have a graph structure (the 60x60 matrix is actually an adjacency matrix), and the 60x4 matrix can then provide the node features (one 4-dimensional feature vector per node). This is exactly what GNNs and their variants are for. Look at GATs (Graph Attention Networks) if GCNs don't work for you.

",37203,,,,,2/10/2021 23:06,,,,2,,,,CC BY-SA 4.0 26321,2,,26303,2/10/2021 23:15,,0,,"

I have one possible solution for you, inspired by real-time face recognition systems, which face a similar problem to yours.

  1. Create an embedding for each class, i.e. a person in face recognition or, in your case, a document class (using a Siamese network with the ArcFace loss).
  2. When a new image comes in, take the L1 or cosine distance to the stored embeddings. You will need a threshold above which you create an embedding for the new image and give it a new class (a rough sketch is given below).
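A rough sketch of step 2 (class_embeddings and threshold are placeholders you would have to build and tune yourself):

    import numpy as np

    def assign_class(new_emb, class_embeddings, threshold=0.5):
        """Match a new embedding against stored per-class embeddings by cosine
        distance; if nothing is close enough, open a new class for it."""
        best_cls, best_dist = None, float("inf")
        for cls, emb in class_embeddings.items():
            cos = np.dot(new_emb, emb) / (np.linalg.norm(new_emb) * np.linalg.norm(emb))
            dist = 1.0 - cos
            if dist < best_dist:
                best_cls, best_dist = cls, dist
        if best_dist > threshold:                  # no good match => unseen class
            new_cls = f"class_{len(class_embeddings)}"
            class_embeddings[new_cls] = new_emb
            return new_cls
        return best_cls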

One issue you will run into at this point is how to do that when you have billions of data points. The search for the matching embeddings can then be done using Hierarchical Navigable Small World graphs (see https://github.com/nmslib/hnswlib).

This solution is loosely connected to the plausible techniques you have mentioned. If you still want something in that direction, I would add the keyword Active Learning.

",37203,,,,,2/10/2021 23:15,,,,3,,,,CC BY-SA 4.0 26322,2,,26301,2/10/2021 23:19,,1,,"

One more popular metric for this is the Davies Bouldin Score.

You can also take a look at the clustering metrics in scikit documentation.

",37203,,2444,,5/13/2022 8:27,5/13/2022 8:27,,,,0,,,,CC BY-SA 4.0 26323,1,,,2/10/2021 23:24,,0,40,"

I was going through this paper on helicopter flight control using reinforcement learning by Andrew Ng et al.

It defines two policy classes to learn two policies, one for hovering the helicopter and another for maneuvering (aka trajectory following). The goal for hovering policy is defined as follows:

We want a controller that, given the current helicopter state and a desired hovering position and orientation $\{x^*, y^*, z^*, \omega^*\}$ computes controls $a\in [-1,1]^4$ to make it hover stably.

The goal for maneuvering policy is given in term of hovering policy as follows:

Given a controller for keeping a system’s state at a point $(x^*,y^*,z^*,\omega^*)$, one standard way to make the system move through a particular trajectory is to slowly vary $(x^*,y^*,z^*,\omega^*)$ along a sequence of set points on that trajectory points.

The neural network for these policy classes is shown as follows:

In this, $(\dot{x},\dot{y},\dot{z})$ are velocity estimates, $(\phi,\theta)$ is helicopter roll and pitch estimates and $\dot{w}$ is angular velocity component estimate (more on this at the bottom of page 1 of the linked paper).

Each edge with an arrow in the picture denotes a tunable parameter. The solid lines show the hovering policy class. The dashed lines show the extra weights added for trajectory following (maneuvering). With this observation, I had following doubts:

Q1. Does the addition of the dashed lines to the hovering policy, to obtain the maneuvering policy, make the maneuvering policy a superset of the hovering policy?

Q2. Rephrasing Q1: can we use the maneuvering policy for the hovering task (say, by setting the weights corresponding to the dashed lines to zero)?

Q3. If the maneuvering policy is indeed a superset of the hovering policy, why don't the authors just use the maneuvering policy for both tasks, or at least also for the hovering task? Is it because it involves computation over the helicopter's additional sub-dynamics represented by the dashed lines, and this additional computation is not required for the hovering task?

Or am I getting all of these questions completely wrong?

",41169,,2444,,2/11/2021 12:23,2/11/2021 12:23,Understanding policies in helicopter control in the paper by Andrew Ng et al,,0,4,,,,CC BY-SA 4.0 26324,1,,,2/11/2021 0:34,,0,34,"

I have the following game board below, and we're using A* search to find the optimal path from the agent to the key. There are 8 directions. Up, down, left, right have a cost of 1, and diagonal directions have cost 3. We will be using a priority queue with function $f(v) = g(v) + h(v)$ where $g(v)$ is the backwards cost from the goal through the given edges and up to the vertex v while $h(v)$ is the optimal least cost distance from v to the goal node.

So I calculated the f(s) for the different states, assuming no prior edges specified:

And then I started the search and these are the steps I took: expand C: (CD,3), (CE,3), (CF,3), (CA,5), (CB,5)

expand CD: (CDF,3),(CE,3), (CF,3), (CA,5), (CB,5), (CDB,5)

expand CDF: (CDFH,3), (CE,3), (CF,3), (CA,5), (CB,5), (CDB,5), (CDFG,6)

expand CDFH: (CE,3), (CF,3), (CA,5), (CB,5), (CDB,5), (CDFG,6)

So I only expanded C, D, F, H. I got the correct answer for the optimal path, but not the correct answer for the nodes expanded, which is supposed to be C, D, E, F, G, H. What am I doing wrong?

",44537,,,,,2/11/2021 0:34,Incorrect node expansion in game board with A* search,,0,4,,,,CC BY-SA 4.0 26325,2,,22307,2/11/2021 2:21,,2,,"

Feature scaling is a concern when a model is characterized by a distance metric (or another kind of numerical evaluation, for that matter). Therefore, models such as support vector machines, neural networks, distance-based clustering methods (e.g. k-means) and linear/logistic regression are sensitive to feature scaling.

Those which are based on probability rather than distances are scale-invariant. These include Naive Bayes classifiers and decision trees.

",44556,,44556,,2/14/2021 15:52,2/14/2021 15:52,,,,0,,,,CC BY-SA 4.0 26327,1,26339,,2/11/2021 4:10,,6,1587,"

Conceptually, in general, how is the context being handled in contextual bandits (CB), compared to states in reinforcement learning (RL)?

Specifically, in RL, we can use a function approximator (e.g. a neural network) to generalize to other states. Would that also be possible or desirable in the CB setting?

In general, what is the relation between the context in CB and the state in RL?

",33586,,2444,,2/11/2021 16:31,2/11/2021 17:01,What is the relation between the context in contextual bandits and the state in reinforcement learning?,,2,0,,,,CC BY-SA 4.0 26329,1,,,2/11/2021 5:50,,2,38,"

I was reading a paper on the subject of explainable AI and interpretability, in particular the tendency of people (even experts) to excessively trust explanations given by an AI. In the intro, the author describes riding in a self-driving car with a screen on the passenger side that depicts the car's vision and classification of the objects on the road, ostensibly to improve the level of trust in the car's decision-making. Later, the author quotes a study in which experts in a field give good ratings to an AI's explanations for its decision-making, even when the explanations given are intentionally incorrect.

I cannot for the life of me remember which paper this is or what it covers in its main sections, and, after searching through dozens of my saved papers as well as online search engines, I cannot recover it. The paper also mentions a specific term for trusting machines/AI, which I also can't remember; remembering it would definitely help me find the paper.

If anyone is familiar with this paper or the study it quotes, I would really appreciate a link.

",44562,,11539,,2/15/2021 14:05,2/15/2021 14:05,What is the paper that states that humans incorrectly trust the incorrect explanations of the AI?,,0,1,,,,CC BY-SA 4.0 26330,1,,,2/11/2021 6:33,,2,4249,"

I came across the hinge loss function for training a neural network model, but I do not know its analytical form.

I can write the mean squared error loss function (which is more often used for regression) as

$$\sum\limits_{i=1}^{N}(y_i - \hat{y_i})^2$$

where $y_i$ is the desired output in the dataset, $\hat{y_i}$ is the actual output by the model, and $N$ is the total number of instances in our dataset.

Similarly, what is the (basic) expression for hinge loss function?

",18758,,2444,,2/11/2021 13:26,2/11/2021 13:26,What is the definition of the hinge loss function?,,1,1,,,,CC BY-SA 4.0 26331,1,,,2/11/2021 9:25,,1,31,"

The authors of this paper present a framework for checking the constraints of AI systems using formal argumentative logic between 2 agents: an interrogating agent and a suspect agent. The interrogating agent attempts to find contradictions in the responses of the suspect agent by querying it for information, and the suspect agent must provide all the relevant information.

Is this framework really new? I am pretty certain that I saw a very similar framework in the context of program verification some years ago.

",44566,,2444,,2/16/2021 16:59,2/16/2021 16:59,Is the framework provided by this paper for checking the constraints of AI systems really new?,,0,0,,,,CC BY-SA 4.0 26332,1,26432,,2/11/2021 10:06,,0,91,"

I need some tool to classify articles based on a short category text, which consists of two or three words separated by '-'. The RSS/XML tag content is, for example:

Foreign - News

Football - Foreign

I created my own categories, and now I need to map the categories parsed from the RSS of this news source onto the news categories defined by me.

For example, I would need all articles containing the category "football" to be identified as the category Sport, but sometimes those category XML tags contain an exact match, e.g. Foreign - News should belong in the DB to the category defined by me as Foreign.

I can, of course, also use a longer description text if that were needed, but I think that would not even be necessary for this simple problem.

Since, so far, I have only used trained decision tree frameworks for another project, I would like to hear advice about the approach, AI technique, or particular framework I could use to solve this problem. I don't want to head down a dead-end street because of my own inexperienced decision in a field I don't know well.

",44567,,44567,,2/17/2021 16:06,2/17/2021 18:59,What approach to use for selecting one of the category according to short category text?,,1,1,,,,CC BY-SA 4.0 26333,1,,,2/11/2021 11:23,,2,39,"

I am trying to design a CNN that can do pixel-wise segmentation of leaf edges in dense-foliage agricultural images, such as these:

On the basis of this article https://arxiv.org/pdf/1904.03124.pdf, two classes are defined: the external contours and the internal contours of the leaf boundaries. In addition, a multi-scale approach is used through a U-Net-like architecture, and an auxiliary loss is used to learn edge detection at different scales (which corresponds to the main idea of this article https://arxiv.org/pdf/1804.01646.pdf). I train the network using an mIoU loss whose weight varies depending on the class and the scale. Finally, my last activation layer is a clipped ReLU. The results are starting to be good:

However, the network is not able to reconnect some internal edges. The following image shows a broken inner edge that does not continue to the next part (in blue):

So I'm looking for a paper, a Git repository, code, or anything else that can improve the reconstruction of missing edges (inside or outside the CNN).

",44569,,11539,,2/14/2021 10:07,2/14/2021 10:07,CNN leaf segmentation throught classification of edges how to improve,,0,0,,,,CC BY-SA 4.0 26335,1,,,2/11/2021 12:51,,2,57,"

I came across this sentence when exploring a simple nearest neighbor classifier method using Euclidean distance (link):

The slightly odd thing about using the Euclidean distance to compare features is that each dimension (or feature) is compared using the same scale.

This got me thinking - if a flat feature space implies that each feature contributes equally to the distance (score function), then a curved feature space changes the scale between features, so that the features then contribute different amounts to the score function. For example, imagine we have a 2D feature space - a flat piece of paper - with two points, $X_1$ and $X_2$, on it, between which we wish to calculate the distance. If we then bend this into a U-shape along, say, the y-axis (so, no curvature is introduced in the y-dimension), the distances along the x-axis would be larger in the bent case than in the flat case:

In other words, feature x would contribute more to the score function than feature y. This sounds awfully like weighting the feature inputs with weight vectors. Does this imply that weight vectors (and matrices) have a direct effect on the curvature of the feature space? Does an identity weight matrix (or a vector of all 1s) imply our feature space is flat (and curved otherwise)? Lastly, could it then be said that whenever we are training an ML model, we are in fact learning the approximate curvature of the feature space we wish to model?

",44571,,11539,,2/14/2021 15:13,2/14/2021 15:13,Does the weight vector form imply feature space curvature?,,1,2,,,,CC BY-SA 4.0 26336,2,,26330,2/11/2021 13:23,,3,,"

The hinge loss/error function is the typical loss function used for binary classification (but it can also be extended to multi-class classification) in the context of support vector machines, although it can also be used in the context of neural networks, as described here.

The hinge loss function is defined as follows

$$ \ell(y) = \max(0, 1-t \cdot y) \tag{1}\label{1}, $$ where

  • $t = \{-1, 1\}$ is the label (so, if your labels are in the set $\{0, 1 \}$, you will have to first map them to $\{-1, 1\}$)
  • $y$ is the output of the classifier (e.g. in the context of the linear SVM, $y=\mathbf{w} \cdot \mathbf{x}+b$, where $\mathbf{w}$ and $b$ are the parameters of the hyperplane)

This means that the loss in equation \ref{1} is always non-negative. If you're familiar with the ReLU, this loss should look familiar to you. In fact, their plots are very similar.
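For concreteness, here is a minimal NumPy sketch of equation \ref{1} averaged over a batch (the variable names are mine, not from any library):

import numpy as np

def hinge_loss(y, t):
    # y: classifier outputs, t: labels in {-1, 1}; returns the mean hinge loss
    return np.mean(np.maximum(0.0, 1.0 - t * y))

y = np.array([2.3, 0.7, -0.4])   # two correct outputs (one with a small margin), one wrong output
t = np.array([1, 1, 1])
print(hinge_loss(y, t))          # only the last two terms contribute (0.3 and 1.4)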

For more details, you probably should start with the related Wikipedia article, then maybe one of the many machine learning books that covers support vector machines, for example, Pattern Recognition and Machine Learning (2006) by Christopher Bishop, chapter 7 (page 325).

",2444,,,,,2/11/2021 13:23,,,,0,,,,CC BY-SA 4.0 26337,1,26338,,2/11/2021 13:29,,4,1295,"

I'm trying to find the name for a model that is used to output a decision (maybe something like right, left, or do nothing = -1, 0, 1) but that can be trained with labels that only indicate how "correct" or "incorrect" it was. I've tried to google around and ask some friends in my machine learning class, but no one seems to have an answer.

The classic example I seem to always see is the models used in the snake game. We don't know what the right decision was per se, but we can say that if it ran into the wall, that was really wrong. Or if it got an apple and gained 50 points, then it was correct and if it got 2 apples and gained 100 points then it was even more correct, etc.

I'm looking for a network where the exact labels don't exist, but where we can penalize or reward its decisions.

I'm assuming this requires some kind of modified cost function, but I would imagine this type of network already exists. I'm hoping someone can provide me with the name for this type of network and whether or not there is a Keras frontend for something like this.

",44335,,2444,,2/12/2021 16:10,2/12/2021 16:10,"Is there a machine learning model that can be trained with labels that only say how ""right"" or ""wrong"" it was?",,1,0,,,,CC BY-SA 4.0 26338,2,,26337,2/11/2021 13:48,,13,,"

What you are looking for is called "reinforcement learning".

A reinforcement learning algorithm will try to maximize a reward function. This reward represents how "good" or "bad" an action is in the current context. For example, in the snake game, your reward will be positive for eating an apple and negative when the snake hits a wall.

The interesting thing is that, with reinforcement learning, you can learn without having a reward at each step. In the case of the snake game, your agent can learn that going in the direction of the apple is better than going in the direction of the wall, even if neither of these actions directly gives a reward (positive or negative).

If you want to use a neural network, as your post seems to imply, then you should look at deep Q-learning, a reinforcement learning algorithm which uses a neural network to learn to predict the expected reward of a (state, action) pair.
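To give a flavour of how this works in the tabular case (before bringing in neural networks), here is a minimal Q-learning sketch; the action set, hyper-parameters, and function names are illustrative placeholders:

import random
from collections import defaultdict

Q = defaultdict(float)                 # Q[(state, action)] -> estimated expected return
actions = [0, 1, 2, 3]                 # e.g. up, down, left, right in the snake game
alpha, gamma, eps = 0.1, 0.99, 0.1     # learning rate, discount factor, exploration rate

def act(state):
    if random.random() < eps:                               # explore
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])        # exploit the best known action

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in actions)
    td_target = reward + gamma * best_next                  # reward now + discounted future value
    Q[(state, action)] += alpha * (td_target - Q[(state, action)])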

",26961,,26961,,2/11/2021 18:53,2/11/2021 18:53,,,,3,,,,CC BY-SA 4.0 26339,2,,26327,2/11/2021 16:51,,4,,"

The notion of a state in reinforcement learning is (more or less) the same as the notion of a context in contextual bandits. The main difference is that, in reinforcement learning, an action $a_t$ in state $s_t$ not only affects the reward $r_t$ that the agent will get but it will also affect the next state $s_{t+1}$ the agent will end up in, while, in contextual bandits (aka associative search problems), an action $a_t$ in the state $s_t$ only affects the reward $r_t$ that you will get, but it doesn't affect the next state the agent will end up in. The typical problem that can be formulated as a contextual bandit problem is a recommender system.

In CBs, like in RL, the agent also needs to learn a policy, i.e. a function from states to actions, but actions that you take in a certain state are independent of the actions you take in other states.
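To make this concrete, here is a minimal sketch (not from any specific library; the names are mine) of an epsilon-greedy contextual bandit that keeps an independent running-average action-value estimate per context:

import random
from collections import defaultdict

values = defaultdict(float)   # values[(context, action)] -> estimated mean reward
counts = defaultdict(int)
actions = [0, 1, 2]
eps = 0.1

def choose(context):
    if random.random() < eps:                                   # explore
        return random.choice(actions)
    return max(actions, key=lambda a: values[(context, a)])     # exploit

def update(context, action, reward):
    counts[(context, action)] += 1
    n = counts[(context, action)]
    # incremental running average of the rewards observed for this (context, action)
    values[(context, action)] += (reward - values[(context, action)]) / n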

So, as Sutton and Barto put it (2nd edition, section 2.9, page 41), contextual bandits are an intermediate problem between (context-free) bandits (where there is only one state or, equivalently, no state at all) and the full reinforcement learning problem.

Another important characteristic of many RL algorithms, such as Q-learning, is that they assume that the state is Markov, i.e. it contains all necessary info to take the optimal action, but, of course, RL is not just applicable to fully observable MDPs. In fact, even Q-learning has been applied to POMDPs, with some approximations and tricks.

Regarding the use of neural networks to approximate $q(s, a)$ or a policy in CBs, in principle, this is possible. However, given that the optimal action in a state $s$ is independent of the optimal action in another state $s'$, this is probably not useful, but I cannot guarantee you that this has not been successfully done, because I've not yet read the relevant literature (maybe someone else will provide another answer to address this aspect).

",2444,,2444,,2/11/2021 16:59,2/11/2021 16:59,,,,7,,,,CC BY-SA 4.0 26340,2,,26327,2/11/2021 16:54,,3,,"

Conceptually, in general, how is the context being handled in CB, compared to states in RL?

In terms of its place in the description of Contextual Bandits and Reinforcement Learning, context in CB is an exact analog for state in RL. The framework for RL is a strict generalisation of CB, and can be made similar or the same in a few separate ways:

  • If the agent is optimised for immediate reward only (discount factor $\gamma=0$), then the optimal action choice depends only on the current state, without considering consequences. However, the environment may not behave much like a contextual bandit over multiple time steps, so it would be hard to think in terms of the kinds of optimisations that apply for CB (such as minimising regret).

  • If state progression in RL is unrelated to the action chosen, then optimal action choices depend only on the current state. There still might be some benefit from understanding the expected state progression in order to predict future rewards, and the ability to learn about different states may be limited by the progression, so this is not full equivalence, but it is very close.

  • If state progression in RL is unrelated to any previous history (of states, actions, rewards), and the state is drawn from the same population at any time step, then the full MDP description is not necessary; each time step is expected to be like the last. A contextual bandit model could well be more appropriate.

Another thing to consider is what your objectives are for studying the environment or applying an agent within it. Bandit solvers are usually applied to environments where the agent is expected to learn strictly online, and the goal of the developer is to write a learner that uses a minimal amount of information to decide on optimal or near optimal choices. One common metric for this is to minimise regret, or the expected difference in reward between the agent's action choices and the ideal choice summed over time.

If you have offline data to work from, then predictions for an optimal agent in a CB environment devolve to supervised learning of a regression task. There is no simple equivalent for this in RL, because actions have consequences that create links between states. As a result, offline RL methods are very similar to online RL ones - the state, action, reward data is processed much the same way.

",1847,,1847,,2/11/2021 17:01,2/11/2021 17:01,,,,0,,,,CC BY-SA 4.0 26341,2,,24613,2/11/2021 19:24,,3,,"

Many techniques for the exploration/exploitation dilemma that are inspired by multi-armed bandit problems, such as UCB1, assume that you can explicitly enumerate all state-action pairs; in fact, multi-armed bandit problems usually only have just one "state", and then this requirement turns into only requiring the ability to enumerate actions.

In RL problems that are small enough to be handled with tabular approaches (without any function approximation), this may still be feasible. But for many interesting RL problems, the state and/or action spaces grow so large that you have to use function approximators (Deep Neural Networks are a popular choice, but others exist too). When you are unable to enumerate your state-action space, you can no longer keep track of things like the visit counts that are normally used in UCB1 and related approaches.
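For reference, a minimal sketch of UCB1-style action selection in the enumerable case looks like the snippet below (illustrative only; counts and values are the per-arm visit counts and mean rewards, and untried arms are selected first):

import numpy as np

def ucb1_action(counts, values, t, c=2.0):
    # counts[a]: number of times arm a was tried; values[a]: mean reward of arm a;
    # t: total number of pulls so far.
    counts = np.asarray(counts, dtype=float)
    values = np.asarray(values, dtype=float)
    if np.any(counts == 0):
        return int(np.argmin(counts))
    return int(np.argmax(values + np.sqrt(c * np.log(t) / counts)))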

There certainly are more advanced exploration techniques for RL than just $\epsilon$-greedy though, and some may even resemble / take inspiration from bandit-based approaches. There's an excellent blog post on Exploration Strategies in Deep Reinforcement Learning here. For example, you may think of some of the approaches described under "Count-based Exploration" as trying to solve the issue of tracking visit counts as I described above in settings with function approximation.

",1641,,,,,2/11/2021 19:24,,,,0,,,,CC BY-SA 4.0 26342,1,26348,,2/11/2021 19:41,,1,187,"

I am new to graph neural networks and their applications. I have an input graph $G = \{V, E\}$ and an output graph $G' = \{V', E'\}$, where the numbers of nodes $|V|$ and $|V'|$ are different. I am trying to learn a function $f(G) = G'$ with $|V| > |V'|$; thus, the function is a many-to-one mapping ($n$ nodes map to one). The Graph Convolutional Network (GCN) seems to require the same number of nodes in the input and output when learning the function. Could I utilize the GCN for my task?

",19541,,2444,,2/12/2021 20:06,2/12/2021 22:14,Is there a graph neural network algorithm that can deal with a different number of input and output nodes?,,1,0,,,,CC BY-SA 4.0 26343,1,,,2/11/2021 20:18,,1,100,"

I am taking Berkeley's CS285 via self-study. In this particular lecture on policy gradients, I am very confused about the inconsistency between the concept explanation and the demonstrated code snippet. I am new to RL and hope someone could clarify this for me.

Context

1. The lecture defines the policy gradient as follows:

log(pi_theta(a | s)) denotes the log probability of action given state under policy parameterized by theta

gradient log(pi_theta(a | s)) denotes the gradient of the predicted log probability of the action with respect to the parameters theta

2. The lecture defines a pseudo-loss. By auto-differentiating the pseudo-loss, we recover the policy gradient.

Here Q_hat is shorthand for the sum of r(s_i_t, a_i_t) in the policy gradient equation under 1).

3. The lecture then proceeds to give a pseudo-code implementation of 2).

My confusion

From 1) above,

gradient log(pi_theta(a | s)) denotes the gradient of the predicted log probability of the action with respect to the parameters theta, not a loss value calculated from a label action and a predicted action.

Why does the snippet below in 2) imply that gradient log(pi_theta(a | s)) just morphs into the output of a loss function, instead of just the predicted action probability as defined in 1)?

2.

In this pseudo-code implementation,

In particular, this line below:

negative_likelihoods = tf.nn.softmax_cross_entropy_with_logits(labels=actions, logits=logits)

Where do the actions even come from? If they come from the collected trajectories, aren't the actions the result of logits = policy.predictions(states) to begin with? Then won't tf.nn.softmax_cross_entropy_with_logits(labels=actions, logits=logits) always return 0?

3. Based on the definition of the policy gradient in 1), shouldn't the implementation of the pseudo-loss be like the one below?

# Given:
# actions - (N*T) x Da tensor of actions
# states - (N*T) x Ds tensor of states
# q_values - (N*T) x 1 tensor of estimated state-action values
# Build the graph:
logits = policy.predictions(states)  # this should return a (N*T) x Da tensor of action logits
weighted_predicted_probability = tf.multiply(tf.nn.softmax(logits), q_values)
loss = tf.reduce_mean(weighted_predicted_probability)
gradients = tf.gradients(loss, variables)

",44580,,,,,2/11/2021 20:18,Confusion about computing policy gradient with automatic differentiation ( material from Berkeley CS285),,0,1,,,,CC BY-SA 4.0 26344,1,,,2/11/2021 22:27,,3,47,"

I am currently working on undergraduate research to determine hotspots for hand-surface contact. Ideally, I would like to give the model a depth image as input:

Example of synthetic depth image

and return an image mask indicating where the surface was touched:

Example of synthetic contact mask

I have worked with Machine Learning before but am struggling to determine what model I should use. My understanding is that CNNs are typically intended for classification tasks. And while GANs are used to generate new images, they can produce these images independently of an input. Assuming I have a large dataset of depth images and the respective black and white contact mask, what model can be used to efficiently predict a contact mask given an unseen depth image?

",44585,,11539,,2/15/2021 14:05,2/15/2021 14:05,"Best Machine Learning Model for ""Predicted"" Image Generation",,0,0,,,,CC BY-SA 4.0 26345,1,,,2/11/2021 23:36,,1,576,"

I have read many papers, such as this or this, explaining how external sampling works, but I still don't understand how the algorithm works.

I understand that you divide $Q$, which is the set of all terminal histories, into subsets $Q_1,..., Q_n$.

What is the probability of reaching some subset $Q_i$? Is it just the product of chance probability, the opponent's probability, and my probability?

As I understand it, the sampling only occurs in the opponent's information sets. How does that work? If there are two players, player 1's strategy is based on what strategy I use.

What happens after you have determined a subset $Q_i$ you want to sample? How many times do you iterate over the subset $Q_i$?

I have searched around and I cannot find any Python code that uses external sampling, only plenty of papers that give formulas without explaining the algorithm in detail. So, a Python example of MC-CFR external sampling would probably make it a lot easier for me to understand the algorithm.

",44587,,2444,,2/12/2021 16:56,2/12/2021 21:26,How exactly is Monte Carlo counterfactual regret minimization with external sampling implemented?,,1,1,,,,CC BY-SA 4.0 26346,1,26347,,2/11/2021 23:41,,1,165,"

I'm working on sentiment analysis, using Hugging Face to perform sentiment analysis on articles:

 from transformers import pipeline

 classifier = pipeline('sentiment-analysis', model="nlptown/bert-base-multilingual-uncased-sentiment")
 classifier(['We are very happy to show you the 🤗 Transformers library.',  "We hope you don't hate it."])

This returns

label: POSITIVE, with score: 0.9998

label: NEGATIVE, with score: 0.5309

Now I'm trying to understand how to keep track of a subject when performing the sentiment analysis.

Suppose I'm given a sentence like this.

StackExchange is a great website. It helps users answer questions. Hopefully, someone will help answer this question.

I would like to keep track of the subject when performing sentiment analysis. In the example above, in the 2nd sentence, 'it' refers to 'StackExchange'. I would like to be able to track a subject across sentences.

Now, I could try to manually parse this by finding the verb and then finding the phrase that comes before it. However, that doesn't sound like a very safe or accurate way to find the subject.

Alternatively, I could train something similar to a named entity recognition model. However, finding a dataset for this is very hard, and training it would be very time-consuming.

How can I keep track of an entity within an article?

",44588,,2444,,2/12/2021 16:37,2/12/2021 16:37,How to keep track of the subject/entity in a sentence?,,1,0,,,,CC BY-SA 4.0 26347,2,,26346,2/12/2021 2:06,,2,,"

What you're describing is known as coreference resolution. More specifically, this example is anaphora resolution. The short answer is that this is an open research question and there is no well-established solution.

You mentioned Hugging Face in your question. The neuralcoref module in spaCy is itself from Hugging Face (note the reflexive anaphor used for emphasis in this sentence). If you're not a spaCy kind of person, then there's also Stanford's CoreNLP in Java that has coreference resolution. A Python wrapper is also available.
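As a rough sketch of the neuralcoref usage (from memory, so check the project's README for the exact API; at the time of writing, neuralcoref targets spaCy 2.x, not spaCy 3):

import spacy
import neuralcoref

nlp = spacy.load('en_core_web_sm')
neuralcoref.add_to_pipe(nlp)

doc = nlp("StackExchange is a great website. It helps users answer questions.")
print(doc._.has_coref)        # True if at least one coreference cluster was found
print(doc._.coref_clusters)   # e.g. a cluster linking "StackExchange" and "It"
print(doc._.coref_resolved)   # text with each mention replaced by its main mention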

I also wanted to address a couple of other topics you mentioned. You are right in that they're all somewhat connected, but you need to scope down your goal/research question, because what you're aiming to achieve is too difficult for a first task. Named entity recognition, constituency parsing, sentiment analysis: pick just one to focus on.

",19703,,,,,2/12/2021 2:06,,,,2,,,,CC BY-SA 4.0 26348,2,,26342,2/12/2021 2:24,,1,,"

I suggest you look into link prediction. I have had good luck with the StellarGraph library. They have several algorithms implemented, including GCN.

Link prediction is a binary classification problem. Given two nodes, $v_i$ and $v_j$, does there exist a link between them? Using a library like StellarGraph will also produce node embeddings while performing link prediction.

For your scenario, I'm picturing a three-step process:

  1. Link prediction and node embeddings on $G$.
  2. Link prediction and node embeddings on $G'$.
  3. Link prediction reusing existing embeddings, where each link is between the two graphs. So each link is a tuple of the form $(v_i, v_j')$, where $v_i \in V$ and $v_j' \in V'$. If no links were predicted from any $v_i$ to a given $v_j'$, then that might suggest removing $v_j'$.

In the link prediction tasks that I've outlined, you can use GCN with StellarGraph. So there should be no problems in terms of the number of nodes.

",19703,,19703,,2/12/2021 22:14,2/12/2021 22:14,,,,1,,,,CC BY-SA 4.0 26349,1,26355,,2/12/2021 8:37,,2,76,"

Let's say we have a neural network that was trained with a dataset $D$ to solve some task. Would it be possible to "reverse-engineer" this neural network and get a vague idea of the dataset $D$ it was trained on?

",44598,,2444,,2/12/2021 16:15,2/12/2021 16:23,Would it be possible to determine the dataset a neural network was trained on?,,1,0,,,,CC BY-SA 4.0 26352,1,,,2/12/2021 13:25,,3,539,"

After having read Williams (1992), where it was suggested that actually both the mean and standard deviation can be learned while training a REINFORCE algorithm on generating continuous output values, I assumed that this would be common practice nowadays in the domain of Deep Reinforcement Learning (DRL). In the supplementary material associated with the paper introducing Trust Region Policy Optimization (TRPO), however, it is stated that:

A neural network with several fully-connected (dense) layers maps from the input features to the mean of a Gaussian distribution. A separate set of parameters specifies the log standard deviation of each element. More concretely, the parameters include a set of weights and biases for the neural network computing the mean, $\{W_i , b_i\}_{i=1}^L$ , and a vector $r$ (log standard deviation) with the same dimension as $a$. Then, the policy is defined by the normal distribution $\mathcal{N}(\text{mean}=\text{NeuralNet}(s; \{W_i , b_i\}_{i=1}^L), \text{stdev}=\text{exp}(r))$.

where $s$ refers to a state and $a$ to a predicted action (respectively a vector of actions if multiple outputs are generated concurrently).

To me this suggests that the standard deviation stdev (being a function of $r$) is actually not learned when training a TRPO agent, but that it is solely determined by some possibly constant vector $r$.

Since I found the idea of adjusting both the mean and standard deviation together when training a REINFORCE agent quite reasonable, I got wondering whether it is actually true that TRPO agents do not treat the standard deviation for sampling output values as a trainable parameter, but just as a function of the state-independent vector $r$. (Pretty much the same shall then apply to Proximal Policy Optimization (PPO) agents as well, since they are reported to follow TRPO's model architecture in the continuous output case.)

In search for an answer, I browsed OpenAI's baselines repository containing reference implementations of both TRPO and PPO. In my understanding of their code, the code seems to confirm my assumption that standard deviation is a non-trainable parameter and that it is, instead of being trainable, taken to be a constant.

Now, I was wondering whether my understanding of the procedure how TRPO (and PPO) computes standard deviation(s) is correct or whether I misunderstood or overlooked something important here.

",37982,,,,,10/18/2022 8:00,Is (log-)standard deviation learned in TRPO and PPO or fixed instead?,,1,2,,,,CC BY-SA 4.0 26353,1,26364,,2/12/2021 15:50,,0,85,"

There were some posts showing that an RNN can predict the next point of a sine wave function from its data history.

However, I wonder whether it also works on other functions of $x$, such as $x^2$, $x^3$, $\log(x)$, or $\frac{1}{x+1}$.

",44610,,11539,,2/13/2021 23:22,2/13/2021 23:22,"Is it possible to predict $x^2$, $\log(x)$, or variable function of $x$ using RNN?",,1,0,,,,CC BY-SA 4.0 26354,1,,,2/12/2021 16:21,,1,72,"

In the ICLR 2016 paper BlackOut: Speeding up Recurrent Neural Network Language Models with very Large Vocabularies, on page 3, for eq. 4:

$$ J_{ml}^s(\theta) = log \ p_{\theta}(w_i | s) $$

They have shown the gradient computation in the subsequent eq. 5: $$ \frac{\partial J_{ml}^s(\theta)}{\partial \theta} = \frac{\partial}{\partial \theta}<\theta_i \cdot s> - \sum_{j=1}^V p_{\theta}(w_j|s)\frac{\partial}{\partial \theta} <\theta_j \cdot s>$$


I am not able to understand how they have obtained this - I have tried to work it out as follows:

from eq. 3 we have

$$ p_{\theta}(w_i|s) = \frac{exp(<\theta_i \cdot s>)}{\sum_{j=1}^V exp(<\theta_j \cdot s>)} $$

re-writing eq. 4, we have:

$$\begin{eqnarray} J_{ml}^s(\theta) &=& log \ \frac{exp(<\theta_i \cdot s>)}{\sum_{j=1}^V exp(<\theta_j \cdot s>)} \nonumber \\ &=& log \ exp(<\theta_i \cdot s>) - log \ \sum_{j=1}^V exp(<\theta_j \cdot s>) \nonumber \nonumber \end{eqnarray}$$

Now, taking derivatives w.r.t. $ \theta $:

$$\begin{eqnarray} \frac{\partial}{\partial \theta} J_{ml}^s(\theta) &=& \frac{\partial}{\partial \theta} log \ exp(<\theta_i \cdot s>) - \frac{\partial}{\partial \theta} log \ \sum_{j=1}^V exp(<\theta_j \cdot s>) \nonumber \nonumber \end{eqnarray}$$


So, that's it: how did the second term (after the negative sign) change into the term they have given in eq. 5? Or did I commit a blunder?


Update

I did commit a blunder and I have edited it out, but the question remains!

correct property: $$log \ (\prod_{i=1}^K x_i) = \sum_{i=1}^K log \ (x_i)$$

",33781,,33781,,2/16/2021 11:09,2/16/2021 11:09,BlackOut - ICLR 2016: need help understanding the cost function derivative,,0,0,,,,CC BY-SA 4.0 26355,2,,26349,2/12/2021 16:23,,4,,"

You can already do this with some neural networks, such as GANs and VAEs, which are generative models that learn a probability distribution over the inputs, so they learn how to produce e.g. images that are similar to the images they were trained with.

Now, if you're interested in whether there is a black-box method, i.e. a method that, for every possible neural network, would tell you the dataset a neural network was trained with, that seems to be a harder task and definitely an ill-posed problem, but I suspect that people working on adversarial machine learning have already attempted or will attempt to do something similar.

",2444,,,,,2/12/2021 16:23,,,,0,,,,CC BY-SA 4.0 26361,2,,26345,2/12/2021 21:26,,1,,"

External sampling and outcome sampling are two ways of defining the sets $Q_1, \dots, Q_n$. I think your mistake is that you think of the $Q_i$ as fixed and taken as input in these sampling schemes. That is not the case.

In external sampling, there are as many sets $Q_{\tau}$ as there are pure strategies for the opponent and the chance player (a pure strategy is a deterministic policy). Think of it as "the set of terminal nodes that I can reach if my opponent and chance play in this fixed way".

Sampling a set $Q_{\tau}$ thus means sampling a pure strategy for the opponent and the chance node. An alternative is to sample the opponent's policy $\sigma^{-i}$ (and chance) on the fly, when needed. It gives identical probabilities for $q(z)$, while it does not correspond to a full definition of a deterministic policy $\tau$ (which would need to be defined on all the information nodes).

There are several implementations on Github, one can be found here: https://github.com/bakanaouji/cpp-cfr/blob/master/RegretMinimization/Trainer/Trainer.cpp

",43682,,,,,2/12/2021 21:26,,,,2,,,,CC BY-SA 4.0 26362,2,,22897,2/13/2021 0:00,,1,,"

So, for neural style transfer using the particular method described in Gatys' paper, nobody has done better than using the VGG net. This is seemingly due to VGG's inherent stability and its inability to learn non-robust features of images. More on this here: https://reiinakano.com/2019/06/21/robust-neural-style-transfer.html

That being said, GANs have had huge success in the field of style transfer, getting much better results than the neural style transfer method described in the paper you mentioned. CycleGAN is one of the best in this respect: https://machinelearningmastery.com/what-is-cyclegan/

",44620,,,,,2/13/2021 0:00,,,,0,,,,CC BY-SA 4.0 26364,2,,26353,2/13/2021 3:13,,-2,,"

Yes, an RNN can work on the functions you have mentioned. In fact, neural networks can approximate any continuous function on a compact domain arbitrarily well (the universal approximation theorem). This question also reminds me of the Neural Turing Machine.

But, it would be a complete waste to use RNNs or NNs for such a task.

",37203,,,,,2/13/2021 3:13,,,,6,,,,CC BY-SA 4.0 26365,2,,26335,2/13/2021 3:19,,1,,"

No, it does not take the curvature into account. But, if curvature is important to you, then it would be a good idea to look at Ricci flow and its applications in neural networks.

",37203,,,,,2/13/2021 3:19,,,,0,,,,CC BY-SA 4.0 26366,1,26400,,2/13/2021 9:53,,7,1252,"

I found the following PyTorch code (from this link)

-0.5 * torch.sum(1 + sigma - mu.pow(2) - sigma.exp())

where mu is the mean parameter that comes out of the model and sigma is the sigma parameter out of the encoder. This expression is apparently equivalent to the KL divergence. But I don't see how this calculates the KL divergence for the latent.

",30885,,2444,,6/5/2022 9:03,6/5/2022 9:03,How is this Pytorch expression equivalent to the KL divergence?,,2,0,,,,CC BY-SA 4.0 26371,2,,25819,2/13/2021 17:11,,0,,"

The solution to my problem was implementing batch renormalization: BatchNormalization(renorm=True). In addition, normalizing the inputs helped a lot in improving the overall performance of the neural network.
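As a minimal sketch of the two changes (tf.keras; the layer sizes and the random stand-in data are arbitrary placeholders, not my actual model):

import numpy as np
import tensorflow as tf

x_train = np.random.rand(256, 20).astype("float32")   # stand-in for the real inputs
x_train = (x_train - x_train.mean(axis=0)) / (x_train.std(axis=0) + 1e-8)  # input normalization

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.BatchNormalization(renorm=True),   # batch renormalization
    tf.keras.layers.Dense(1),
])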

",43887,,,,,2/13/2021 17:11,,,,0,,,,CC BY-SA 4.0 26375,1,,,2/13/2021 21:41,,0,285,"

So the idea is to feed a neural network data like:

input: mono audio (extracted from existing 3D audio); output: 3D audio

After training, it should be able to convert mono audio to 3D sound.

Do you think this is possible? Has it already been implemented? (I didn't find anything.)

P.S. It should sound like https://www.youtube.com/watch?v=kVH_y0rOyGM,

not like the usual 3D: youtube.com/watch?v=QFaSIti5_d0

",44634,,44634,,2/14/2021 19:21,2/14/2021 19:21,Is it possible to transform audio with neural networks to make it sound like 3d sound,,0,8,,,,CC BY-SA 4.0 26376,1,,,2/13/2021 22:03,,1,227,"

In the AlphaZero learning algorithm, during self-play to generate training games, the move played is chosen with probability proportional to the MCTS visit counts raised to the power $1/\tau$, where $\tau$ is the so-called temperature. Higher temperatures correspond to more exploration. It seems that in DeepMind's original paper (on AlphaGo Zero, if I'm not mistaken) it is mentioned that the temperature is decayed to zero after move 30 in Go/Baduk; then this is contradicted in the AlphaZero paper, which says that the temperature is not decayed at all; and finally, in AlphaZero's pseudocode, I believe it is implied that the temperature is decayed after some number of moves. Specifically, I believe that lczero concluded that they should decay it after 15 moves for chess. It's not clear to me, after searching, what the current training regime for lczero is with regard to temperature. Also, I believe that the ELF OpenGo effort used $\tau=1$ for the entire game.

Question: Is there a consensus on what $\tau$ should be? Does it matter if the training is in early phases or not (i.e. if the AI is not advanced yet is it beneficial to explore seemingly "worse" moves?) How dependent on the game is this optimal $\tau$? If I have a game which lasts 50 moves average, and I want to decay $\tau$, is there a best practice?

",44635,,,,,2/13/2021 22:03,"What is the consensus on the ""correct"" temperature settings for the AlphaZero algorithm?",,0,0,,,,CC BY-SA 4.0 26378,1,,,2/14/2021 7:44,,2,144,"

I found a naive Bayes classifier for positive or negative sentiment in Citius: A Naive-Bayes Strategy for Sentiment Analysis on English Tweets. But, with most datasets available online, sentiments are classified into 3 types: positive, negative, and neutral.

How does the naive Bayes formula change for such cases? Or does it remain the same, and we only consider the positive and negative classes to calculate the log-likelihoods?

",43993,,43993,,3/8/2021 13:57,3/8/2021 13:57,"How can I apply naive Bayes classifier for three classes (Positive, Negative and Neutral) in text data?",,1,2,,,,CC BY-SA 4.0 26379,1,,,2/14/2021 8:22,,2,125,"

From the architecture table of the first MobileNet paper, a depthwise convolution with stride 2 and an input of 7x7x1024 is followed by a pointwise convolution with the same input dimensions, 7x7x1024.

Shouldn't the pointwise layer's input be 4x4x1024 if the depthwise conv. layer has stride 2? (Assuming a padding of 1.)

Is this an error on the authors' side? Or is there something that I've missed between these layers? I've checked implementations of MobileNet V1, and it seems that everyone just treated this depthwise layer's stride as 1.

",44643,,,,,2/14/2021 8:22,Error in MobileNet V1 Architecture?,,0,3,,,,CC BY-SA 4.0 26382,1,,,2/14/2021 20:54,,0,259,"

I am trying to learn about reinforcement learning and chose the stock market to experiment with. I have minute-by-minute historical data on a particular stock for the past 20 years. I am using a generator to feed the data into my DQN. I've been running some automated tuning on the hyperparameters and seem to have found some good values.

Now I am wondering if I should be training on the dataset more than once or whether that would cause the network to simply memorize past experiences and cause overfitting. Is there a standard practice when it comes to training on historical data in regards to the number of epochs?

Edit: I'm not necessarily looking for an answer to how many epochs I should be using; rather, I'd like to know whether running over the same data more than once is okay with DQNs.

",44335,,44335,,2/14/2021 20:59,2/22/2021 15:33,Can I train a DQN on the same dataset for multiple epochs?,,1,4,,,,CC BY-SA 4.0 26384,1,,,2/14/2021 21:56,,1,433,"

I know I can make a VAE do generation with a mean of 0 and std-dev of 1. I tested it with the following loss function:

def loss(self, data, reconst, mu, sig):
    rl = self.reconLoss(reconst, data)
    #dl = self.divergenceLoss(mu, sig)
    std = torch.exp(0.5 * sig)
    compMeans = torch.full(std.size(), 0.0)
    compStd = torch.full(std.size(), 1.0)
    dl = kld(mu, std, compMeans, compStd)
    totalLoss = self.rw * rl + self.dw * dl
    return (totalLoss, rl, dl)

def kld(mu1, std1, mu2, std2):
    p = torch.distributions.Normal(mu1, std1)
    q = torch.distributions.Normal(mu2, std2)
    return torch.distributions.kl_divergence(p, q).mean()

In this case, mu and sig are from the latent vector, and reconLoss is MSE. This works well, and I am able to generate MNIST digits by feeding in noise from a standard normal distribution.

However, I'd now like to concentrate the distribution at a normal distribution with std-dev of 1 and mean of 10. I tried changing it like this:

compMeans = torch.full(std.size(), 10.0)

I did the same change in reparameterization and generation functions. But what worked for the standard normal distribution is not working for the mean = 10 normal one. Reconstruction still works fine but generation does not, only producing strange shapes. Oddly, the divergence loss is actually going down too, and reaching a similar level to what it reached with standard normal.

Does anyone know why this isn't working? Is there something about KL that does not work with non-standard distributions?

Other things I've tried:

  • Generating from 0,1 after training on 10,1: failed
  • Generating on -10,1 after training on 10,1: failed
  • Custom version of KL divergence: worked on 0,1. failed on 10,1
  • Using sigma directly instead of std = torch.exp(0.5 * sig): failed

Edit 1: Below are my loss plots with 0,1 distribution. Reconstruction:

Divergence:

Generation samples:

Reconstruction samples (left is input, right is output):

And here are the plots for 10,1 normal distribution.

Reconstruction:

Divergence:

Generation sample:

Note: when I ran it this time, it actually seemed to learn the generation a bit, though it's still printing mostly 8's or things that are nearly an 8 by structure. This is not the case for the standard normal distribution. The only difference from last run is the random seed.

Reconstruction sample:

Sampled latent:

tensor([[ 9.6411,  9.9796,  9.9829, 10.0024,  9.6115,  9.9056,  9.9095, 10.0684,
         10.0435,  9.9308],
        [ 9.8364, 10.0890,  9.8836, 10.0544,  9.4017, 10.0457, 10.0134,  9.9539,
         10.0986, 10.0434],
        [ 9.9301,  9.9534, 10.0042, 10.1110,  9.8654,  9.4630, 10.0256,  9.9237,
          9.8614,  9.7408],
        [ 9.3332, 10.1289, 10.0212,  9.7660,  9.7731,  9.9771,  9.8550, 10.0152,
          9.9879, 10.1816],
        [10.0605,  9.8872, 10.0057,  9.6858,  9.9998,  9.4429,  9.8378, 10.0389,
          9.9264,  9.8789],
        [10.0931,  9.9347, 10.0870,  9.9941, 10.0001, 10.1102,  9.8260, 10.1521,
          9.9961, 10.0989],
        [ 9.5413,  9.8965,  9.2484,  9.7604,  9.9095,  9.8409,  9.3402,  9.8552,
          9.7309,  9.7300],
        [10.0113,  9.5318,  9.9867,  9.6139,  9.9422, 10.1269,  9.9375,  9.9242,
          9.9532,  9.9053],
        [ 9.8866, 10.1696,  9.9437, 10.0858,  9.5781, 10.1011,  9.8957,  9.9684,
          9.9904,  9.9017],
        [ 9.6977, 10.0545, 10.0383,  9.9647,  9.9738,  9.9795,  9.9165, 10.0705,
          9.9072,  9.9659],
        [ 9.6819, 10.0224, 10.0547,  9.9457,  9.9592,  9.9380,  9.8731, 10.0825,
          9.8949, 10.0187],
        [ 9.6339,  9.9985,  9.7757,  9.4039,  9.7309,  9.8588,  9.7938,  9.8712,
          9.9763, 10.0186],
        [ 9.7688, 10.0575, 10.0515, 10.0153,  9.9782, 10.0115,  9.9269, 10.1228,
          9.9738, 10.0615],
        [ 9.8575,  9.8241,  9.9603, 10.0220,  9.9342,  9.9557, 10.1162, 10.0428,
         10.1363, 10.3070],
        [ 9.6856,  9.7924,  9.9174,  9.5064,  9.8072,  9.7176,  9.7449,  9.7004,
          9.8268,  9.9878],
        [ 9.8630, 10.0470, 10.0227,  9.7871, 10.0410,  9.9470, 10.0638, 10.1259,
         10.1669, 10.1097]])

Note, this does seem to be in the right distribution.

Just in case, here's my reparameterization method too. Currently with 10,1 distribution:

def reparamaterize(self, mu, sig):
        std = torch.exp(0.5 * sig)
        epsMeans = torch.full(std.size(), 10.0)
        epsStd = torch.full(std.size(), 1.0)
        eps = torch.normal(epsMeans, epsStd)
        return eps * std + mu
",23803,,23803,,2/19/2021 0:16,2/19/2021 0:16,Why does the VAE using a KL-divergence with a non-standard mean does not produce good images?,,0,6,,,,CC BY-SA 4.0 26388,1,,,2/15/2021 8:36,,3,500,"

ConvNet stands for Convolutional Networks and CNN stands for Convolutional Neural Networks.

Is there any difference between both?

If yes, then what is it?

If no, is there any reason behind using ConvNet at some places and CNN at some other places in literature?

",18758,,,,,2/16/2021 16:07,Is there any difference between ConvNet and CNN?,,1,1,,,,CC BY-SA 4.0 26391,2,,26388,2/15/2021 11:31,,7,,"

Both terms just mean convolutional neural network. I don't believe there is any particular reason to choose one over the other: ConvNet is slightly easier to say out loud and CNN is slightly shorter to write, but there is absolutely no difference in meaning.

For some contrasting examples in the literature, the EfficientNet paper chooses the term ConvNet and this paper on AlexNet chooses CNN. They are talking about the same sorts of neural networks though!

",44413,,,,,2/15/2021 11:31,,,,0,,,,CC BY-SA 4.0 26392,1,,,2/15/2021 12:17,,0,137,"

I have about 2000 items in my validation set. Would it be reasonable to calculate the loss/error after each epoch on just a subset instead of the whole set, if evaluating the whole dataset is very slow?

Would taking random mini-batches to calculate the loss be a good idea, given that the network then wouldn't be evaluated on a constant set? Should I just shrink the size of my validation set?

",43270,,2444,,2/15/2021 12:24,2/15/2021 14:26,Is it okay to calculate the validation loss over batches instead of the whole validation set for speed purposes?,,1,1,,,,CC BY-SA 4.0 26393,2,,26392,2/15/2021 14:26,,1,,"

I assume you intended to write "compute the evaluation metric over the validation set in batches"; you do not compute the loss over the validation set!

That is quite a standard practice in many academic implementations (because, when the validation set is large enough, memory becomes a constraint); however, be sure to take the average of the values over all the batches. Using a K-fold setup will increase the confidence in the reported values.
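For example, a minimal PyTorch-style sketch (the names are illustrative) of batched evaluation with a size-weighted average:

import torch

def evaluate(model, val_loader, metric_fn, device="cpu"):
    # Evaluate over the full validation set in batches and return the
    # size-weighted average, so a smaller last batch does not bias the result.
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in val_loader:
            x, y = x.to(device), y.to(device)
            value = metric_fn(model(x), y)
            total += value.item() * x.size(0)
            n += x.size(0)
    return total / n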

",33781,,,,,2/15/2021 14:26,,,,0,,,,CC BY-SA 4.0 26395,1,26396,,2/15/2021 15:20,,5,929,"

Is it practical/affordable to train an AlphaZero/MuZero engine using a residential gaming PC, or would it take thousands of years of training for the AI to learn enough to challenge humans?

I'm having trouble wrapping my head around how much computing power '4 hours of Google DeepMind training' equates to for my residential computer running 24/7 trying to build a trained AI.

Basically, are AlphaZero or MuZero practical for indie board games that want a state of the art AI, or is it too expensive to train?

",44682,,2444,,2/15/2021 22:35,7/23/2022 7:47,Is it practical to train AlphaZero or MuZero (for indie games) on a personal computer?,,2,0,,,,CC BY-SA 4.0 26396,2,,26395,2/15/2021 17:28,,3,,"

The vast majority of neural networks are now trained on graphics processing units (GPUs) or specialised accelerator hardware such as tensor processing units (TPUs).

In Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm, Silver et al. say that the training process involved 5,000 first-generation TPUs generating self-play games and 64 second-generation TPUs for training. This is certainly far beyond what any practical gaming computer is likely to achieve, as you'll likely only have one GPU, and that might not even rival a single TPU. Training on the CPU will be substantially slower again than either a GPU or TPU. Training would be orders of magnitude slower; you might find these benchmarks by Wang et al. of interest.

",44413,,,,,2/15/2021 17:28,,,,1,,,,CC BY-SA 4.0 26398,2,,24087,2/15/2021 22:25,,1,,"

Neural networks can approximate all Taylor series polynomials, meaning an NN is a generalized linear model. Most functions $f(y)$ can be approximated with neural networks. However, many matrix operations, like determinants, cannot be generalized for a neural network to solve. Operations like rotation, scale, and transform also cannot be generalized.

You can solve all ordinary least squares formulas and general linear model formulas using a neural network. Thus, most $n$-th order polynomial curve fitting can be solved by a neural network: splines, B-splines, and NURBS can be solved by a neural network.

",44679,,,,,2/15/2021 22:25,,,,0,,,,CC BY-SA 4.0 26400,2,,26366,2/16/2021 4:17,,6,,"

The code is correct. Since OP asked for a proof, one follows.

The usage in the code is straightforward if you observe that the authors are using the symbols unconventionally: sigma is the natural logarithm of the variance, where usually a normal distribution is characterized in terms of a mean $\mu$ and variance. Some of the functions in OP's link even have arguments named log_var.$^*$

If you're not sure how to derive the standard expression for KL Divergence in this case, you can start from the definition of KL divergence and crank through the arithmetic. In this case, $p$ is the normal distribution given by the encoder and $q$ is the standard normal distribution. $$\begin{align} D_\text{KL}(P \| Q) &= \int_{-\infty}^{\infty} p(x) \log\left(\frac{p(x)}{q(x)}\right) dx \\ &= \int_{-\infty}^{\infty} p(x) \log(p(x)) dx - \int_{-\infty}^{\infty} p(x) \log(q(x)) dx \end{align}$$ The first integral is recognizable as almost definition of entropy of a Gaussian (up to a change of sign). $$ \int_{-\infty}^{\infty} p(x) \log(p(x)) dx = -\frac{1}{2}\left(1 + \log(2\pi\sigma_1^2) \right) $$ The second one is more involved. $$ \begin{align} -\int_{-\infty}^{\infty} p(x) \log(q(x)) dx &= \frac{1}{2}\log(2\pi\sigma_2^2) - \int p(x) \left(-\frac{\left(x - \mu_2\right)^2}{2 \sigma_2^2}\right)dx \\ &= \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\mathbb{E}_{x\sim p}[x^2] - 2 \mathbb{E}_{x\sim p}[x]\mu_2 +\mu_2^2} {2\sigma_2^2} \\ &= \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\sigma_1^2 + \mu_1^2-2\mu_1\mu_2+\mu_2^2}{2\sigma_2^2} \\ &= \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} \end{align} $$ The key is recognizing this gives us a sum of several integrals, and each can apply the law of the unconscious statistician. Then we use the fact that $\text{Var}(x)=\mathbb{E}[x^2]-\mathbb{E}[x]^2$. The rest is just rearranging.

Putting it all together: $$ \begin{align} D_\text{KL}(P \| Q) &= -\frac{1}{2}\left(1 + \log(2\pi\sigma_1^2) \right) + \frac{1}{2}\log(2\pi\sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} \\ &= \log (\sigma_2) - \log(\sigma_1) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2\sigma_2^2} - \frac{1}{2} \end{align} $$

In this special case, we know that $q$ is a standard normal, so $$ \begin{align} D_\text{KL}(P \| Q) &= -\log \sigma_1 + \frac{1}{2}\left(\sigma_1^2 + \mu_1^2 - 1 \right) \\ &= - \frac{1}{2}\left(1 + 2\log \sigma_1- \mu_1^2 -\sigma_1^2 \right) \end{align} $$ In the case that we have a $k$-variate normal with diagonal covariance for $p$, and a multivariate normal with covariance $I$, this is the sum of $k$ univariate normal distributions because in this case the distributions are independent.

The code is a correct implementation of this expression because $\log(\sigma_1^2) = 2 \log(\sigma_1)$ and in the code, sigma is the logarithm of the variance.
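As a quick numerical sanity check (not part of the original snippet), you can compare the closed form against torch.distributions:

import torch

mu = torch.randn(5)
log_var = torch.randn(5)   # "sigma" in the quoted code is the log-variance

closed_form = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())

p = torch.distributions.Normal(mu, torch.exp(0.5 * log_var))    # encoder distribution
q = torch.distributions.Normal(torch.zeros(5), torch.ones(5))   # standard normal prior
reference = torch.distributions.kl_divergence(p, q).sum()

print(torch.allclose(closed_form, reference))   # True, up to floating-point error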


$^*$The reason that it's convenient to work on the scale of the log-variance is that the log-variance can be any real number, but the variance is constrained to be non-negative by definition. It's easier to perform optimization on the unconstrained scale than it is to work on the constrained scale of $\sigma^2$. Also, we want to avoid "round-tripping," where we compute $\exp(y)$ in one step and then $\log(\exp(y))$ in a later step, because this incurs a loss of precision. In any case, autograd takes care of all of the messy details with adjustments to gradients resulting from moving from one scale to another.

",21739,,21739,,2/18/2021 3:37,2/18/2021 3:37,,,,2,,,,CC BY-SA 4.0 26403,1,26404,,2/16/2021 8:51,,3,263,"

An MDP is a Markov reward process with decisions; it's an environment in which all states are Markov. This is what we want to solve. An MDP is a tuple $(S, A, P, R, \gamma)$, where $S$ is our state space, $A$ is a finite set of actions, $P$ is the state transition probability function,

$$P_{ss'}^a = \mathbb{P}[S_{t+1} = s' | S_t = s, \hspace{0.1cm}A_t = a] \label{1}\tag{1}$$

and

$$R_s^a = \mathbb{E}[R_{t+1}| S_t =s, A_t = a]$$

and a discount factor $\gamma$.

This can be seen as a linear equation in $|S|$ unknowns, which is given by,

$$V = R + \gamma PV \hspace{1mm} \label{2}\tag{2}$$

$V$ is the vector of state values, $R$ is the immediate reward vector, and $P$ is the transition probability matrix, where each element at $(i,j)$ in $P$ is given by $P[i][j] = P(i \mid j)$, i.e. the probability of transitioning from state $j$ to state $i$.

As $P$ is given, we treat equation \ref{2} as a linear equation in $V$. But $P[i][j] = \sum_a \pi(a \mid j) \, p(i \mid j, a)$, and $\pi(a \mid s)$ (i.e. the probability that I will take action $a$ in state $s$) is NOT given.

So, how can we frame this problem as the solution to a system of linear equations in \ref{2}, if we only know $ P^a_{ss'}$ and we do not know $ \pi(a \mid s)$, which is needed to calculate $P[i][j]$?

",44685,,2444,,2/16/2021 14:47,9/24/2021 6:17,How can we find the value function by solving a system of linear equations without knowing the policy?,,1,0,0,,,CC BY-SA 4.0 26404,2,,26403,2/16/2021 9:12,,3,,"

Your equations all look correct to me.

It is not possible to solve the linear equation for state values in the vector $V$ without knowing the policy.

There are ways of working with MDPs, through sampling of actions, state transitions and rewards, where it is possible to estimate value functions without knowing either $\pi(a|s)$ or $P^{a}_{ss'}$. For instance, Monte Carlo policy evaluation or single-step TD learning can both do this. It is also common to work with $\pi(a|s)$ known but $P^{a}_{ss'}$ and $R^{a}_{s}$ unknown in model-free control algorithms such as Q learning.

However, in your case, you are correct, in order to resolve the simultaneous equations you have presented, you do need to know $\pi(a|s)$

This is not as limiting as you might think. You can construct a control method using simultaneous equations, by starting with the policy set to some arbitrary policy. Either a randomly-chosen deterministic policy or the equiprobable policy are reasonable first guesses. Then, after each solution to linear equations, you improve the policy so that each action choice maximises the expected return. This is essentially the policy iteration algorithm but replacing the policy evaluation step with the linear equations method for calculating the values.
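For example, here is a minimal NumPy sketch of the evaluation step (the MDP is randomly generated and the policy is the equiprobable one; all names are mine): given a fixed policy $\pi$, build $P_\pi$ and $R_\pi$ and solve the linear system $V = R_\pi + \gamma P_\pi V$.

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.RandomState(0)

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.rand(n_states, n_actions)                                 # R[s, a]
pi = np.full((n_states, n_actions), 1.0 / n_actions)              # equiprobable policy

P_pi = np.einsum('sa,sat->st', pi, P)    # state-to-state transition matrix under pi
R_pi = np.einsum('sa,sa->s', pi, R)      # expected immediate reward under pi

V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)   # solves V = R_pi + gamma * P_pi @ V
print(V)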

",1847,,1847,,9/24/2021 6:17,9/24/2021 6:17,,,,2,,,,CC BY-SA 4.0 26405,1,26409,,2/16/2021 9:38,,0,79,"

I am exploring a potential NLP project. I was wondering what is generally done with hashtag words (e.g. #hello). Are those words ignored? Is the # removed and the word tokenised? Is it tokenised together with the #?

",44695,,2444,,9/19/2021 0:46,9/19/2021 0:46,NLP: Are hashtags tokenised?,,1,0,,,,CC BY-SA 4.0 26408,2,,26366,2/16/2021 12:56,,7,,"

This is the analytical form of the KL divergence between two multivariate Gaussian densities with diagonal covariance matrices (i.e. we assume independence). More precisely, it's the KL divergence between the variational distribution

$$ q_{\boldsymbol{\phi}}(\mathbf{z}) = \mathcal{N}\left(\mathbf{z} ; \boldsymbol{\mu}, \mathbf{\Sigma} = \boldsymbol{\sigma}^{2}\mathbf{I}\right) = \frac{\exp \left(-\frac{1}{2}\left(\mathbf{z} - \boldsymbol{\mu}\right)^{\mathrm{T}} \mathbf{\Sigma}^{-1}\left(\mathbf{z}-\boldsymbol{\mu} \right)\right)}{\sqrt{(2 \pi)^{J}\left|\mathbf{\Sigma}\right|}} \tag{1}\label{1} $$

and the prior (it's the same as above, but with mean and covariance equal to the zero vector and the identity matrix, respectively)

$$ p(\mathbf{z})=\mathcal{N}(\mathbf{z} ;\boldsymbol{0}, \mathbf{I}) = \frac{\exp \left(-\frac{1}{2}\mathbf{z}^{\mathrm{T}}\mathbf{z}\right)}{\sqrt{(2 \pi)^{J}}} \tag{2}\label{2} $$

where

  • $\boldsymbol{\mu} \in \mathbb{R}^J$ is the mean vector (we assume column vectors, so $\boldsymbol{\mu}^T$ would be a row vector)
  • $\mathbf{\Sigma} = \boldsymbol{\sigma}^{2}\mathbf{I} \in \mathbb{R}^{J \times J}$ is a diagonal covariance matrix (with the vector $\boldsymbol{\sigma}^{2}$ on the diagonal of the identity)
  • $\mathbf{z} \in \mathbb{R}^J$ is a sample (latent vector) from these Gaussians with dimensionality $J$ (or, at the same time, the input variable of the density)
  • $\left|\mathbf{\Sigma}\right| = \operatorname{det} \mathbf{\Sigma}$ is the determinant (so a number) of the diagonal covariance matrix, which is just the product of the diagonal elements for a diagonal matrix (which is the case); so, in the case of the identity, the determinant is $1$
  • $\boldsymbol{0} \in \mathbb{R}^J$ is the zero vector
  • $\mathbf{I} \in \mathbb{R}^{J \times J}$ is an identity matrix
  • $\mathbf{z}^{\mathrm{T}}\mathbf{z} = \sum_{i=1}^J z_i^2 \in \mathbb{R}$ is the dot product (hence a number)

Now, the (negative of the) KL divergence is defined as follows

\begin{align} -D_{K L}\left(q_{\boldsymbol{\phi}}(\mathbf{z}) \| p(\mathbf{z})\right) &= \int q_{\boldsymbol{\phi}}(\mathbf{z})\left(\log p(\mathbf{z})-\log q_{\boldsymbol{\phi}}(\mathbf{z})\right) d \mathbf{z} \\ &= \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \log p(\mathbf{z})-\log q_{\boldsymbol{\phi}}(\mathbf{z})\right] \label{3}\tag{3} \end{align}

Given that we have logarithms here, let's compute the logarithm of equations \ref{1} and \ref{2}

\begin{align} \log \left( \mathcal{N}\left(\mathbf{z} ; \boldsymbol{\mu}, \mathbf{\Sigma} \right) \right) &= \dots \\ &= -\frac{1}{2}(\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu})-\frac{J}{2} \log (2 \pi)-\frac{1}{2} \log |\mathbf{\Sigma} | \end{align}

and

\begin{align} \log \left( \mathcal{N}(\mathbf{z} ;\boldsymbol{0}, \mathbf{I}) \right) &= \dots \\ &= -\frac{1}{2}\mathbf{z}^{\mathrm{T}} \mathbf{z}-\frac{J}{2} \log (2 \pi) \end{align}

We can now replace these in equation \ref{3} (below, I have already performed some simplifications, to remove verbosity, but you can check them!)

\begin{align} \frac{1}{2} \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ -\mathbf{z}^{\mathrm{T}} \mathbf{z} + (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) + \log |\mathbf{\Sigma} | \right] \tag{4}\label{4} \end{align} Now, given that $\mathbf{\Sigma}$ is diagonal and the log of a product is just a sum of the logarithms, we have $\log |\mathbf{\Sigma} | = \sum_{i=1}^J \log \sigma_{ii}$, so we can continue

\begin{align} \frac{1}{2} \left( - \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) \right] + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \operatorname{tr} \left( \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right) \right] + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \operatorname{tr} \left( \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{\Sigma}^{-1}(\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right] \right) + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \operatorname{tr} \left( \mathbf{\Sigma}^{-1} \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ (\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right] \right) + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + \operatorname{tr} \left( \mathbf{\Sigma}^{-1} \mathbf{\Sigma} \right) + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z}^{\mathrm{T}} \mathbf{z} \right] + J + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \operatorname{tr} \left( \mathbf{z} \mathbf{z}^{\mathrm{T}} \right) \right] + \sum_{i=1}^J 1 + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \operatorname{tr} \left( \mathbf{\Sigma}\right) - \operatorname{tr} \left( \boldsymbol{\mu} \boldsymbol{\mu}^T \right) + \sum_{i=1}^J 1 + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \left( - \sum_{i=1}^J \sigma_{ii} - \sum_{i=1}^J \mu_{i}^2 + \sum_{i=1}^J 1 + \sum_{i=1}^J \log \sigma_{ii} \right) &= \\ \frac{1}{2} \sum_{i=1}^J \left( 1 + \log \sigma_{ii} - \sigma_{ii} - \mu_{i}^2 \right) \end{align}

In the above simplifications, I also applied the following rules: the linearity of the expectation and of the trace, the fact that a scalar equals its own trace, the cyclic property of the trace, $\mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ (\mathbf{z}-\boldsymbol{\mu}) (\mathbf{z}-\boldsymbol{\mu})^{\mathrm{T}} \right] = \mathbf{\Sigma}$, and $\mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z})} \left[ \mathbf{z} \mathbf{z}^{\mathrm{T}} \right] = \mathbf{\Sigma} + \boldsymbol{\mu}\boldsymbol{\mu}^{\mathrm{T}}$.

The official PyTorch implementation of the VAE, which can be found here, also uses this formula. This formula can also be found in Appendix B of the VAE paper, but the long proof that I've just written above is not given there. Note that, in my proof above, $\sigma$ is the variance, which is denoted by $\sigma^2$ in the paper (as is usually the case, the paper writes the variance as the square of the standard deviation $\sigma$; but, again, in my proof above, $\sigma$ is the variance).
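For reference, here is a minimal sketch (not part of the derivation above) of this closed-form KL term, written in the convention used by typical implementations where the encoder outputs the mean mu and the log-variance logvar; the sign is flipped so that the function returns the divergence itself, to be added to the reconstruction loss:

import torch

def kl_divergence(mu, logvar):
    # logvar = log(sigma^2); returns D_KL(q || p), the negative of the quantity derived above
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())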

",2444,,2444,,2/16/2021 14:11,2/16/2021 14:11,,,,0,,,,CC BY-SA 4.0 26409,2,,26405,2/16/2021 16:03,,0,,"

A word including a hashtag needs to be tokenised. It has a special meaning and is given a context. The word by itself has a different meaning.

",44705,,,,,2/16/2021 16:03,,,,1,,,,CC BY-SA 4.0 26411,1,,,2/16/2021 20:43,,1,62,"

If I train a U-Net model for image segmentation (e.g. medical images) until it converges and then add augmentation, can I expect similar results as if I had trained with augmentation from the beginning?

",44711,,,,,12/11/2022 0:05,Late Onset Augmentation,,2,0,,,,CC BY-SA 4.0 26412,1,26656,,2/17/2021 0:05,,5,334,"

In the Alpha Zero paper (https://arxiv.org/pdf/1712.01815.pdf), page 13, the input for the NN is described. At the beginning of the page, the authors state that:

"The input to the Neural Network is an N x X x (MT + L) image stack [...]"

From this, I understand that (for one training example) each input feature is an 8x8 plane. (Technically speaking, every value of every 8x8 plane is a feature, but for the purpose of the question let's suppose that a plane is an input feature).

In the description of the table on top of the image, the following statement is made:

"[...] Counts are represented by a single real-valued input; other input features are represented by a one-hot encoding using thespecified number of binary input planes. [...]"

I understand how they convert the P1 and P2 pieces to one-hot encodings. My questions are:

  1. When they say single real-valued input, since every input feature should be an 8x8 plane, do they mean that they create an 8x8 plane where every entry has the same single real value? For example, for the 'Total move count' plane, if 10 moves had been played in the game so far, would it look like the one below?
  move_count_plane = [[10, 10, 10, 10, 10, 10, 10, 10],
                      [10, 10, 10, 10, 10, 10, 10, 10],
                      [10, 10, 10, 10, 10, 10, 10, 10],
                      [10, 10, 10, 10, 10, 10, 10, 10],
                      [10, 10, 10, 10, 10, 10, 10, 10],
                      [10, 10, 10, 10, 10, 10, 10, 10],
                      [10, 10, 10, 10, 10, 10, 10, 10],
                      [10, 10, 10, 10, 10, 10, 10, 10]]
  2. For the 'Repetitions' plane, is it the same case as above? Do they mean a plane where every value is the number of times a specific board setup has been reached? For example, if a specific position has been reached 2 times, then the repetitions plane for that position would be
  # for a specific timestep in the T=8 step history
  repetitions_plane = [[2, 2, 2, 2, 2, 2, 2, 2],
                       [2, 2, 2, 2, 2, 2, 2, 2],
                       [2, 2, 2, 2, 2, 2, 2, 2],
                       [2, 2, 2, 2, 2, 2, 2, 2],
                       [2, 2, 2, 2, 2, 2, 2, 2],
                       [2, 2, 2, 2, 2, 2, 2, 2],
                       [2, 2, 2, 2, 2, 2, 2, 2],
                       [2, 2, 2, 2, 2, 2, 2, 2]]

? Also, why do they keep 2 repetition planes? Is it one for each player (8 repetition planes for the past T=8 moves for P1, and 8 more repetition planes for the past T=8 moves for P2)?

Thanks in advance.

",44715,,44715,,2/17/2021 0:15,3/4/2021 8:06,Clarifying representation of Neural Nerwork input for Chess Alpha Zero,,1,0,,,,CC BY-SA 4.0 26413,1,26414,,2/17/2021 1:15,,0,63,"

I'm writing some financial tools, I've found highly performant models for question and answering but when it comes to sentiment analysis I haven't found anything that good. I'm trying to use huggingface:

from transformers import pipeline
classifier = pipeline('sentiment-analysis')
print(classifier("i'm good"))
print(classifier("i'm bad")) 
print(classifier("i'm neutral"))
print(classifier("i'm okay")) 
print(classifier("i'm indifferent")) 

Which returns results

[{'label': 'POSITIVE', 'score': 0.999841034412384}]

[{'label': 'NEGATIVE', 'score': 0.9997877478599548}]

[{'label': 'NEGATIVE', 'score': 0.999396026134491}]

[{'label': 'POSITIVE', 'score': 0.9998164772987366}]

[{'label': 'NEGATIVE', 'score': 0.9997762441635132}]

The scores for all of the neutral words come up very high in a positive or negative direction; I would have figured the model would put the scores lower.

I've looked at some of the more fine-tuned models yet they seem to perform the same.

I would assume there would be some pretrained models which could handle these use cases. If not, how can I find neutral sentiments?

",44588,,,,,2/17/2021 4:45,Sentiment analysis does not handle neturals,,1,2,,3/10/2021 13:40,,CC BY-SA 4.0 26414,2,,26413,2/17/2021 4:45,,1,,"

Yes, there is. You can try Spacy. Here you go.

import spacy 
from spacytextblob.spacytextblob import SpacyTextBlob

nlp = spacy.load('en_core_web_sm') 
spacy_text_blob = SpacyTextBlob() 
nlp.add_pipe(spacy_text_blob)

text = "i'm good" 
doc = nlp(text) 
print(doc._.sentiment.polarity) # 0.7

text = "i'm bad"  
doc = nlp(text) 
print(doc._.sentiment.polarity) # -0.6999999999999998

text = "i'm neutral" 
doc = nlp(text) 
print(doc._.sentiment.polarity) # 0.0

text = "i'm okay"  
doc = nlp(text) 
print(doc._.sentiment.polarity) # 0.5

text = "i'm indifferent"  
doc = nlp(text) 
print(doc._.sentiment.polarity) # 0.0
",37203,,,,,2/17/2021 4:45,,,,1,,,,CC BY-SA 4.0 26415,2,,26411,2/17/2021 4:55,,0,,"

It would be better to run experiments on this, but here's an analytical answer.

The models will be different.

With augmentation, the network starts to learn to combat the noise too.

With late onset augmentation, the network will start to deviate from its original solution to combat noise.

Comparing this to a ball rolling down a hill: in the case of augmentation from the start, the ball will roll but will face a bit of friction (the noise due to augmentation). On the other hand, with late onset augmentation, it is as if the ball has already reached the bottom of the hill and now faces an uphill battle to model the noise: it will need to go back up the hill against gravity and the friction due to the noise.

I don't think it would be a good thing to do. But, running experiments is the only way we can confirm.

",37203,,,,,2/17/2021 4:55,,,,0,,,,CC BY-SA 4.0 26417,2,,2548,2/17/2021 5:15,,1,,"

Naive Bayes is a generative algorithm while Perceptron is a discriminative algorithm. That is the main difference.

",37203,,,,,2/17/2021 5:15,,,,0,,,,CC BY-SA 4.0 26418,2,,2548,2/17/2021 7:19,,1,,"

A perceptron is a linear threshold function. That means it has a weight vector $w$, and it outputs $w \cdot x > t$, where $x$ is the input vector and $t$ the threshold.

Naïve Bayes makes the assumption that all features are independent given the class (hence the term naïve). It predicts the most likely class by using Bayesian probability, for each class multiplying the class prior with the probability of the input given the class. The fact that we are modeling $P(X|Y)$ instead of $P(Y|X)$ makes it a generative model. Since we make the strong independence assumption, $P(X|Y)$ factorises as $\prod_i P(x_i|Y)$, and each factor is estimated independently, usually via some sort of maximum likelihood estimation.
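To make the contrast concrete, here is a minimal sketch (not from the original answer) of the two decision rules, assuming binary features, a perceptron with weights w and threshold t, and pre-estimated naive Bayes parameters prior[y] = P(y) and cond[y][i] = P(x_i = 1 | y):

import numpy as np

def perceptron_predict(w, t, x):
    # linear threshold unit: fire if the weighted sum exceeds the threshold
    return int(np.dot(w, x) > t)

def naive_bayes_predict(prior, cond, x):
    # pick the class maximising P(y) * prod_i P(x_i | y), which is proportional to P(y | x)
    scores = {}
    for y in prior:
        likelihood = np.prod([cond[y][i] if xi else 1 - cond[y][i]
                              for i, xi in enumerate(x)])
        scores[y] = prior[y] * likelihood
    return max(scores, key=scores.get)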

",12201,,,,,2/17/2021 7:19,,,,0,,,,CC BY-SA 4.0 26421,1,26422,,2/17/2021 14:06,,3,119,"

How would you explain Federated Learning in simple layman terms for a non-STEM person? What are the main ideas behind Federated Learning?

",30725,,30725,,2/25/2021 6:03,2/25/2021 6:03,What is Federated Learning?,,1,2,,,,CC BY-SA 4.0 26422,2,,26421,2/17/2021 14:58,,1,,"

The analogy is to a federal system of government. In a federation, smaller pieces follow the direction of a higher piece. In federated machine learning, the federation is a collection of smaller computers (e.g. phones or hospital servers) coordinated by a central computer. Instead of sending the raw data to the central computer, each smaller computer trains the model on its own local data; when those computers are done, they return their results (model updates), and the central computer combines them into a single model.

The main idea is distributed, collaborative training without collecting all the data in one place. Some benefits are:

  • Cost: it may be cheaper to operate multiple inexpensive computers rather than fewer more expensive computers.
  • Privacy: if this is sensitive data like healthcare records then perhaps you don't want all of one person's data in a single place where the wrong person can grab it.
",19703,,,,,2/17/2021 14:58,,,,0,,,,CC BY-SA 4.0 26423,1,,,2/17/2021 15:54,,2,81,"

I'm quite new to GANs and I am trying to use a Wasserstein GAN as an augmentation technique. I found this article https://www.sciencedirect.com/science/article/pii/S2095809918301127, and would like to replicate their method of evaluating the GAN. The method is shown in the figure.

In the article they write that they extract the generated samples that fooled the discriminator and use these to train a classifier. They also say that they use a Wasserstein GAN. Does anyone know how it is possible to extract samples that fooled the discriminator, since for a Wasserstein GAN the critic (discriminator) only puts a rating and not a label on the generated data?

",44728,,,,,11/18/2022 15:02,Classifying generated samples with Wasserstein-GAN as real or fake,,1,0,,,,CC BY-SA 4.0 26425,1,26431,,2/17/2021 17:04,,1,63,"

I'm thinking about writing an essay on the comparison between the human nervous system (reaction time) and a neural network that does the same reaction time test. I am very new in this area, so I was wondering if I can build a neural network that can perform a test like this: https://humanbenchmark.com/tests/reactiontime

I just wanted to know how I should approach this problem, and what would be the best way to compare it to the human nervous system.

I have thought about maybe using an image classification neural network, and have it looks for different colors and such, but not too sure about its technical aspects as of yet. Any help is appreciated.

",44731,,2444,,2/19/2021 11:41,2/19/2021 11:41,"Is it possible to make a neural network to solve this ""reaction time test""?",,1,0,,,,CC BY-SA 4.0 26426,1,26434,,2/17/2021 17:29,,5,757,"

I understand why tf.abs is non-differentiable in principle (the kink at 0), but the same applies to tf.nn.relu; yet, in the case of that function, the gradient is simply set to 0 at 0. Why is the same logic not applied to tf.abs? Whenever I tried to use it in my custom loss implementation, TF was throwing errors about missing gradients.

",44734,,,,,2/18/2021 11:35,Why is tf.abs non-differentiable in Tensorflow?,,2,0,,,,CC BY-SA 4.0 26429,1,26430,,2/17/2021 18:07,,0,48,"

I am studying a preprint for my own learning (https://www.medrxiv.org/content/medrxiv/early/2020/04/27/2020.04.23.20067967.full.pdf) and I am befuddled by the following detail of the neural network architecture:

This is in accord with the paper's description of the architecture (p. 5):

Age and sex were input into a 64-unit hidden layer and was concatenated with the other branches.

How can the two scalars of age and sex be implemented as a 64-unit dense layer?

",44735,,2444,,2/17/2021 18:48,2/17/2021 18:48,Converting age and sex variables to a 64-unit dense layer,,1,0,,,,CC BY-SA 4.0 26430,2,,26429,2/17/2021 18:22,,2,,"

Convert them into numbers (using one-hot vectors or direct numerical representations) and then concatenate them. Then, you can pass them through the Dense layer.
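A minimal sketch of what that could look like with the Keras functional API (the branch sizes, the one-hot encoding of sex, and the 128-dimensional image branch are my own illustrative assumptions, not taken from the paper):

import tensorflow as tf
from tensorflow.keras import layers

age_in = layers.Input(shape=(1,), name="age")            # age as a scalar
sex_in = layers.Input(shape=(2,), name="sex")            # sex one-hot encoded
demo = layers.Concatenate()([age_in, sex_in])
demo = layers.Dense(64, activation="relu")(demo)         # the 64-unit branch

image_features = layers.Input(shape=(128,), name="image_branch")  # assumed other branch
merged = layers.Concatenate()([demo, image_features])
out = layers.Dense(1, activation="sigmoid")(merged)

model = tf.keras.Model([age_in, sex_in, image_features], out)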

",37203,,,,,2/17/2021 18:22,,,,3,,,,CC BY-SA 4.0 26431,2,,26425,2/17/2021 18:56,,0,,"

Basically, what you want to do is anomaly detection at the fastest speed possible. You need to sample the image at two timesteps, and if there is a difference, you click. The smaller the gap between your timesteps, the better.

But you would not need a neural network for it, since it is a single colour that changes. A colour is represented by a 3-tuple (RGB), so you just need to compare the two samples, and if they are different, then you click. A single if condition will suffice, as sketched below.
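Here is a minimal sketch of that idea; get_pixel and click are hypothetical placeholders for whatever screen-capture and input library you end up using:

def watch_and_react(get_pixel, click, threshold=0):
    # get_pixel() is assumed to return an (R, G, B) tuple for the watched pixel
    previous = get_pixel()
    while True:
        current = get_pixel()
        # any per-channel change above the threshold counts as "the colour changed"
        if any(abs(c - p) > threshold for c, p in zip(current, previous)):
            click()
            break
        previous = current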

",37203,,,,,2/17/2021 18:56,,,,8,,,,CC BY-SA 4.0 26432,2,,26332,2/17/2021 18:59,,0,,"

For this application, you can frame it as text classification. Look at spaCy. You just need to create embeddings for your text and put a softmax on top. You can get those embeddings from BERT or anything else out there. You can in fact just use GloVe vectors (or similar embeddings), concatenate them, and then train a classifier.

",37203,,,,,2/17/2021 18:59,,,,0,,,,CC BY-SA 4.0 26433,2,,12576,2/17/2021 19:02,,1,,"

It is computed just like in training. You take the MSE (or something along these lines) between the input and its reconstruction, and you set a threshold on it. If a new sample's reconstruction error is higher than your threshold, then it is anomalous; otherwise, it isn't. A sketch is given below.
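A minimal sketch, assuming a trained autoencoder exposing a standard predict method (e.g. a Keras model) and a threshold chosen, for example, as a high percentile of the reconstruction errors on the training set:

import numpy as np

def is_anomalous(autoencoder, x_new, threshold):
    # per-sample mean squared error between the input and its reconstruction
    reconstruction = autoencoder.predict(x_new)
    errors = np.mean((x_new - reconstruction) ** 2,
                     axis=tuple(range(1, x_new.ndim)))
    return errors > threshold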

",37203,,,,,2/17/2021 19:02,,,,0,,,,CC BY-SA 4.0 26434,2,,26426,2/17/2021 19:31,,4,,"

By convention, the $\mathrm{ReLU}$ activation is treated as if it is differentiable at zero (e.g. in [1]). Therefore it makes sense for TensorFlow to adopt this convention for tf.nn.relu. As you've found, of course, it's not true in general that we treat the gradient of the absolute value function as zero in the same situation; it makes sense for it to be an explicit choice to use this trick, because it might not be what the code author intends in general.

In a way this is compatible with the Python philosophy that explicit is better than implicit. If you mean to use $\mathrm{ReLU}$, it's probably best to use tf.nn.relu if it is suitable for your use case.

[1] Vinod Nair and Geoffrey Hinton. Rectified Linear Units Improve Restricted Boltzmann Machines. ICML'10 (2010). URL.

",44413,,,,,2/17/2021 19:31,,,,4,,,,CC BY-SA 4.0 26436,1,,,2/18/2021 2:28,,4,199,"

Recent advances in machine learning were mostly enabled by hardware, and hardware is said to continue driving the development of AI, but I was still shocked by this thread, which says that the projected cost of training the largest model would be 1B dollars in 2025. I also learned that universities are suffering from an academic AI brain drain, partly due to scarce hardware resources.

Some people have proposed so-called Green AI, which encourages sustainable AI development but provides few constructive methods to reverse the trend.

I wonder whether increasingly red AI is in fact inevitable. It seems to me that all companies would have to build expensive compute infrastructure to stay competitive, but I think such an investment would be very risky, since most companies cannot get a sufficient return on it.

But, on the other hand, we human beings have evolved for tens of millions of years (or billions of years, counting life itself) on earth, with the hundreds of billions of brains that have ever lived acting as a whole "human brain". This biological wetware seems much, much redder than today's hardware and has consumed much, much more energy than all the supercomputers. To make machines as intelligent as we humans are, shouldn't we pay a comparably high price? It reminds me of the NFL theorem, although the analogy is imprecise in this scenario.

So, will there be some promising techniques on the algorithm side that can make AI greener, affordable and sustainable in the future? If not, could anyone please explain why AI should be unavoidably red and inevitably redder?

",5351,,5351,,7/25/2021 13:29,6/11/2022 11:12,Will there be some promising techniques that can make AI greener and affordable in the future?,,1,0,,,,CC BY-SA 4.0 26437,2,,26426,2/18/2021 10:54,,2,,"

Creating a custom gradient for tf.abs may solve the problem:

import tensorflow as tf

@tf.custom_gradient
def abs_with_grad(x):
    y = tf.abs(x)

    def grad(upstream):  # gradient flowing back from later operations
        g = 1  # use 1 so the chain rule effectively skips abs
        return upstream * g

    return y, grad
",2844,,2444,,2/18/2021 11:35,2/18/2021 11:35,,,,0,,,,CC BY-SA 4.0 26438,2,,11342,2/18/2021 12:17,,3,,"

I am also wondering about this. It must depend both on the convolutional sub-network output size (N) and the number of classes (M). Maybe there are some rules of thumb depending on (N, M).

  1. Why 2 dense layers and not, say, 3 or 4 ?
  2. Is it better to have all dense layers (except last) the same size? Or decreasing? Or increasing? Or pyramidal?
  3. Is it better to have small dense layers or larger ones with dropout between layers?

And bonus question:

  1. Should we use batch normalization between dense layers?
",44757,,,,,2/18/2021 12:17,,,,0,,,,CC BY-SA 4.0 26440,1,26464,,2/18/2021 13:05,,0,219,"

I am trying to do an NLP project and was wondering if there is anywhere online where the Word2Vec embeddings are stored (the actual n-dimensional vectors). I want to look up a word and see what its encoding is. I have tried searching but couldn't find anything. Thank you

",44695,,46540,,5/2/2021 11:56,5/2/2021 11:56,Are the Word2Vec encoded embeddings available online?,,1,2,,2/19/2021 13:43,,CC BY-SA 4.0 26441,2,,26436,2/18/2021 13:25,,3,,"

As far as I know, green AI and red AI are very recent terms and/or research areas, but their importance will or should definitely be more highlighted in the coming years (for obvious reasons).

I don't know how many people are actively and directly researching this topic, but, in the past, I've already read a research paper on the topic, so a few people are already trying to at least raise awareness about these issues. Moreover, they also suggest that researchers report the number of floating-point operations (FLOPs) performed during the experiments, in addition to the usual performance metrics (such as the accuracy). I don't remember all the details of this paper, but I really recommend that you read it. This is another possibly useful paper that people interested in this topic might want to read.

Although I'm currently not doing research on this topic, from an algorithmic perspective, improvements in the following areas will probably contribute to greener AI

  • zero-, one- or few-shot learning
  • transfer learning
  • sample/data efficiency
  • computational complexity

If you can learn from fewer samples and with less computation, you will probably produce less emissions.

So, I think that making AI greener is definitely possible, as it's possible to use an electric car rather than a petrol/diesel car, although more research on the topics mentioned above (and other topics) needs to be done to make it practically useful.

",2444,,2444,,6/11/2022 11:12,6/11/2022 11:12,,,,2,,,,CC BY-SA 4.0 26442,1,,,2/18/2021 13:34,,1,45,"

I have a 1 column dataset of $50 000$ points where 95% of the values equal $-50$. The data looks like the following: $$\begin{matrix} \text{time} & \text{value}\\ 1&-50 \\ 2&-50 \\ 3&-50 \\ 4& -50 \\ 5&3 \\ 6&-50\\ 7&-50\\ 8&5 \end{matrix}$$ In addition, I know the exact time instants at which I will get a value $\neq -50$ (in the example above, these are instants $5$ and $8$). The data is somewhat periodic, so the values which are different from $-50$ are chosen from a finite set $\mathcal{S}$.

To predict the values, I use a 3-layer LSTM network with an L2 regularizer, where, along with the values, I input another column that looks like this: $$\begin{matrix} \text{time} & \text{value} & \text{expect a change}\\ 1&-50 & 0 \\ 2&-50 & 0\\ 3&-50 & 0\\ 4& -50 & 1\\ 5&3 & 0\\ 6&-50& 0\\ 7&-50& 1\\ 8&5 & 0 \end{matrix}$$ which identifies the change one time instant in advance, so the LSTM knows to expect a change. However, the prediction performance is quite poor: it always predicts that the value changes, but the prediction is far from the real value and usually lies outside the set $\mathcal{S}$. Any idea how this could be improved?

",44759,,44759,,2/18/2021 17:46,2/18/2021 19:30,How to improve prediction performance of periodic data?,,0,0,,,,CC BY-SA 4.0 26447,1,26458,,2/18/2021 17:20,,3,122,"

I'm using OpenAI's cartpole environment. First of all, is this environment not Markov?

Knowing that, my main question concerns Q-learning and off-policy methods:

For me, there is something weird about updating a Q value based on the max Q for a state and a reward value that was not from the action taken. How does this make learning better and lead you to learn the optimal policy?

",33532,,2444,,2/21/2021 10:50,2/21/2021 10:50,"Why can we take the action $a$ from the next state $s'$ in the max part of the Q-learning update rule, if that action doesn't lead to any reward?",,1,0,,,,CC BY-SA 4.0 26448,1,26449,,2/18/2021 19:09,,5,528,"

A while ago I read that you can make subtle changes to an image that will ensure a good CNN will horribly misclassify the image. I believe the changes must exploit details of the CNN that will be used for classification. So we can trick a good CNN into classifying an image as a picture of a bicycle when any human would say it's an image of a dog. What do we call that technique, and is there an effort to make image classifiers robust against this trick?

",44765,,1641,,2/18/2021 20:42,2/18/2021 20:42,Can CNNs be made robust to tricks where small changes cause misclassification?,,2,2,,,,CC BY-SA 4.0 26449,2,,26448,2/18/2021 19:28,,5,,"

These are known as adversarial attacks, and the specific examples that are misclassified are known as adversarial examples.

There is a reasonably large body of work on finding adversarial examples, and on making CNNs more robust (i.e. less prone to these attacks). An example is the DeepFool algorithm, which can be used to find perturbations of data which would cause the label to change.
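As a concrete illustration of how such perturbations can be found, here is a minimal sketch of the fast gradient sign method (FGSM) from [1] (this is just the simplest attack, not DeepFool), assuming a trained Keras classifier model whose output is class probabilities, a batch of images in [0, 1], and integer labels:

import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm(model, images, labels, epsilon=0.01):
    images = tf.convert_to_tensor(images)
    with tf.GradientTape() as tape:
        tape.watch(images)
        # assumes model outputs probabilities (softmax); use from_logits=True otherwise
        loss = loss_fn(labels, model(images))
    gradient = tape.gradient(loss, images)
    # take a small step in the direction that increases the loss the most
    adversarial = images + epsilon * tf.sign(gradient)
    return tf.clip_by_value(adversarial, 0.0, 1.0)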

There are several techniques in the literature which are used to fight against adversarial attacks. Here is a selection:

  1. Augmenting the training data with various random perturbations. This is intended to make the model more robust to the typical adversarial attack where random noise is added to an image. An example of this approach is discussed in [1].

  2. Constructing some sort of model to "denoise" input before feeding into the CNN. An example of this is Defense-GAN [2], which uses a generative adversarial model which models the "true" image distribution and finds an approximation of the input closer to the real distribution.

References

[1] Ian J. Goodfellow, Jonathon Shlens & Christian Szegedy. Explaining and Harnessing Adversarial Examples. ICLR (2015). URL.

[2] Pouya Samangouei, Maya Kabkab, Rama Chellappa. Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models. ICLR (2018). URL.

",44413,,,,,2/18/2021 19:28,,,,0,,,,CC BY-SA 4.0 26451,2,,26448,2/18/2021 19:34,,6,,"

Those examples are called Adversarial Examples. I think it is important to understand why a CNN can be "tricked" like that:

We often expect human-like behavior when a model has human-like performance. That applies to CNNs as well: we expect them to decide like we do, i.e. by looking at the shape of objects.

However, as various experiments on common CNNs show, that is not the case. A CNN looks for other features. It tries to minimize a loss function, and the fastest way to do this is often not the intended way. E.g. you think that a cow has 4 legs and a certain head shape. A CNN might think that it is enough if there is something with 4 legs and a green background (because 9 out of 10 images in the training dataset are like that). So in most cases the CNN does fine when it identifies a cow by these features.

Geirhos et al. did an experiment in "ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness". They showed a CNN images that can be classified as either of two classes, e.g. an image whose shape belongs to one class but whose texture belongs to another (see the figure in the Geirhos paper). In most cases, a common CNN says this is not a cat, it is an elephant!

Another similar experiment is by Wang et al.: "High frequency component helps explain the generalization of convolutional neural networks". They revealed that common CNNs can classify images "correctly" even when the low frequencies (i.e. the frequencies that contain the shape information) are removed from the image.

In another experiment, Brendel & Bethge, in "Approximating CNNs with bag-of-local-features models works surprisingly well on ImageNet", redesigned CNNs so that they cannot use global shape information by design. Their CNN also performed human-like. So why should other CNNs not use the features that were used by this network?

To come to the second part of your question: To make a CNN more robust you need to provide the CNN with images that contain only the features that you want it to learn. That is difficult with images. Removing texture information by applying style transfer is one option. Think about what you want the network to learn and what not. Then try to remove/hide/suppress what you do not want it to learn. Details are given in other answers.

There is also an approach called feature visualization that tries to generate visualizations of the features that are recognized by a CNN. However, the naive technique is almost useless, because these visualizations are often dominated by something like noise (except for a small part of the CNN). This might indicate that CNNs do not focus on global shape information (as the visualizations do not show sharp shapes, but rather high-frequency textures).

Maybe this question helps a bit: How is it possible that deep neural networks are so easily fooled?

",42064,,42064,,2/18/2021 20:05,2/18/2021 20:05,,,,0,,,,CC BY-SA 4.0 26452,1,26457,,2/18/2021 21:02,,2,441,"

I always see that the width and height of the kernel are the same. But is it a good idea to use different numbers?

Recently I tried to use GoogLeNet (which expects images to be 224x224) on my images (500x150) and I got an error:

Negative dimension size caused by subtracting 7 from 5 for 'average_pooling2d_5/AvgPool'...

I know that this error is because the height of my image is too small. If I use the height of about 200, then everything is ok. So, maybe, in this situation, I could just use a smaller height and bigger width in the kernel. For example (5, 3).

Is it a good idea in this case? Or in general? How can it affect the accuracy of the network and the ability to extract different features?

",,user40943,2444,,2/19/2021 12:52,2/19/2021 12:52,Is it a good idea to use different width and height of the kernel in a CNN?,,1,0,,,,CC BY-SA 4.0 26453,1,26456,,2/18/2021 21:17,,3,69,"

I'm trying to train an object detection algorithm (i.e. YOLOv4 Scaled, Faster R-CNN) on data taken from large orthophotos. Let's say I have one class, and I label the entire orthophoto with bounding boxes. After labeling, is there a way to slice up the entire image into individual photos of specified pixel sizes (i.e. 416x416 pixels) while keeping the bounding boxes? I can easily slice the photo into the specified dimensions, but the problem I am having is keeping the bounding boxes in these new images.

That way, I would not be exhausting my GPU's memory requirements.

",32750,,,,,2/18/2021 21:38,Is there a methodology for splitting up annotated orthophotos into smaller photos that retain the original bounding boxes?,,1,0,,,,CC BY-SA 4.0 26456,2,,26453,2/18/2021 21:38,,1,,"

You can reduce your photo size and scale the corresponding boxes to the new dimensions (416x416).

Or, if you want to go with your technique, you can slice the image, check whether each bounding box lies in the slice, and then re-express its coordinates relative to the slice you took.

Take a look at the albumentations library for this; a minimal sketch is given below.
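For example, something along these lines (a hedged sketch, not a full pipeline: crop_tile, x0, y0 and the min_visibility value are my own assumptions; albumentations clips or drops boxes for you when bbox_params is set):

import albumentations as A

def crop_tile(image, bboxes, labels, x0, y0, size=416):
    # crop a size x size window with its top-left corner at (x0, y0),
    # keeping (and clipping) the pascal_voc-format bounding boxes
    transform = A.Compose(
        [A.Crop(x_min=x0, y_min=y0, x_max=x0 + size, y_max=y0 + size)],
        bbox_params=A.BboxParams(format="pascal_voc",
                                 label_fields=["labels"],
                                 min_visibility=0.3),
    )
    out = transform(image=image, bboxes=bboxes, labels=labels)
    return out["image"], out["bboxes"], out["labels"]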

",37203,,,,,2/18/2021 21:38,,,,0,,,,CC BY-SA 4.0 26457,2,,26452,2/18/2021 21:46,,1,,"

It depends on your application. In the case of text recognition, non-uniform kernels are used, since there is less information about the text on the horizontal axis and more on the vertical axis.

If that applies to your case, then it will be a good idea. If it does not, you are better off using a smaller uniform kernel (2x2, maybe). You can also zero-pad your image to make it uniform before putting it through convolutions. Also, check whether you are doing 'valid' or 'same' padding in your convolutions, since 'valid' convolutions chip away at your image dimensions.
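As a small illustration (a sketch with made-up dimensions, assuming a channels-last input of height 150 and width 500), a non-square kernel in Keras is just a (height, width) tuple, and 'same' padding preserves the spatial size:

import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 150, 500, 3))                        # (batch, height, width, channels)
y = layers.Conv2D(32, kernel_size=(3, 5), padding="same")(x)  # kernel 3 tall, 5 wide
print(y.shape)                                                # (1, 150, 500, 32)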

",37203,,,,,2/18/2021 21:46,,,,3,,,,CC BY-SA 4.0 26458,2,,26447,2/18/2021 22:32,,5,,"

I'm using OpenAI's cartpole environment. First of all, is this environment not Markov?

The OpenAI Gym CartPole environment is Markov. Whether or not you know the transition probabilities does not affect whether the state has the Markov property. All that matters is that knowing the current state is enough to be determine the next state and reward in principle. You do not need to explicitly know the state transition model, or the reward function.

An example of a non-Markov state for CartPole would be if one of the features was missing - e.g. the current position of the cart, or the angular velocity of the pole. It is still possible to have agents attempt to solve CartPole with such missing data, but the state would no longer be Markov, and it would be a harder challenge.

For me, there is something weird in updating a Q value based on the max Q for a state and a reward value that was not from the action taken? How does this make learning better and makes you learn the optimal policy?

To recap, you are referring to this update equation, or perhaps some variation of it:

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha(R_{t+1} + \gamma\text{max}_{a'}Q(S_{t+1},a') - Q(S_t, A_t))$$

Another correction here: the reward value is related to the action taken and the experience that happened. This is important - the values of $R_{t+1}$ and $S_{t+1}$ are from the experience that was observed after taking action $A_t$; they do not come from anywhere else. In fact, this is the only data from experience that is inserted into the update; everything else is based on action value estimates in a somewhat self-referential way (a.k.a. "bootstrapping").
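As a minimal sketch (not from the original answer), the update above for a tabular agent could look like this, assuming a NumPy array Q of shape (n_states, n_actions) and one observed transition (s, a, r, s_next) collected by whatever behaviour policy is being followed:

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99, done=False):
    # bootstrap the future return from the greedy (target) policy at s_next
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])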

How does this converge to an optimal policy? It follows a process known as generalised policy iteration (GPI), which works as follows:

  • The target policy is to always take the best action according to your current estimates

  • In off policy methods only (i.e. does not apply to all GPI), there is a conversion needed from experience in a behaviour policy to value estimates for the target policy. In single-step Q learning, this is where the $\text{max}_{a'}Q(S_{t+1},a')$ comes from. It is calculating the future value from following the target policy*. No interim values are needed, because you are estimating a new value for $Q(S_t,A_t)$, there are no other time steps where you need to care about difference between behaviour and target policy (there is just "now" and "all the future, which is bootstrapped")

  • Once you update the action value estimates in Q table, that automatically updates the target policy because it always takes the best estimated action.

  • Each update improves your value estimates, due to injecting some real experienced data in the form of $R_{t+1}$ and $S_{t+1}$.

  • Each time the estimates improve, the target policy can also improve and become closer to the optimal policy.

  • When the target policy changes, this makes the estimates less accurate, because they were made for the previous policy. However, the next time the same states and actions are visited they will be updated based on the new improved policy.

With a stochastic environment, it is possible for estimates to get worse due to good or bad luck for the agent when it tries different actions. However, over time, the law of large numbers will win, and the value estimates will converge to close to their true values for the current policy, which will make policy improvement inevitable if it is possible (caveat: it is only close to inevitable for the tabular Q-learning method, which can be proven to converge eventually to an optimal policy).

There is proof that always taking the maximising action is a strict improvement to a policy when estimates are accurate. It is called the Policy Improvement Theorem.


* To be clear, there is nothing particularly clever about taking the max here. It is literally what the target policy has been chosen to do, so the term $\text{max}_{a'}Q(S_{t+1},a')$ is just the same as $V_{\pi}(S_{t+1})$ - the value of being in state $S_{t+1}$ for the current target policy $\pi$.

",1847,,2444,,2/21/2021 10:50,2/21/2021 10:50,,,,3,,,,CC BY-SA 4.0 26459,1,26460,,2/19/2021 0:51,,-1,145,"

I am creating a NN in TensorFlow Keras. The inputs are all floats and the output is a class.

The output is currently encoded as a float, but only takes 4 values (0, 1, 2, 3).

My model is similar to this:

model = tf.keras.Sequential([
    normalize,
    layers.Dense(128, activation='relu'),
    layers.Dense(254, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(512, activation='relu'),
    layers.Dense(254, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(4)
])

model.compile(loss = tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'],
                           optimizer = tf.optimizers.Adam())

history=model.fit(data_set_features, data_set_labels,validation_split=0.33, epochs=100)

Is the model's last layer correct?

What type of activation and loss function should I use?

",44775,,,,,2/19/2021 1:58,setting up last layer in tensoflow for class type of label,,1,1,,2/19/2021 10:32,,CC BY-SA 4.0 26460,2,,26459,2/19/2021 1:44,,1,,"

Since you're using categorical cross-entropy loss, the last layer (output layer) should come with a softmax activation instead of the identity (which is what you get when no activation is specified, as in layers.Dense(4)).

model = tf.keras.Sequential([
    normalize,
    ...,
    layers.Dense(4, activation=tf.keras.activations.softmax)
])

And SparseCategoricalCrossentropy is different from CategoricalCrossentropy: for categorical cross-entropy, the shapes of the output and the label should be the same, whereas for the sparse variant the labels are class indices. For example:

When using categorical cross-entropy (example labels in one-hot):

Output: [[.1,.0,.9,.0],...] <-- Prediction of 90% for the 3rd class, sum=100%
Label:  [[ 0, 0, 1, 0],...] <-- Third class holds 100% classification possibility

When using sparse categorical cross-entropy (labels are class indices):

Output: [[.1,.0,.9,.0],...] <-- Prediction of 90% for the 3rd class, sum=100%
Label:  [[2]          ,...] <-- Third class index is 2
",2844,,2844,,2/19/2021 1:58,2/19/2021 1:58,,,,0,,,,CC BY-SA 4.0 26461,1,,,2/19/2021 13:00,,1,118,"

I'm trying to understand the logic behind the magic of using the Gumbel distribution for action sampling inside the PPO2 algorithm.

This code snippet implements the action sampling, taken from here:

def sample(self):
    u = tf.random_uniform(tf.shape(self.logits), dtype=self.logits.dtype)
    return tf.argmax(self.logits - tf.log(-tf.log(u)), axis=-1) 

I've understood that it is a mathematical trick to be able to backprop through action sampling in the case of categorical variables.

  1. But why can't I just put a softmax layer on top of the logits and sample according to the given probabilities? Why do we need u?

  2. There is still the argmax, which is not differentiable. How can backprop work?

  3. Does u allow exploration? Imagine that, at the beginning of the learning process, Pi holds small, similar values (nothing has been learned so far). In this case, the action sampling does not always choose the maximum value in Pi because of logits - tf.log(-tf.log(u)). In the further course of training, larger values arise in Pi, so the maximum value is taken more often during action sampling? But doesn't this mean that the whole process of action sampling is extremely dependent on the value range of the current policy?

",44075,,44075,,2/19/2021 14:28,2/19/2021 14:28,PPO2: Intuition behind Gumbel Softmax and Exploration?,,0,0,,,,CC BY-SA 4.0 26464,2,,26440,2/19/2021 13:35,,0,,"

The gensim library for Python provides not only a convenient way of manipulating Word2Vec embeddings, but also downloadable data sets and pretrained models. That last part is what you're looking for. Wikipedia2Vec also has multilingual embeddings. Then this guy (whom I've never heard of) has an assortment of languages I haven't seen in other places. There are another 44 models hosted by the European NLPL. You can also search Kaggle, but a lot of this looks like training data, models I've already linked, or just cruft.

",19703,,,,,2/19/2021 13:35,,,,0,,,,CC BY-SA 4.0 26465,1,,,2/19/2021 13:36,,4,191,"

I am training a Semi-Supervised GAN, using multivariate time-series with window of shape (180*80) with the generator and discriminator architecture below. My data is scaled using Robust Scaler, so I kept linear activation for the generator output.

During training I get noise in the generated signals, whereas the original data is smooth, and I can't understand why. What can be the reason for this noise?

def make_generator_model(noise):
    w_init = tf.random_normal_initializer(stddev=0.02)
    gamma_init = tf.random_normal_initializer(1., 0.02)
    
    def residual_layer(layer_input):
        
        res_block = Conv1D(128, 3, strides=1, padding='same')(layer_input)
        res_block = BatchNormalization(gamma_initializer=gamma_init)(res_block)
        res_block = LeakyReLU()(res_block)
        res_block = Conv1D(128, 3, strides=1, padding='same')(res_block)
        res_block = BatchNormalization(gamma_initializer=gamma_init)(res_block)
        res_block = LeakyReLU()(res_block)
        res_add = Add()([res_block, layer_input])
        
        return res_add
    
    in_noise = Input(shape=(100,))
    

    gen = Dense(180*65, kernel_initializer=w_init, use_bias=None)(in_noise)
    gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
    gen = LeakyReLU()(gen)

    gen = Reshape((180, 65))(gen)
    #assert model.output_shape == (None, 45, 256) # Note: None is the batch size

    gen = Conv1D(64, 7, strides=1, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
    #assert model.output_shape == (None, 45, 128)
    gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
    gen = LeakyReLU()(gen)
        
    gen = Conv1D(64, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
    #assert model.output_shape == (None, 45, 128)
    gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
    gen = LeakyReLU()(gen)
    
    gen = Conv1D(128, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
    #assert model.output_shape == (None, 45, 128)
    gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
    gen = LeakyReLU()(gen)
    
    for i in range(6):
        gen = residual_layer(gen)

    gen = Conv1DTranspose(128, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
    #assert model.output_shape == (None, 90, 64)
    gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
    gen = LeakyReLU()(gen)
    
    gen = Conv1DTranspose(128, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
    #assert model.output_shape == (None, 90, 64)
    gen = BatchNormalization(gamma_initializer=gamma_init)(gen)
    gen = LeakyReLU()(gen)
    

    out_layer = Conv1D(65, 7, strides=1, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
    #assert model.output_shape == (None, 180, 65)
    
    model = Model(in_noise, out_layer)

    return model

def make_discriminator_model(n_classes=8):
    w_init = tf.random_normal_initializer(stddev=0.02)
    gamma_init = tf.random_normal_initializer(1., 0.02)   
    
    in_window = Input(shape=(180, 65))

    disc = Conv1D(64, 4, strides=1, padding='same', kernel_initializer=w_init)(in_window)
    disc = LeakyReLU()(disc)
    disc = Dropout(0.3)(disc)

    disc = Conv1D(64*2, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
    disc = LeakyReLU()(disc)
    disc = Dropout(0.3)(disc)
    
    disc = Conv1D(64*4, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
    disc = LeakyReLU()(disc)
    disc = Dropout(0.3)(disc)
    
    disc = Conv1D(64*8, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
    disc = LeakyReLU()(disc)
    disc = Dropout(0.3)(disc)
    
    disc = Conv1D(64*16, 4, strides=1, padding='same', kernel_initializer=w_init)(disc)
    disc = LeakyReLU()(disc)
    disc = Dropout(0.3)(disc)
    
    disc = Flatten()(disc)
    
    disc = Dense(128)(disc)
    disc = Dense(128)(disc)
    
    out_layer = Dense(1)(disc)
    
    c_out_layer = Dense(8, activation='softmax')(disc)
    
    model = Model(in_window, out_layer)
    c_model = Model(in_window, c_out_layer)

    return model, c_model
",44792,,,,,2/19/2021 15:46,GAN Generator Output w/ Periodic Noise,,1,1,,,,CC BY-SA 4.0 26466,1,,,2/19/2021 13:36,,2,120,"

I have a question about data leakage when pre-processing data for a neural network and whether data leakage actually applies in my instance.

I have variance stabilising transformed genomic data. Because it is genomic data we know apriori that lower numbers translate to lower levels of a gene being made and vice versa. Before input into the neural network, the data are squashed to between 0 and 1 using sklearn:

preprocessing.minmax_scale(data, feature_range=(0,1), axis=1)

The min-max scaling needs to be done across samples (axis=1), as opposed to features, because of this a priori assumption about gene levels - low genes need to remain low and vice-versa...

Because of this, my question is: do training samples still need to be scaled separately from test samples as it doesn't seem there is a risk of data leakage here? Is this the correct assumption to make?

",34530,,2444,,2/20/2021 10:35,3/22/2021 11:04,Is data leakage relevant when scaling across samples?,,1,0,,,,CC BY-SA 4.0 26467,1,,,2/19/2021 13:52,,0,39,"

I'm working on writing an article about the potential of modern AI-based algorithms to produce invisible self-learning malware that can distribute itself throughout the internet and create flexible botnets.

So far, I cannot find any additional information about that, except this one.

Is there any research work on known malware detection systems based on AI?

",4884,,4884,,2/19/2021 15:18,2/19/2021 15:18,Is there any research work on known malware detection systems based on AI?,,0,4,,,,CC BY-SA 4.0 26468,1,,,2/19/2021 14:52,,2,109,"

I have a dataset with an input size of 155x155, with the output being 155 x 1 with a 3-4 layer neural net being used for regression. With such a small sample size, should I use full batch gradient descent (so all 155 samples) or use mini batch/stochastic gradient descent. I have read that using smaller mini batch sizes allows better generalisation, but as the batch size is very very small computationally it shouldn't be a burden to use BGD.

",44796,,,,,2/19/2021 17:02,Should I use batch gradient descent when I have a small sample size?,,0,4,,,,CC BY-SA 4.0 26469,2,,26465,2/19/2021 15:38,,4,,"

Sorry, I cannot directly reply to your comment as I posted without an account, but you were right! I replaced the transposed layers with UpSampling1D + Conv1D and that solved the issue.

gen = Conv1DTranspose(128, 4, strides=2, padding='same', kernel_initializer=w_init, use_bias=None)(gen)

should become (notice that strides=2 becomes strides=1):

gen = UpSampling1D()(gen)  # Keras upsampling layer; doubles the temporal dimension by default
gen = Conv1D(128, 4, strides=1, padding='same', kernel_initializer=w_init, use_bias=None)(gen)
",44792,,44792,,2/19/2021 15:46,2/19/2021 15:46,,,,2,,,,CC BY-SA 4.0 26472,1,,,2/19/2021 17:27,,1,94,"

I am working on solving a problem on nodes in a graph communicating with each other. They try to estimate a central state using Kalman consensus filter, with the connections described by the graph's adjacency matrix. Considering time to be discrete, the adjacency matrix changes at each time instant with a transition probability matrix (unknown) to some other matrix (in some finite set of matrices). I want to simulate this in python to solve an MDP with some cost/reward function. (An external agent is concerned with this cost/reward and takes actions accordingly)

Since the state space can be large, my advisor suggested using deep-RL techniques. However I have only studied (formally) basic RL (Q learning, stochastic approximation, etc. with finite states and finite actions at every instant). I tried looking at RL libraries but I can't figure out which one to pick. And even before that I am very confused by how to simulate KCF between nodes in python (from scratch?). How should I proceed?

",44801,,,,,2/19/2021 17:27,How should I simulate this Markov Decision Process?,,0,0,,,,CC BY-SA 4.0 26473,1,,,2/19/2021 18:15,,1,95,"

I'm trying to understand the concept behind the implementation of the OpenAI PPO2 algorithm. The loss function that is minimized is as follows: loss = pg_loss - entropy * ent_coef + vf_loss * vf_coef.

First question: The computation of pg_loss requires the use of operations like tf.reduce_mean and tf.maximum. Are these two functions differentiable? Apparently they are, otherwise it would not work. Can someone explain why, so I can understand the implementation?

Second question: During training, an action is sampled by using the Gumbel Distribution: Noise from such a distribution is added to the logits and then tf.argmax is applied. This index is then used to calculate the negative log-likelihood. However, the tf.argmax should also not be differentiable, so how can this work?

",45600,,2444,,2/23/2021 10:27,2/23/2021 10:27,Why is Openai's PPO2 implementation differentiable?,,0,1,,,,CC BY-SA 4.0 26475,1,,,2/20/2021 4:19,,0,345,"

If I'm dealing with a sequence of images as the input (frame by frame), and I want to output a matrix at each timestamp, can the hidden state be a matrix?

",44814,,2444,,2/20/2021 10:25,2/20/2021 11:26,Can the hidden state of an RNN be a matrix?,,2,2,,,,CC BY-SA 4.0 26476,1,,,2/20/2021 4:45,,3,85,"

How does:
$$\text{Var}(y) \approx \sigma^2 + \frac{1}{T}\sum_{t=1}^Tf^{\hat{W_t}}(x)^Tf^{\hat{W_t}}(x_t)-E(y)^TE(y)$$ approximate variance?

I'm currently reading What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision, and the authors wrote the above formula for the approximate estimation for the variance. I'm confused how the above is an approximation for $\frac{\sum(y-\bar{y})^2}{N-1}$. So, in the above equation, they're using a Bayesian Neural Network to quantify uncertainty. $\sigma$ is the predictive variance (kind of confused how they get this). $x$ is the input and $y$ is the label for the classification. $f^{\hat{W_t}}(\cdot)$ output a mean to a Gaussian distribution, with $\sigma$ being the SD for that distribution and $T$ is a predefined number of samples because the gradient is evaluated using Monte Carlo sampling.

",30885,,30885,,2/21/2021 0:12,2/21/2021 0:12,Why does this formula $\sigma^2 + \frac{1}{T}\sum_{t=1}^Tf^{\hat{W_t}}(x)^Tf^{\hat{W_t}}(x_t)-E(y)^TE(y)$ approximate the variance?,,0,11,,,,CC BY-SA 4.0 26479,1,,,2/20/2021 5:11,,1,775,"

Is there any situation in which breadth-first search is preferable over A*?

",44815,,2444,,2/20/2021 10:16,2/25/2021 13:04,Is there any situation in which breadth-first search is preferable over A*?,,2,1,,,,CC BY-SA 4.0 26480,2,,26466,2/20/2021 5:58,,1,,"

Considering that you are doing min-max scaling, the only case in which there would be no risk of data leakage is if the minimum value of your training set equals the minimum value of the test set, and the maximum value of your training set equals the maximum value of the test set.

In that circumstance the result of your scaling would be exactly the same as fitting the scaler to the training set, and applying it to the test set.

There is basically no advantage in scaling them separately, and you take the risk of data leakage if their value ranges differ. I don't know much about genomic data, but even if your assumption was correct I would recommend not to do that.

",44817,,,,,2/20/2021 5:58,,,,3,,,,CC BY-SA 4.0 26482,2,,26475,2/20/2021 7:55,,3,,"

Yes; I would say more: the hidden state can be a tensor of arbitrary dimensionality. For a vanilla RNN, the update rules of the hidden state and the output are: $$ h_t = \sigma_h(W_h x_t + U_h h_{t-1} + b_h) $$ $$ y_t = \sigma_y(W_y h_t + b_y) $$ Here $W_h$ is the input-to-hidden-state matrix, $U_h$ is the hidden-state-to-hidden-state matrix, and $W_y$ is the hidden-state-to-output matrix. One can simply upgrade these matrix products to the more general case of tensor contractions, so that they can handle multidimensional input data and a multidimensional hidden state.

The contraction of an $m$-dimensional tensor $X$ with an $n$-dimensional tensor $Y$ (we assume $m \geq n$) over the maximal number of indices will be an $(m-n)$-dimensional tensor $Z$: $$ X_{i_1 \ldots i_m} Y_{i_1 \ldots i_n} = Z_{i_{n+1} \ldots i_m} $$

Namely, imagine that $x_t = (x_t)_{i_1 \ldots i_X}$ is a $d_x$-dimensional tensor. Then take $W_h$ to be a $d_{wh}$-dimensional tensor, the hidden state $h$ to be $(d_h = d_{wh} - d_{x})$-dimensional, $U_h$ to be $d_{uh}$-dimensional, and $W_y$ to be $d_{wy}$-dimensional. The output $y_t$ of the RNN will then be a $(d_{wy} - d_{h})$-dimensional tensor.

Note that, in order for the sum in the first equation to make sense, the dimensions need to match: $$ d_{uh} - d_{h} = d_{h} $$

",38846,,,,,2/20/2021 7:55,,,,0,,,,CC BY-SA 4.0 26484,2,,26475,2/20/2021 11:26,,1,,"

The question is, why do you specifically want a matrix? I assume you mean per-frame features. In that case, you can use a ConvNet as a feature extractor, i.e. it outputs a feature vector of fixed size. This vector is the input to your LSTM. If you have $C$ frames, the output of the LSTM is $$ Output, (h, c) = LSTM (frame \ features) $$ where the frame features have size $(1, C, v)$, with $v$ being the length of the feature vector from the ConvNet. $Output$ has size $(1, C, w)$ and $h$ is $(1, 1, w)$, where $w$ is the size of the hidden layer in the LSTM and $h$ is the last hidden state. Now, you can reshape $Output$ to $(C, w)$ and feed it into some linear layer, treating $C$ as the batch size. These are your predictions per frame.

",21738,,,,,2/20/2021 11:26,,,,0,,,,CC BY-SA 4.0 26485,2,,26479,2/20/2021 11:26,,2,,"

The only general situation that comes to my mind where BFS could be preferred over A* is when your graph is unweighted and the heuristic function is $h(n) = 0, \forall n \in V$. However, in that case, A* (which is equivalent to UCS) behaves like BFS (except for the goal test: see section 3.4.2 of this book), i.e. it will first expand nodes at level $l$, then at level $l+1$, etc., this is because nodes at level $l+1$ are farther away from the initial node than nodes at level $l$, i.e. $f(n') = g(n') > f(n) = g(n)$, for all $n' \in V_{l+1}$ and $n \in V_{l}$, where $g(n)$ is the cost of the shortest path from the initial node to $n$ and $V_{l}$ is the subset of nodes of the search space that belong to level/layer $l$ of the search tree.

So, if you think that your goal is close to the initial node in terms of levels/layers and each node has a small branching factor (i.e. not many children), then BFS may be a good idea (but you could also use A*).

BFS can also be used as a sub-routine in other algorithms, such as the Ford–Fulkerson algorithm. So, in these cases, BFS may also be preferable.
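As a minimal sketch (with an adjacency-list dictionary as the assumed graph representation), BFS on an unweighted graph simply expands nodes level by level, which is essentially the order UCS/A* with $h(n) = 0$ would follow:

from collections import deque

def bfs(graph, start, goal):
    # graph: dict mapping each node to an iterable of its neighbours
    frontier = deque([start])
    parents = {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:   # reconstruct the path back to the start node
                path.append(node)
                node = parents[node]
            return path[::-1]
        for child in graph[node]:
            if child not in parents:
                parents[child] = node
                frontier.append(child)
    return None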

",2444,,2444,,2/20/2021 12:31,2/20/2021 12:31,,,,0,,,,CC BY-SA 4.0 26486,1,26488,,2/20/2021 12:04,,5,470,"

I am working on this assignment where I made the agent learn state-action values (Q-values) with Q-learning and 100% exploration rate. The environment is the classic gridworld as shown in the following picture.

Here are the values of my parameters.

  • Learning rate = 0.1
  • Discount factor = 0.95
  • Default reward = 0

Reaching the trophy is the final reward, no negative reward is given for bumping into walls or for taking a step.

After 500 episodes, the arrows have converged. As shown in the figure, some states have longer arrows than others (i.e., larger Q-values). Why is this so? I don't understand how the agent learns and finds the optimal actions and states when the exploration rate is 100% (each action N-S-E-W has a 25% chance of being selected).

",37540,,37540,,2/20/2021 12:39,2/21/2021 16:53,Why does Q-learning converge under 100% exploration rate?,,1,0,,,,CC BY-SA 4.0 26487,1,,,2/20/2021 13:15,,1,25,"

Are there any known theoretical bounds, or at least heuristic approaches, regarding the relation or correlation between the performances of any two different classification algorithms?

For example, would there exist binary classification datasets for which, say, $k$-nearest-neighbour classifiers would perform with say >90% accuracy, whereas say decision tree classifiers would do no better than 50-60%? (Accuracy here is measured by say $k$-fold cross-validation.)

It seems to me, at first glance, that a dataset which is able to achieve a very high accuracy on some classification algorithm would necessarily have some structure that would make it highly improbable that some other general classification algorithm would be able to perform very poorly. Yet it's also not impossible that there might be some 'exotic' type of dataset that does exhibit such a phenomenon.

",44825,,,,,2/20/2021 13:15,Theoretical limits on correlation between classification algorithm performances,,0,1,,,,CC BY-SA 4.0 26488,2,,26486,2/20/2021 14:37,,5,,"

Q-learning is guaranteed to converge (in the tabular case) under some mild conditions, one of which is that in the limit we visit each state-action tuple infinitely many times. If your random policy (i.e. 100% exploration) guarantees this and the other conditions are met (which they probably are), then Q-learning will converge.

The reason that different state-action pairs have longer arrows, i.e. higher Q-values, is simply because the value of being in that state-action pair is higher. An example would be the arrow pointing down right above the trophy -- obviously this has the highest Q-value as the return is 1. For all other states it will be $\gamma^k$ for some $k$ -- to see this remember that a Q-value is defined as

$$Q(s, a) = \mathbb{E}_\pi \left[\sum_{j=0}^\infty \gamma^j R_{t+j+1} |S_t = s, A_t = a \right]\;;$$ so for any state-action pair that is not the block above the trophy with the down arrow $\sum_{j=0}^\infty \gamma^j R_{t+j+1}$ will be a sum of $0$'s plus $\gamma^T$ where $T$ is the time that you finally reach the trophy (assuming you give a reward of 1 for reaching the trophy).
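To make the convergence under a purely random behaviour policy concrete, here is a minimal tabular sketch on a toy 1-D corridor (my own example, not the gridworld from the question):

import random

n_states, gamma, alpha = 5, 0.95, 0.1          # goal is the right-most state, reward 1 only there
Q = {(s, a): 0.0 for s in range(n_states) for a in (-1, 1)}

for episode in range(5000):
    s = 0
    while s < n_states - 1:
        a = random.choice((-1, 1))             # fully random exploration
        s_next = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s_next == n_states - 1 else 0.0
        bootstrap = 0.0 if s_next == n_states - 1 else max(Q[(s_next, b)] for b in (-1, 1))
        Q[(s, a)] += alpha * (r + gamma * bootstrap - Q[(s, a)])   # Q-learning update
        s = s_next

# Q[(s, 1)] approaches gamma**(distance to goal - 1): larger values (longer arrows) nearer the goal.
print([round(Q[(s, 1)], 3) for s in range(n_states - 1)])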

",36821,,,,,2/20/2021 14:37,,,,3,,,,CC BY-SA 4.0 26489,2,,25750,2/20/2021 17:50,,0,,"

You can also pose your problem as co-reference resolution. Try Huggingface's neuralcoref library.

",37203,,37203,,2/20/2021 18:20,2/20/2021 18:20,,,,1,,,,CC BY-SA 4.0 26490,1,,,2/20/2021 18:04,,1,183,"

I'm attempting to write an AI for the game Slay the Spire. One of the tasks it will need to do is navigate the map. The map is a directed acyclic graph with the same start and end node.

Each node (including the end node) will have 2 values associated with it: Expected value and death probability. The goal of the AI should be to maximize the expected value without dying. So far, none of this seems tough.

The twist here is that the death probability changes over time. Some nodes (elites) have high volatility: The death probability may go up or down drastically as we move through the map. The pathfinding algorithm would need to consider adjacent alternatives. A path that allows me to switch to a low-volatile node if things are getting tough is important.

As an example, the following map has two major routes. Both routes have an elite (the creature with horns), but the one on the right is a forced elite, while the one on the left can be skipped if death probability is too high. The ability for me to be flexible mid-route is an attractive feature, and one I'd like to take into account when pathfinding somehow.

How can my path-finding algorithm take into account adjacent paths/path flexibility? Is this even a job for pathfinding at all?

",44830,,44830,,2/20/2021 22:04,2/20/2021 22:04,How to pathfind with volatile probabilities (Slay the spire),,0,3,,,,CC BY-SA 4.0 26492,1,26557,,2/21/2021 2:30,,4,66,"

Is there a document with a list of conjectures or research problems regarding reinforcement learning like the Millennium Prize Problems?

",41666,,2444,,2/21/2021 10:31,2/25/2021 11:10,Is there a document with a list of conjectures or research problems regarding reinforcement learning (like the Millennium Prize Problems)?,,1,0,,,,CC BY-SA 4.0 26496,2,,22695,2/21/2021 17:43,,2,,"

Both Belief-MDPs and Bayes-Adaptive MDPs (BAMDPs) are special cases of POMDPs and their state space is augmented with a belief over their unobserved/hidden variables.

In a belief-MDP, the hidden variables can change over the course of an episode (e.g., both the position and the uncertainty in the position of the robot can vary during an episode).

In a BAMDP, the hidden variables are usually the attributes of the transition/reward function and are held constant during an episode (e.g., in a robot locomotion task, ground friction or load are dynamics attributes and the current goal location is a reward-function attribute; although the variables defining these attributes are unknown to the agent, which has to infer them, their actual values remain unchanged during an episode).

References:

  1. M Ghavamzadeh, S Mannor, J Pineau, Aviv Tamar - Bayesian reinforcement learning: A survey. Foundations and Trends in Machine Learning, 8(5-6):359-483, 2015.

  2. L Zintgraf, K Shiarlis, M Igl, S Schulze, Y Gal, K Hofmann, and S Whiteson. Varibad: A very good method for bayes-adaptive deep rl via meta-learning. arXiv preprint arXiv:1910.08348, 2019.

",44852,,2444,,2/21/2021 23:00,2/21/2021 23:00,,,,0,,,,CC BY-SA 4.0 26497,1,26499,,2/21/2021 18:56,,5,561,"

I am currently trying to learn reinforcement learning and I started with the basic gridworld application. I tried Q-learning with the following parameters:

  • Learning rate = 0.1
  • Discount factor = 0.95
  • Exploration rate = 0.1
  • Default reward = 0
  • The final reward (for reaching the trophy) = 1

After 500 episodes I got the following results:

How would I compute the optimal state-action value, for example, for state 2, where the agent is standing, and action south?

My intuition was to use the following update rule of the $q$ function:

$$Q[s, a] = Q[s, a] + \alpha (r + \gamma \max_{a'}Q[s', a'] - Q[s, a])$$

But I am not sure of it. The math doesn't add up for me (when using the update rule).

I am also wondering whether I should use the backup diagram to find the optimal state-action q value by propagating the reward (gained from reaching the trophy) back to the state in question.

For reference, this is where I learned about the backup diagram.

",37540,,2444,,2/23/2021 11:16,2/23/2021 11:16,How would I compute the optimal state-action value for a certain state and action?,,1,5,,,,CC BY-SA 4.0 26499,2,,26497,2/21/2021 23:09,,4,,"

It seems that you are getting confused between the definition of a Q-value and the update rule used to obtain these Q-values.

Remember that to simply obtain an optimal Q-value for a given state-action pair we can evaluate

$$Q(s, a) = r + \gamma \max_{a'} Q(s', a')\;;$$

where $s'$ is the state we transitioned into (note that this only holds for the optimal Q-values in a deterministic environment; with stochastic transitions, we would have to introduce an expectation over next states).

Now, this assumes that we have been given/obtained the optimal Q-values. To obtain them, we have to use the update rule (or any other learning algorithm) that you mentioned in your question.
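As a tiny numerical illustration (my own numbers, with $\gamma = 0.95$ and a reward of 1 only on the final transition into the goal):

gamma = 0.95
q_next_to_trophy = 1.0                               # r = 1, terminal afterwards
q_two_steps_away = 0.0 + gamma * q_next_to_trophy    # r = 0, bootstrap from the successor
q_three_steps_away = 0.0 + gamma * q_two_steps_away
print(q_two_steps_away, q_three_steps_away)          # 0.95, 0.9025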

",36821,,,,,2/21/2021 23:09,,,,0,,,,CC BY-SA 4.0 26500,2,,5318,2/22/2021 2:59,,2,,"

Sorry if this is a bad use of an answer to add a comment, but since my reputation is not high enough, this is the only way for me to leave a comment on the OP's question.

I think some of the answers misunderstood the OP's intention.

Overfitting is used as a means to test the complexity of the model - if a model cannot overfit a small dataset, then it's likely not able to generalize well.

It's not that the OP misunderstood the meaning of overfitting.

For instance, I think this discussion is relevant: https://stats.stackexchange.com/questions/492165/what-to-do-when-a-neural-network-cannot-overfit-one-training-sample

",44859,,,,,2/22/2021 2:59,,,,0,,,,CC BY-SA 4.0 26501,1,,,2/22/2021 3:54,,2,45,"

I am currently replicating the results of this paper. In this paper, they have not mentioned how they evaluate the results, as no ground truth is available for comparison. The same goes for other papers on this topic (unsupervised depth estimation). So I am very much confused about how to evaluate the model for overfitting or underfitting in this setting.

",41103,,,,,2/22/2021 3:54,What are the metrics to be used for unsupervised monocular depth estimation in computer vision?,,0,1,,,,CC BY-SA 4.0 26502,1,,,2/22/2021 4:43,,9,837,"

I once read somewhere that there is a range of learning rates within which learning is optimal in almost all cases, but I can't find any literature about it. All I could find is the following graph from the paper: The need for small learning rates on large problems

In the context of neural networks trained with gradient descent, is there a range of the learning rate, which should be used to reduce the training time and get a good performance in almost all problems?

",44529,,2444,,2/22/2021 14:53,11/28/2022 15:23,Is there an ideal range of learning rate which always gives a good result almost in all problems?,,2,0,,,,CC BY-SA 4.0 26504,2,,26502,2/22/2021 9:46,,4,,"

The visualisation can be found in The need for small learning rates on large problems. This paper by D. Randall Wilson and Tony R. Martinez from 2001 investigates the role of learning rates in gradient descent algorithms.

In general, different algorithms assign different meanings to the same term 'learning rate'. For example, the learning rate in a gradient descent algorithm is not comparable to the learning rate in a tabular reinforcement learning algorithm such as Q-learning. This means that a single 'best' learning rate does not exist, considering the different concepts denoted by the term 'learning rate' in different algorithms.

Additionally, the learning rate is typically considered a part of the learning algorithm. The no free lunch theorem of machine learning tells us that no particular learning algorithm performs best across tasks. Because the learning rate is part of the solution, no particular learning rate is 'best' across tasks either.

In practice, you should set the learning rate sufficiently low to not 'overshoot' the optimal solution, which would be evidenced by oscillations in the error (no convergence). But you should also set it high enough to obtain reasonable performance given the amount of available training time.

Finding the learning rate that gives you the right trade-off typically requires a combination of domain knowledge and experimentation on the training set.

",44752,,44752,,2/22/2021 13:18,2/22/2021 13:18,,,,3,,,,CC BY-SA 4.0 26505,1,,,2/22/2021 11:15,,3,230,"

I'm following Andrew Ng's course for Machine Learning and I just don't quite understand the following.

Using PCA to speed up learning

Using PCA to reduce the number of features, thus lowering the chances for overfitting

Looking at these two separately, they make perfect sense. But practically speaking, how am I going to know that, when my intention is to speed up learning, I'm not letting the model overfit?

Do I have to find a middle ground between these two scenarios when applying PCA? If so, how exactly can I do that?

",42004,,2444,,2/24/2021 11:59,2/24/2021 11:59,"When using PCA for dimensionality reduction of the feature vectors to speed up learning, how do I know that I'm not letting the model overfit?",,1,2,,,,CC BY-SA 4.0 26506,1,26512,,2/22/2021 12:32,,2,162,"

From what I understand, experience replay works by storing tuples of $(s, a, r, s')$ to be sampled for training. I understand why we store $s$, $r$ and $s'$. However, I do not understand the need for storing the action $a$.

As I recall, the reward $r$ and the next state $s'$ are both used to calculate the target values. We can then compare these target values to the output we get when we do a forward-pass using state $s$ It seems to me that the stored action $a$ is not required for this process to work; or am I missing something? Why would we store the action $a$ if it isn't used in the training process itself?

Please, forgive me if this question has been answered before. I looked, but was unable to find anything other than generic explanations as to what experience replay is and why we do it.

",44873,,2444,,2/22/2021 23:53,2/22/2021 23:53,What is the purpose of storing the action $a$ within an experience tuple?,,2,0,,,,CC BY-SA 4.0 26508,2,,26505,2/22/2021 12:44,,2,,"

I'm not sure if I understood your question correctly, but here's my take anyway.

So, PCA is a technique that you can apply to data to reduce the number of features. In return, (i) this can speed up training, as there are fewer features to do computations with, and (ii) it can prevent overfitting, as you lose some information about your data.

To detect overfitting, you usually monitor the validation and training losses during the training. If your training loss decreases, but your validation loss stays constant or increases, it's likely that your model is overfitting on the training data. In practice, this means your model generalizes worse and you can observe this by measuring the test accuracy.

All in all, you can apply PCA, train a new model, and measure your model's test accuracy to see if PCA has successfully prevented overfitting. In case it didn't, you can re-train with other regularization techniques such as weight decay and so on.
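As a minimal sketch of that workflow with scikit-learn (using a toy dataset as a stand-in for your own features):

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

pca = PCA(n_components=0.95)          # keep enough components for 95% of the variance
X_train_p = pca.fit_transform(X_train)
X_val_p = pca.transform(X_val)        # note: fit PCA on the training set only

model = LogisticRegression(max_iter=1000).fit(X_train_p, y_train)
print("train acc:", model.score(X_train_p, y_train))
print("val acc:", model.score(X_val_p, y_val))   # compare these to diagnose overfitting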

Regarding the slides you added in your edit:

Basically, what the slides claim is that PCA could be a bad way to prevent overfitting compared to using standard regularization methods. To actually see whether this is the case, the standard way would be to measure your model's performance on a validation dataset. So, if PCA throws away lots of information, and hence causes your model to underfit, your validation accuracy should be poor relative to using standard regularization techniques.

",32621,,32621,,2/22/2021 16:45,2/22/2021 16:45,,,,0,,,,CC BY-SA 4.0 26509,2,,26506,2/22/2021 13:06,,2,,"

The goals of experience replay as first proposed by Lin (1992) and more recently applied successfully in the DQN algorithm by Mnih et al. (2013) are to break temporal correlations of updates and to prevent forgetting of experiences that might be useful later on.

To meet these goals, the replay buffer should store tuples required in the learning step.

Most works that use experience replay, including those mentioned before, learn (to approximate) the Q function, i.e. $Q(s,a)$. Clearly, this function depends on the sampled action $a$.

If $a$ is not used at all in the training process then it would not need to be stored in the replay buffer. In such a scenario, however, the problems that motivate the replay buffer may not be present in the first place.

",44752,,,,,2/22/2021 13:06,,,,0,,,,CC BY-SA 4.0 26511,1,,,2/22/2021 14:41,,2,30,"

I was wondering about SciBERT's QA abilities using SQuAD. I have a scarce textual dataset consisting of less than 100 files where doctors are discussing cancer in dialogues. I want to add it to SciBERT to see if the QA abilities will improve in the cancer disease domain.

After concatenating them into one large file, which will be our vocab, I then clean the file (lower-casing all characters, whitespace splitting, character filtering, punctuation and stopword filtering, removing short tokens, etc.), which leaves me with a list of 3000 unique tokens.

If I wanted to add these tokens, do I just do scibert_tokenizer.add_tokens(myList) where myList is the 3k tokens?

I can confirm that more tokens are added by doing print(len(scibert_tokenizer)), and I can see that the tokenization does change, such as corona and ##virus changing to coronavirus and ##virus.
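For reference, this is roughly what I am doing (the model name is the public SciBERT checkpoint and the token list is just a stand-in for my ~3000 tokens; as far as I understand, the embedding matrix also has to be resized to cover the new token ids):

from transformers import AutoTokenizer, AutoModel

scibert_tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")

myList = ["coronavirus", "chemoembolization"]       # placeholder for my ~3000 cleaned tokens
num_added = scibert_tokenizer.add_tokens(myList)
print(len(scibert_tokenizer), num_added)

# new token ids need embedding rows before any further training/fine-tuning
model.resize_token_embeddings(len(scibert_tokenizer))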

Does the model need to be trained from scratch again?

",44877,,,,,4/18/2021 12:24,Adding corpus to BERT for QA,,0,1,,,,CC BY-SA 4.0 26512,2,,26506,2/22/2021 14:58,,3,,"

We need to store the action $a$ as it tells us the action that we took in the state that we are backing up.

Suppose we are in state $s$ and we take action $a$, then we will receive a reward $r$ and next state $s'$. The goal of RL, and in particular DQN (I mention DQN as it is the first algorithm that comes to mind when I think of a replay buffer but it is of course not the only algorithm to make use of one), is that we are trying to learn optimal state-action value functions $Q(s, a)$. We thus want our value function to be able to predict $y = r + \gamma \max_{a'}Q(s', a')$, i.e. given $s$ and $a$ we want to be able to predict $y$. As you can see, we need to know which action we took in state $s$ so that we can train our value function to approximate $y$, and the value function clearly depends on $a$, hence we need to also store $a$ in our experience tuple.
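To make this concrete, here is a minimal sketch (placeholder names and toy assumptions of my own, not a full DQN implementation) showing where the stored action enters the loss:

import random
from collections import deque
import torch

buffer = deque(maxlen=100_000)   # stores (s, a, r, s', done) tuples

def store(s, a, r, s_next, done):
    buffer.append((s, a, r, s_next, done))

def dqn_loss(q_net, target_net, batch_size, gamma=0.99):
    batch = random.sample(buffer, batch_size)
    # assuming states are plain lists/arrays of floats and actions are integer indices
    s, a, r, s_next, done = map(torch.tensor, zip(*batch))
    # the stored action a selects which Q(s, a) estimate gets pushed towards the target
    q_sa = q_net(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r.float() + gamma * (1 - done.float()) * target_net(s_next.float()).max(1).values
    return torch.nn.functional.mse_loss(q_sa, target)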

",36821,,,,,2/22/2021 14:58,,,,3,,,,CC BY-SA 4.0 26513,2,,26382,2/22/2021 15:33,,1,,"

I'm assuming according to your question that you have a fixed batch, or in other words, there's no possibility of further exploration in your settings. If this assumption is true, you have what's known as Batch/Offline Reinforcement Learning.

First, let's check some aspects of this: in offline RL, since there's no possibility of further exploration, your dataset must contain a broad range of situations to lead your system to learn a robust policy. Imagine that you're working with robots and training them using a fixed batch extracted from real-world interactions. If this batch was collected from a system that works pretty well, it might not contain samples of non-desirable situations. Once this system is deployed in the real world and faces a non-desirable situation, it may not "know" which action should be taken to "escape" from these non-desirable states, since it never saw them during the learning phase. To summarize, your dataset should be large and representative.

Now let's assume that your dataset is large and representative. Then another problem arises: a fixed batch shifts the distribution of samples, creating a kind of bias that can be amplified over many epochs.

Finally, is your dataset composed of expert demonstrations? In other words, do you have the best action for each state in the samples of your dataset?

The good thing about a fixed batch with real-world data is that it can boost the learning process, since it contains the true dynamics of the real system. But be careful regarding the distribution of samples and the training hyperparameters. In summary, there is no single recipe for all problems; it depends on your case. I'll leave two references that can be useful and on which this answer is based:

References:

Nair, A., Dalal, M., Gupta, A., & Levine, S. (2020). Accelerating online reinforcement learning with offline datasets. arXiv preprint arXiv:2006.09359.

Fujimoto, Scott, David Meger, and Doina Precup. "Off-policy deep reinforcement learning without exploration." International Conference on Machine Learning. PMLR, 2019.

",42682,,,,,2/22/2021 15:33,,,,0,,,,CC BY-SA 4.0 26514,2,,26502,2/22/2021 15:53,,6,,"

The 2015 article Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith gives some good suggestions for finding an ideal range for the learning rate.

The paper's primary focus is the benefit of using a learning rate schedule that varies learning rate cyclically between some lower and upper bound, instead of trying to choose a single fixed learning rate value. For this to work, you still need to select good lower and upper bounds, and Smith suggests training the model for a few epochs while increasing the learning rate between a large range of values. At first, the learning rate will be too small to make any progress at all. As the learning rate increases, eventually, the loss will begin to decrease, but, at some point, the learning rate will get too large, and the loss will stop decreasing and even begin increasing. Your ideal range consists of the learning rate values where the loss was decreasing steeply. After finding your range, you can reset the weights and biases on your model and restart training using whatever learning rate schedule you plan to use for training.

Here is a concrete example from one of my experiments:

In this case, I start my learning rate search at 1e-09 and plan to end with a learning rate of 0.99 (although I am actually able to stop sooner than that). Your experiment may require different search bounds, but you could always start with that and adjust things as needed. At first, the loss plot is flat, and then it begins to decrease, but is too gradual. At the first red line, loss starts to decrease sharply, and once it reaches the second red line, the plot has begun to level off, so I can end my search. For this particular experiment, my ideal learning rate range had a minimum of 4.01e-4 and a maximum of 2.58e-2.

For more information, I suggest reading this Keras Learning Rate Finder post, which contains more information on how the process works and a tutorial for how to program it using Keras and Tensorflow.
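For illustration, here is a rough sketch of how such a range test could be set up as a Keras callback; this is my own minimal version (the model and data are placeholders), not the code from the post linked above:

import tensorflow as tf

class LRRangeTest(tf.keras.callbacks.Callback):
    def __init__(self, start_lr=1e-9, end_lr=0.99, num_batches=1000):
        super().__init__()
        self.start_lr = start_lr
        self.factor = (end_lr / start_lr) ** (1.0 / num_batches)
        self.lrs, self.losses = [], []

    def on_train_begin(self, logs=None):
        tf.keras.backend.set_value(self.model.optimizer.lr, self.start_lr)

    def on_train_batch_end(self, batch, logs=None):
        lr = float(tf.keras.backend.get_value(self.model.optimizer.lr))
        self.lrs.append(lr)
        self.losses.append(logs["loss"])   # may be an averaged loss in newer TF versions (see Edit below)
        tf.keras.backend.set_value(self.model.optimizer.lr, lr * self.factor)

# finder = LRRangeTest()
# model.fit(x_train, y_train, epochs=1, callbacks=[finder])
# Plot finder.losses against finder.lrs and pick the steeply decreasing region.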


Edit: I recently learned that in newer versions of Tensorflow / Keras, the on_train_batch_end callback now returns an aggregated average loss instead of the raw loss for the given batch, which gives poor results for the learning rate finder. See this Github Issue for more information and the workaround I am currently using.

",44882,,44882,,11/28/2022 15:23,11/28/2022 15:23,,,,0,,,,CC BY-SA 4.0 26516,1,,,2/22/2021 18:12,,1,42,"

I'm working in an environment where, once an action $a \in A$ is performed, this action selection must be held for a while. To clarify this, assume a horizon of length $h$ and the set of actions $A = \{a_{1}, a_{2}, a_{3}\}$. Assume now that, to converge to an optimal solution, the horizon $h$ must be split into 3 parts, each action being applied in exactly one of them, because if one of these actions is selected twice during an episode, the reward penalizes it so severely that no policy can get a better return than one that selects each action just once. Thus, the length of each "sub-horizon" $h_{i}$ satisfies $h_{i} > 0$ and $h_{i} < h - 2$.

Now, consider the DQN setting with the naive $\epsilon$-greedy exploration policy, which selects an action at each time step as follows:

import random

n = random.random()                        # uniform value in [0, 1)
if n < epsilon:
    a = random.choice(list(A))             # explore: pick a random action from A
else:
    a = max(A, key=lambda act: Q(S, act))  # exploit: greedy action w.r.t. Q

For my particular problem (and probably many with similar settings), this way of selecting an action seems counterproductive and makes it hard to converge to the optimal solution, shifting the distribution of samples far away from it. Does that make sense?

Actually, I believe that sticky actions could be quite beneficial for many similar settings, beyond the scope of their original approach.

",42682,,,,,2/22/2021 18:12,$\epsilon$-greedy policy in environments where actions are performed in a long term. Does it has influence?,,0,0,,,,CC BY-SA 4.0 26517,2,,17923,2/22/2021 20:30,,1,,"

Meta-Reinforcement Learning can refer to a broad range of ideas. Also, different algorithms are SOTA under different evaluation metrics (sample efficiency, agent performance, adaptation speed on a new task, etc)

Assuming that you are referring to the problem of quickly learning/adapting to a new task by training an agent on a distribution of related tasks, the following are some popular algorithms

  • PEARL [Rakelly et al., 2019]
  • VariBAD [Zintgraf et al., 2020]
  • Meta-Q-Learning [Fakoor et al., 2020]

References:

  1. K Rakelly, A Xhou, D Quillen, C Finn, S Levine - Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, ICML 2019.
  2. L Zintgraf et al., - VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning, ICLR 2020.
  3. R Fakoor, P Chaudhari, S Soatto, A J Smola - Meta-Q-Learning, ICLR 2020.
",44852,,,,,2/22/2021 20:30,,,,0,,,,CC BY-SA 4.0 26518,1,,,2/22/2021 20:55,,1,75,"

Is there an RL approach/algorithm that would be suited for the following kind of problem?

  • There is a continuous action space with an action value $A_{a,t}$ for each action dimension $a$.
  • The objective function is a non-linear function of the satisfaction factors $S_{s}$ for each satisfaction dimension $s$ and some other random & independent factors. This objective function can be known to the agent.
  • Each satisfaction factor depends on an independent variable $\Delta^S_{s,t}$ and the effect $\delta^S_{a,s,t}$ of each action: $S_{s,t}=\Delta^S_{s,t} +\sum_a A_{a,t} * \delta^S_{a,s,t}$.
  • Each action can further have an effect $\delta^R_{a,r,t}$ on the inventory factors $I_{r,t}$ for each resource dimension $r$, with inventories being kept between time-steps and a factor $\Delta^R_{r,t}$ that is added or removed from the inventory at each step independent of the actions: $I_{r,t+1}=I_{r,t}+\Delta^R_{r,t} + \sum_a A_{a,t} * \delta^R_{a,r,t}$
  • The agent is constrained by each of these resources (i.e. the inventory has to remain positive).
  • The agent should be able to deal both with $\delta$ and $\Delta$ factors that are visible (states) and invisible (have to be learned).
  • A trained agent should be able to know how to adapt to changes of the $\delta$ and $\Delta$ factors, as well as the introduction or removal of activity dimensions.

EDIT: I have adapted the problem description after some feedback.

",44896,,44896,,2/26/2021 11:19,2/26/2021 11:19,Which RL algorithm would be suitable for this multi-dimensional and continuous action space?,,0,0,,,,CC BY-SA 4.0 26519,1,,,2/22/2021 21:24,,1,22,"

Let's say that I want to create a program capable of detecting lamps on some pictures. Those pictures can be, for instance, of a room, a street, etc.

I would like to know if the following is possible:

  • Create a program that is trained using both pictures of lamps (so a dataset of images) and numerical/categorical data of lamps (so that would be a .csv file which contains features ranging from type, height, etc.). The idea is to combine both types of data in order to detect whether another unseen picture contains a lamp.

I'm not entirely sure if the resulting algorithm will be performant, but I would like to try.

If what I described above is possible, could you please point me in the right direction as to which literature to consult? My research has led me nowhere.

",44899,,,,,2/22/2021 21:24,Using numerical/categorical data and image data to detect objects,,0,2,,,,CC BY-SA 4.0 26520,2,,12490,2/22/2021 22:38,,2,,"

Can't see that this has been mentioned yet - there are ways to generate text non-sequentially using a non-autoregressive transformer, where you produce the entire response to the context at once. This typically produces worse accuracy scores because there are interdependencies within the text being produced - a model translating "thank you" could say "vielen danke" or "danke schön", but whereas an autoregressive model knows which word to say next based on what it has already decoded, a non-autoregressive model cannot, so it could also produce "danke danke" or "vielen schön". There is some research that suggests you can close the accuracy gap though: https://arxiv.org/abs/2012.15833.

",44902,,,,,2/22/2021 22:38,,,,3,,,,CC BY-SA 4.0 26524,2,,8427,2/23/2021 13:46,,3,,"

An ontology at its most abstract is a model of the world. It describes concepts that exist in the world and how those concepts are related.

Ontologies are similar to taxonomies. A taxonomy is a tree-like hierarchy that organizes concepts in increasing levels of specificity. What an ontology adds is a second type of link between those concepts that explains how they are connected. So a taxonomy could say $isA(DOG, ANIMAL)$ ("a dog is an animal"). An ontology may also describe something like: $chase(DOG, CAT)$ ("dogs chase cats").

Why use an ontology? It can be used to reason about the world. I subscribe to a very particular kind of language-centric ontology that isn't worth addressing here. Instead, the Semantic Web project (Tim Berners-Lee) is probably closer to what would interest you. Semantic Web uses a type of description logic, which is outside my realm of expertise. But there are tools for processing these kinds of DL and gaining "understanding" from them. To work with this kind of ontology you'll want to be familiar with the idea of Resource Description Framework triples.
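Just to make the triple idea concrete, here is a tiny sketch using the rdflib Python library (my own toy example; rdflib is just one convenient way to write such triples):

from rdflib import Graph, Namespace, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.Dog, RDFS.subClassOf, EX.Animal))   # taxonomy link: "a dog is an animal"
g.add((EX.Dog, EX.chases, EX.Cat))            # ontology relation: "dogs chase cats"

for s, p, o in g:
    print(s, p, o)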

",19703,,,,,2/23/2021 13:46,,,,0,,,,CC BY-SA 4.0 26525,1,,,2/23/2021 14:21,,2,167,"

From the original paper and this post we have that batch normalization backpropagation can be formulated as

I'm interested in the derivative of the previous layer outputs $x_i=\sigma(w X_i+b)$ with respect to $b$, where $\{X_i\in\mathbb{R}, i=1,\dots,m\}$ is a network input batch and $\sigma$ is some activation function with weight $w$ and bias $b$.

I'm using Adam optimizer so I average the gradients over the batch to get the gradient $\theta=\frac{1}{m}\sum_{i=1}^m\frac{\partial l}{\partial x_i}\frac{\partial x_i}{\partial b}$. Further, $\frac{\partial x_i}{\partial b}=\frac{\partial}{\partial b}\sigma(wX_i+b)=\sigma'(wX_i+b)$.

I am using the ReLU activation function and all my inputs are positive, i.e. $X_i>0 \ \forall i$, as well as $w>0$ and $b=0$. That is, I get $\frac{\partial x_i}{\partial b} = 1\ \forall i$. That means that my gradient $\theta=\frac{1}{m}\sum_{i=1}^m\frac{\partial l}{\partial x_i}$ is just the average of all derivatives of the loss function with respect to $x_i$. But the sum of all the $\frac{\partial l}{\partial x_i}$ is zero, which can be seen from the derivations of $\frac{\partial l}{\partial x_i}$ and $\frac{\partial l}{\partial \mu_B}$.

This means that my bias change would always be zero, which makes no sense to me. Also, I created a neural network in Keras just for validation, and all my gradients match except the bias gradients, which are not always zero in Keras.

Does one of you know where my mistake is in the derivation? Thanks for the help.

",44919,,44919,,2/24/2021 13:36,2/24/2021 13:36,Bias gradient of layer before batch normalization always zero,,0,1,,,,CC BY-SA 4.0 26527,2,,26318,2/23/2021 17:01,,1,,"

I'd suggest BERT for this. It is essentially a word-embedding model that uses the local context to determine the appropriate embedding for each word. This means it would assign "bat" a different embedding in a sentence containing "hit the ball" vs. a sentence containing "flies and eats bugs". On top of that, Google has released a number of pre-trained versions of BERT, which can be used directly without additional training (depending on your task of course). BERT as a service is great if you just want embeddings. The Python transformers library makes it exceedingly simple to incorporate BERT into your task-specific model.
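For instance, here is a minimal sketch of pulling contextual embeddings out of a pre-trained BERT with the transformers library (the model name and the two example sentences are just illustrative choices of mine):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

for sent in ["He hit the ball with a bat.", "A bat flies at night and eats bugs."]:
    inputs = tokenizer(sent, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)
    # out.last_hidden_state has shape (1, num_tokens, hidden_size); pick the "bat" token
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    bat_vec = out.last_hidden_state[0, tokens.index("bat")]
    print(sent, bat_vec[:3])   # different vectors for "bat" in the two contexts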

",37972,,,,,2/23/2021 17:01,,,,2,,,,CC BY-SA 4.0 26528,1,,,2/23/2021 19:18,,0,45,"

I am researching different AI approaches and was curious what approach would be useful in my scenario.

Assume you are tiling a room. The tiles, and the room itself, can be any shape. In this room you could encounter N obstacles, such as a wall or a built-in. The goal is to lay out the tiles, taking into account the cuts around the obstacles mentioned above, along with the shape and dimensions of the destination room. This would have to account for the shape and measurements of each tile being placed in the room.

Which AI approach would prove useful in this scenario?

",44925,,,,,2/23/2021 19:18,AI approach for layout mapping,,0,3,,,,CC BY-SA 4.0 26529,1,,,2/23/2021 22:58,,0,99,"

To me, tying weights in an autoencoder makes sense if we think of the autoencoder as doing PCA. In what situation would it make sense not to tie the weights? If we don't tie the weights, would it not try to learn something that is PCA anyway, or rather something that might not be as optimal as PCA?

Also, if the weights are not tied, it doesn't make sense to me that the autoencoder is invertible, i.e. that the decoder is looking for an inverse operation, because it's a mapping between spaces of different dimensions, which should not be invertible.

So, if the weights are not tied, then why do we expect the decoder to learn anything meaningful, i.e. neither PCA nor an inverse operation?

",44931,,,,,3/15/2022 1:01,Why Autoencoder Weights Are Not Always Tied,,1,1,,,,CC BY-SA 4.0 26530,1,26549,,2/24/2021 4:02,,1,141,"

In this tensorflow article, the comments in the code say that MHA should output with one of the dimensions being the sequence length of the query/key. However, that means that the second MHA in the decoder layer should output something with one of the dimensions being the input sequence length, but clearly it should actually be the output sequence length! From all that I have read on transformers, it seems that the output of the left side of the SHA should be a matrix with dimensions q_seq_length x q_seq_length, and the output of the right side of the SHA should be v_seq_length x d_model. These matrices can't even be multiplied when using the second MHA in the decoder to incorporate the encoder output! Please help; I would appreciate a clear-cut explanation. Thanks.

",44936,,40671,,2/24/2021 14:41,2/24/2021 23:45,How is the transformers' output matrix size arrived at?,,1,0,,,,CC BY-SA 4.0 26531,1,26534,,2/24/2021 6:42,,2,823,"

Most machine learning models, such as multilayer perceptrons, require a fixed-length input and output, but generative (pre-trained) transformers can produce sentences or full articles of variable length. How is this possible?

",44940,,2444,,2/26/2021 10:46,12/17/2021 0:15,How are certain machine learning models able to produce variable-length outputs given variable-length inputs?,,1,0,,,,CC BY-SA 4.0 26532,1,26547,,2/24/2021 7:06,,1,226,"

I have this question that I'm kinda stuck on.

It's a game scenario in which we set up an expectimax tree. In the game, you have 3 dice with sides 1-4 that you roll at the beginning. Then, depending on the roll, the player can choose one of the dice to reroll or not reroll anything. Points are assigned like so:

  • 10 points if there's 2 of a kind
  • 15 if there's 3 of a kind
  • 7 if there's a series like 1-2-3 or 2-3-4
  • Otherwise, or if the sum is higher than the rewards from above, the score = sum of the rolls

For additional context, this is an example expectimax tree I came up with, for the case that the player rolled a 1,2,4 and is considering rerolling or not:

Now let's introduce a new agent -- a robot that's supposed to help the human player. We assume:

  • the human player chooses any action with uniform probability regardless of the initial roll
  • there's a robot that, given a configuration of dice and the human's desired action, actually implements the action with probability 1-p and overrides it with a "no reroll" order with probability p>0. It has no effect if the human's decision is already to not reroll.

For that scenario, I came up with this expectimax tree:

Now for the part I'm actually stuck on -- let's define A, B, C, and D as the expected rewards of performing actions "reroll die 1", "reroll die 2", "reroll die 3", and "no reroll". How do we find $R_H$, the expected reward for the human acting without the robot's help, and $R_{AH}$, the expected reward if the robot helps? We can't use p in the expression; we only have access to A, B, C, D and we're supposed to write it in the form $X + Yp$.

*EDIT: I asked again and the question was worded weirdly. They said we should definitely use p. What was meant by not using p is that X and Y themselves can't contain p, but Y will be multiplied by p in the final simplified form.

For $R_{H}$ I think the answer should be $\frac{A + B + C + D}{4}$ because of the uniform distribution over A-D.

I'm supposing that $R_{AH}$ would be $\frac{(A + B + C)(1-p) + D + Dp}{4}$? Because the robot doesn't override with probability $1-p$, but it can only override A-C, and does so with probability $p$.

I think something feels slightly wrong about my answer but I'm not sure what.

",44537,,44537,,2/24/2021 17:05,2/24/2021 20:51,Find the expected reward in an expectimax-based dice rolling game?,,1,4,,,,CC BY-SA 4.0 26534,2,,26531,2/24/2021 8:12,,3,,"

In short, repetition with feedback.

You are correct that machine learning (ML) models such as neural networks work with fixed dimensions for input and output. There are a few different ways to work around this when desired input and output are more variable. The most common approaches are:

  • Padding: Give the ML model capacity to cope with the largest expected dimensions then pad inputs and filter outputs as necessary to match logical requirements. For example, this might be used for an image classifier where the input image varies in size and shape.

  • Recurrent models: Add an internal state to the ML model and use it to pass data along with each input, in order to work with sequences of identical, related inputs or outputs. This is a preferred architecture for natural language processing (NLP) tasks, where LSTMs, GRUs, and transformer networks are common choices.

A recurrent model relies on the fact that each input and output is the same kind of thing, at a different point in the sequence. The internal state of the model is used to combine information between points in the sequence so that for example a word in position three in the input has an impact on the choice of the word at position seven of the output.

Generative recurrent models often use their own output (or a sample based on the probabilities expressed in the output) as the next step's input.
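As a minimal sketch of that feedback loop (all names here, such as model, start_id and end_id, are placeholders for a trained next-token model and its special tokens):

import torch

def generate(model, start_id, end_id, max_len=100):
    tokens = [start_id]
    for _ in range(max_len):
        logits = model(torch.tensor([tokens]))        # assumed shape: (1, seq_len, vocab)
        probs = torch.softmax(logits[0, -1], dim=-1)
        next_id = torch.multinomial(probs, 1).item()  # sample from the predicted distribution
        tokens.append(next_id)                        # feed the output back in as input
        if next_id == end_id:                         # the model itself decides when to stop
            break
    return tokens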

It is well worth reading this blog for an introduction and some examples: The Unreasonable Effectiveness of Recurrent Neural Networks by Andrej Karpathy

",1847,,18758,,12/17/2021 0:15,12/17/2021 0:15,,,,0,,,,CC BY-SA 4.0 26535,2,,26529,2/24/2021 8:14,,1,,"

We can expect the decoder to learn something meaningful even without tying the weights, because the loss function is calculated between the input and the reconstructed output, and training will minimize that loss. The untied autoencoder's decoder will learn to transform the embedding back into the input. Not tying the weights gives more representational power, but it also increases the chance of overfitting.

If you are running into overfitting while training your autoencoder, tying your weights is an option: it is a regularization method. Not tying them is fine if you are not getting any overfitting, or if you want to address overfitting with an alternative method.

",37203,,,,,2/24/2021 8:14,,,,0,,,,CC BY-SA 4.0 26536,1,,,2/24/2021 10:20,,1,49,"

CONTEXT

I am trying to build a regression model that finds the optimal parameters for a given input. The data I am using are point clouds, with N points and 3 coordinates (x,y,z) each. Each point cloud is divided into neighborhoods of constant size and, during inference, a batch of these neighborhoods are fed into the model which outputs a set of parameters. The parameters represent a family of surfaces and the goal is to find parameters such that the surface fits the neighborhood of points as tightly as possible (in the least squares sense).

THE ISSUE

The problem is that each type of parameter must fall into a specific range, otherwise it has no meaning. For example, the first two parameters must lie inside [0.1, 1.9], the next three must be strictly positive, etc. I have tried restraining the outputs by adding a scaled sigmoid activation or simply clamping the output to the range that I want. However, it seems that such hacks result in saturation: the model outputs negative values and all the outputs become 0 after clamping.

I can't imagine I'm the first one to encounter such a problem, but I haven't been able to find a way to solve it. Is there a de facto way of dealing with this situation?

P.S. I am not including details of the model architecture to keep this question general interest, but I will include them upon request, if it helps.

",44945,,,,,2/24/2021 10:20,How to restrain a model's outputs to a certain range without affecting its representative capacity?,,0,4,,,,CC BY-SA 4.0 26538,1,,,2/24/2021 14:41,,0,67,"

I am trying to do text classification for Arabic data. The problem is that there is no labeled Arabic dataset for this data. My question is then: is it possible to do classification without a training dataset? If yes, what methods can I use?

",44632,,2444,,2/24/2021 15:35,7/19/2022 20:05,Can I do topic classification of Arabic text (software requirements) without a training dataset?,,1,2,,,,CC BY-SA 4.0 26541,1,,,2/24/2021 15:15,,1,41,"

Let's say we have a WGAN where the generator and critic have 8 layers and 5 million parameters each. I know that the greater the number of training samples the better, but is there a way to know the minimum number of training examples needed? Does it depend on the size of the network or the distribution of the training set? How can I estimate it?

",44954,,2444,,2/24/2021 15:29,2/24/2021 15:49,How can I estimate the minimum number of training samples needed to get interesting results with WGAN?,,0,2,,,,CC BY-SA 4.0 26542,2,,26538,2/24/2021 15:40,,1,,"

Yes, it is possible to do it. This can be posed as unsupervised text classification. You can look at TF-IDF, neural networks (e.g., BERT), etc. to create embeddings of your text, and then use clustering techniques such as k-means or nearest-neighbour grouping for classification.
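As a minimal sketch of that pipeline with scikit-learn (the document list and the number of clusters are placeholders you would replace with your own data and expected number of topics):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

documents = ["...", "...", "..."]        # your Arabic requirement texts

vectorizer = TfidfVectorizer()           # you may want an Arabic-aware tokenizer/stop-word list
X = vectorizer.fit_transform(documents)

kmeans = KMeans(n_clusters=5, random_state=0)   # choose n_clusters to match your expected topics
labels = kmeans.fit_predict(X)                  # cluster id per document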

",37203,,,,,2/24/2021 15:40,,,,4,,,,CC BY-SA 4.0 26544,1,26545,,2/24/2021 16:28,,1,45,"

By scaling features, we can prevent one feature from dominating the decisions of a model. For example, say heights (cm), and age (years) are two features in my data. Since range of heights is larger than of years, a trained model could weight importance of heights much more than years. This could result in a poor model in return.

However, say that all of my features are binary, they take a value of either 0 or 1. In such a case, does feature scaling still have any benefits?

",32621,,,,,2/24/2021 19:10,Does feature scaling have any benefits if all features are on the same scale?,,1,0,,,,CC BY-SA 4.0 26545,2,,26544,2/24/2021 19:10,,2,,"

If all your features are binary, then you don't need to apply normalization to them, since their values are already on the same scale.

",37203,,,,,2/24/2021 19:10,,,,0,,,,CC BY-SA 4.0 26546,1,26765,,2/24/2021 19:28,,5,218,"

In a video lecture on the development of neural networks and the history of deep learning (you can start from minute 13), the lecturer (Yann LeCun) said that the development of neural networks stalled until the 80s because people were using the wrong neurons (which were binary, hence discontinuous), and that this was due to the slowness of multiplying floating-point numbers, which made the use of backpropagation really difficult.

He said, I quote, "If you have continuous neurons, you need to multiply the activation of a neuron by a weight to get a contribution to the weighted sum."

But the statement stays true even with binary (or any other discontinuous activation function) neurons. Am I wrong? (At least, as long as you're in a hidden layer, the output of your neuron will be multiplied by a weight, I guess.) The same professor said that the perceptron and ADALINE relied on weighted sums, so they were computing multiplications anyway.

I don't know what I am missing here, and I hope someone will enlighten me.

",44965,,2444,,2/26/2021 13:58,3/17/2021 9:48,Why did the developement of neural networks stop between 50s and 80s?,,1,9,,,,CC BY-SA 4.0 26547,2,,26532,2/24/2021 20:51,,0,,"

If I understand correctly,

  • With probability $p$ the robot selects no re-roll (action $D$).
  • With probability $1-p$ the human uniformly select an action between $A,B,C,D$.

Thus the expected reward if the robot helps is

$$R_{AH}= p\cdot D+(1-p)\frac{A+B+C+D}{4} = \frac{(A+B+C)(1-p)+D+3pD}{4}$$

",43351,,,,,2/24/2021 20:51,,,,0,,,,CC BY-SA 4.0 26548,1,26550,,2/24/2021 22:09,,3,295,"

Is a stochastic environment necessarily also non-stationary? To elaborate, consider a two-state environment ($s_1$ and $s_2$), with two actions $a_1$ and $a_2$. In $s_1$, taking action $a_1$ has a certain probability $p_1$ of transitioning you into $s_2$, and a probability $1-p_1$ of keeping you in $s_1$. There is also a similar probability for taking $a_2$ in $s_1$, and taking either action in $s_2$. Let's also say that there is a reward $r$ given only when a transition occurs from either state, and 0 otherwise. This is a stochastic environment. But isn't this non-stationary in one sense and stationary in another? I think it is stationary because the expected return from taking a particular action in a particular state converges to a constant value. But it is non-stationary in the sense that the reward obtained from taking a certain action in a given state may change at a given time. Which is really the case?

",44927,,2444,,2/24/2021 23:32,2/25/2021 1:18,Does stochasticity of an environment necessarily mean non-stationarity in MDPs?,,1,1,,,,CC BY-SA 4.0 26549,2,,26530,2/24/2021 23:45,,0,,"

Aha, I understand now! In the paper, the diagram for SHA has its inputs in the order Q, K, V, while the diagram for MHA has its inputs in the order V, K, Q! In the grand diagram for the entire net, I thought the arrows for the inputs were in the order for SHA, but they are actually in the order for MHA. It is a bit confusing that, when changing to the MHA diagram, they decided to swap the order of the inputs in the paper, but it all makes sense now. output_seq x d_q multiplies d_q x input_seq to get output_seq x input_seq, which finally multiplies with input_seq x d_v to get output_seq x d_v, which then concatenates to output_seq x hd_v, so after the final linear layer it becomes output_seq x d_model, the standard in the decoder layer. Hopefully, anyone else who mixed up the orders from the diagram will understand now.
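As a quick sanity check of those shapes (toy sizes of my own, nothing from the paper):

import numpy as np

out_len, in_len, d_q, d_v, h, d_model = 7, 11, 8, 8, 4, 32

Q = np.zeros((out_len, d_q))        # queries come from the decoder (output sequence)
K = np.zeros((in_len, d_q))         # keys come from the encoder (input sequence)
V = np.zeros((in_len, d_v))         # values come from the encoder

scores = Q @ K.T                    # (out_len, in_len)
head = scores @ V                   # (out_len, d_v)
concat = np.concatenate([head] * h, axis=-1)    # (out_len, h * d_v), one copy per head here
W_o = np.zeros((h * d_v, d_model))
out = concat @ W_o                  # (out_len, d_model)
print(scores.shape, head.shape, out.shape)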

",44936,,,,,2/24/2021 23:45,,,,0,,,,CC BY-SA 4.0 26550,2,,26548,2/25/2021 0:54,,5,,"

Is a stochastic environment necessarily also non-stationary?

No.

A stochastic environment (i.e. an MDP with a transition model $p(s', r \mid s, a)$) can be stationary (i.e. $p$ does not change over time) or non-stationary ($p$ changes over time). Similarly, a deterministic environment, i.e. the probabilities are $1$ or $0$, can also be either stationary or not. To emphasize that an MDP may be non-stationary, you could write $p$ as a function of time, i.e. $p_t$ (you also do the same thing for the reward function if it's separate from the transition function).

The same idea applies to a stochastic/deterministic policy, which can either be stationary or not.

A non-stationary environment may lead to a non-stationary policy (or may require you to relearn a model of the environment, if you need to learn a model of the environment) [1]. However, note that a stochastic environment (i.e. an MDP) does not necessarily imply a stochastic policy (actually, under some conditions, stationary and stochastic MDPs are known to have a deterministic optimal policy [1]).

In general, if something (e.g. environment, policy, value function or reward function) is non-stationary, it means that it changes over time. This can either be a function or a probability distribution. So, a probability distribution (the stochastic part of an MDP) can change or not over time. If it changes over time, then it makes the MDP non-stationary.

But it is non-stationary in the sense that the reward obtained from taking a certain action in a given state may change at a given time

Informally, you could say that the empirical reward obtained is non-stationary because it changes over time, due to the stochasticity of the reward function, behaviour policy, etc., but the dynamics (transition function and reward function) would still be fixed, so the environment would still be stationary. So, there's a difference between the environment and the experience that you collected so far (with some behaviour policy).

",2444,,2444,,2/25/2021 1:18,2/25/2021 1:18,,,,1,,,,CC BY-SA 4.0 26551,2,,19944,2/25/2021 4:10,,-2,,"
  1. Every program halts, or continues

  2. Given N steps, enough time and space (*), halting within N steps is provable

3 (from 2). Halting always has proof: run program until halt; count number of steps; verify claim of halting (within number of steps)

  1. (Program is safe) implies (program is proved safe)

  2. (Safety proved) implies (public understands the proof)

  3. (Program is safe) implies (program always halts (safely) or continues (safely)) [from 1, 6] and ((public) understands (safety proof)) [from 7]

  4. (Public not understand claimed safety proof this moment) implies (don't run program this moment) [common sense]

(*) This universe is finite. Some numbers too big to be computed in this universe

Have you seen perfect software?

Have you seen software make mistakes?

Why trust life-death decisions to software?

Why trust government decisions to software?

Why trust business decisions to software?

If scientists may not recognize AI is intelligent, what if you don't recognize something beyond AI in front of you?

(After enough doubt, all you can do is TRUST)

",27283,,,,,2/25/2021 4:10,,,,1,,,,CC BY-SA 4.0 26552,1,,,2/25/2021 9:19,,0,37,"

I'm trying to understand the deconv referenced in the paper Visualizing and Understanding Convolutional Networks

The paper states (section 2, p. 3):

the deconvnet uses transposed versions of the same filters, but applied to the rectified maps

Is it possible to implement this step in a short code example? Given an unpooled, rectified map; how would the transposed filter be applied against it?

I did try looking at the referenced paper Adaptive Deconvolutional Networks for Mid and High Level Feature Learning. However, I'm not wrapping my head around its explanations too well; and it references a third paper with regard to its work on "layers of convolutional sparse coding" (deconvolution [M. Zeiler, D. Krishnan, G. Taylor, and R. Fergus. Deconvolutional networks. In CVPR, 2010]), but this 2010 paper appears to require access to download.

",44975,,44975,,3/1/2021 8:08,3/1/2021 8:08,How to implement the deconv which is used in “Visualizing and Understanding Convolutional Networks”,,0,2,,,,CC BY-SA 4.0 26553,1,,,2/25/2021 9:44,,2,1429,"

When solving a classification problem with neural nets, be it text or images, how does the number of classes affect the model size and amount of data needed to train?

Are there any soft or hard limitations where the number of outputs starts to stall learning?

Do you know about any analysis of how the number of classes scales the model?

Does the optimal size increase proportionally with the number of outputs? Does it increase at all? If it does increase, is the relationship linear or exponential?

",38671,,2444,,2/25/2021 10:21,12/21/2021 4:52,"In classification, how does the number of classes affect the model size and amount of data needed to train?",,2,0,,,,CC BY-SA 4.0 26554,2,,26553,2/25/2021 10:24,,1,,"

The most obvious way more classes increase the network size is the output layer, but I don't believe there is a rule of thumb for the size of the entire network.

As I understand it, there is no clear answer to how big a network needs to be to achieve a certain performance with regard to the number of layers compared to the number of classes. This is a very active research field; just as an example, compare the size of EfficientNet to other state-of-the-art models when it was introduced and you can see the size difference.

Regarding the data needed, in The Deep Learning Book (which is a few years old now) they state as a general that with the models available at that time, you needed ~5000 examples per label for acceptable performance, while to exceed human performance (their words) you would need about 10 million labeled examples.

",44980,,32410,,12/21/2021 4:06,12/21/2021 4:06,,,,3,,,,CC BY-SA 4.0 26555,2,,26553,2/25/2021 10:27,,1,,"

Model/network design has multiple guidelines; a basic one is: the solving capacity of the network should be larger than the possibility space of the problem to be solved.

The solving capacity (learning capacity) of a network (usually a dense one) can be estimated as the product of the number of neurons in all layers, for example:

Input shape: 10 values.
Network shape: [layer 1: 30 units, layer 2: 20 units, output: 1 unit] should have a learning capacity of $30 \times 20 \times 1 = 600$, i.e. it can learn roughly at most 600 different inputs (each input holds 10 values).

Another consideration is the separation lines: even when the inputs of the problem to be learnt are unlimited, if the 2 classes (just as an example) are always separated on the 2 sides of a line without mixing, then a single neuron can solve the problem.

One neuron can make 1 separation line; 1 layer makes a poly-line whose segments are given by the neurons in that layer; another layer makes another poly-line.

Thus, more classes mean more separations to be done, and more classes also mean larger input variety, so a lot of training data is needed and the model size needs to be large.

",2844,,2844,,12/21/2021 4:52,12/21/2021 4:52,,,,2,,,,CC BY-SA 4.0 26556,1,,,2/25/2021 11:04,,1,784,"

I came across this question set. It asks following question:

Let’s revisit our bug friends from assignment 2. To recap, you control one or more insects in a rectangular maze-like environment with dimensions M × N , as shown in the figures below. At each time step, an insect can move North, East, South, or West (but not diagonally) into an adjacent square if that square is currently free, or the insect may stay in its current location. Squares may be blocked by walls (as denoted by the black squares), but the map is known.
For the following questions, you should answer for a general instance of the problem, not simply for the example maps shown.

(a) You now control a single flea as shown in the maze above, which must reach a designated target location X. However, in addition to moving along the maze as usual, your flea can jump on top of the walls. When on a wall, the flea can walk along the top of the wall as it would when in the maze. It can also jump off of the wall, back into the maze. Jumping onto the wall has a cost of 2, while all other actions (including jumping back into the maze) have a cost of 1. Note that the flea can only jump onto walls that are in adjacent squares (either north, south, west, or east of the flea).

i. Give a minimal state representation for the above search problem.
Sol. The location of the flea as an (x, y) coordinate.
ii. Give the size of the state space for this search problem.
Sol. M ∗ N

(b) You now control a pair of long lost bug friends. You know the maze, but you do not have any information about which square each bug starts in. You want to help the bugs reunite. You must pose a search problem whose solution is an all-purpose sequence of actions such that, after executing those actions, both bugs will be on the same square, regardless of their initial positions. Any square will do, as the bugs have no goal in mind other than to see each other once again. Both bugs execute the actions mindlessly and do not know whether their moves succeed; if they use an action which would move them in a blocked direction, they will stay where they are. Unlike the flea in the previous question, bugs cannot jump onto walls. Both bugs can move in each time step. Every time step that passes has a cost of one.

i. Give a minimal state representation for the above search problem.
Sol. A list of boolean variables, one for each position in the maze, indicating whether the position could contain a bug. You don’t keep track of each bug separately because you don’t know where each one starts; therefore, you need the same set of actions for each bug to ensure that they meet.
ii. Give the size of the state space for this search problem.
Sol. $2^{MN}$

I don't get why the (a).i. uses $(x,y)$ coordinates whereas (b).i. uses a boolean list. I guess they can be used interchangeably, right? And correspondingly the answers to ii will change.

Update

I now understand the following:

For the single-flea maze, the representation $(x,y)$ will have an $M\times N$ state space, whereas the boolean list will have a $2^{M\times N}$ state space. For the two-bug maze, the representation $(x_1,y_1,x_2,y_2)$ will have an $(M\times N)^2$ state space, whereas the boolean list will have a $2^{M\times N}$ state space. I understand why we prefer the $(x,y)$ representation for the single-flea maze, since $M\times N < 2^{M\times N}$. But for the two-bug maze, I am not able to understand why we prefer the boolean-list representation (and not the $(x_1,y_1,x_2,y_2)$ representation), since $(M\times N)^2<2^{M\times N}$.

",40640,,40640,,2/25/2021 12:07,11/17/2022 16:00,Determining minimal state representation for maze game,,1,0,,,,CC BY-SA 4.0 26557,2,,26492,2/25/2021 11:04,,2,,"

It's not really an exhaustive list, but Hutter maintains a small list of problems (click on the bullet point "Universal AI Book" here) related to AIXI (a reinforcement learning agent), some of which have already been solved. The money awards are in the range of 50-500 euros, so they are not as financially important as the Millennium Prize Problems.

",2444,,2444,,2/25/2021 11:10,2/25/2021 11:10,,,,0,,,,CC BY-SA 4.0 26558,1,,,2/25/2021 11:17,,0,72,"

I would like to use a deep learning approach to detect people in videos. I have found some freely accessible implementations like Human Segmentation with Pytorch or BodyPix / DeepLab / Pixellib with Tensorflow. They all work well, but with many of them it happens that, for example, half of a hand is not detected, or, if a person is sitting in the picture, only the legs and the head are detected. Are there other freely accessible approaches to detect people, or is that the state of the art? I had imagined such problems had been solved, but I don't know much about it. Thanks for your answers.

",44982,,,,,11/18/2022 2:07,What are the state-of-the-art Person-Detektion / Human-Segmentation?,,1,1,,,,CC BY-SA 4.0 26559,1,26571,,2/25/2021 11:18,,1,97,"

I have a task for which I have to do image segmentation (cancer detection on MRIs). If possible, I would also like to include clinical data (i.e. numeric/categorical data which comes in the form of a table with features such as age, gender, ...).

I know that for classification purposes, it's possible to create a model that uses both numeric data and image data (as mentioned in the paper by Huang et al.: "Multimodal fusion with deep neural networks for leveraging CT imaging and electronic health record: a case-study in pulmonary embolism detection").

The problem I have is that, for image segmentation tasks, it doesn't really make sense to me as to how to use both types of data.

In the above-mentioned paper, they create one model with only the image data and another with only the numeric data, and then they fuse them (there are multiple strategies for fusing them together). For classification tasks, it makes sense. However, for my task, it does not make sense to have a model which only uses the clinical data for image segmentation and that's where I get confused.

How do you think I should proceed with my task? Is it even possible to mix both types of data for image segmentation tasks?

",44898,,,,,2/25/2021 19:45,How to use mixed data for image segmentation?,,1,0,,,,CC BY-SA 4.0 26560,2,,21597,2/25/2021 11:21,,1,,"

Minimax is the base algorithm, and alpha-beta pruning is an optimisation that you can apply to minimax to make it more efficient: it returns the same value while exploring fewer nodes.
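For illustration, here is a toy sketch (my own, not from any particular library) where the game tree is just nested Python lists with numeric leaf values; both functions return the same value, but alpha-beta skips branches that cannot change the result:

import math

def minimax(node, maximizing):
    # node is either a leaf value (a number) or a list of child nodes
    if not isinstance(node, list):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, alpha, beta, maximizing):
    # same result as minimax, but prunes branches that cannot affect it
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # the minimizer above will never allow this line
                break
        return value
    else:
        value = math.inf
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if beta <= alpha:  # the maximizer above will never allow this line
                break
        return value

tree = [[3, 5], [6, [9, 1]], [1, 2]]
assert minimax(tree, True) == alphabeta(tree, -math.inf, math.inf, True) == 6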

",44983,,,,,2/25/2021 11:21,,,,0,,,,CC BY-SA 4.0 26561,2,,26556,2/25/2021 11:38,,0,,"

I don't get why (a).i. uses $(x,y)$ coordinates whereas (b).i. uses a boolean list. I guess they can be used interchangeably, right?

No these cannot be used interchangeably. The situations really are different enough that completely different state representations are necessary.

In the case of (a) you know the current bug's location, and nothing else about the problem changes over time. So any state representation that captures a single bug's location is valid. The (x,y) coordinates are a natural choice given the framing of the question, but also you could enumerate all the possible grid positions for the bug, giving the same size of state space.

In case of (b) you do not know where either bug is, but you also do not need to distinguish between the bugs or track them separately (this is because your goal state does not rely on the identity of either bug). Therefore the state representation has to capture where any bug could possibly be.

The proposed solution is simple, but you could also do it by tracking the possible locations of each bug. That could work with a set of $(x_1,y_1,x_2,y_2)$ tuples, initialised to all $17^2$ possible position combinations that the two bugs could start in. The state space in that case is large (a state is a set of such tuples, so it is not just $17^2$ states, but all reachable subsets of the $17^2$ combinations) and difficult to assess, because many of those subsets are not reachable. Also note that this alternative proposal does differentiate between the bugs - you could potentially improve upon that by removing start states that simply swap the bug locations, although this is still not as efficient as the binary representation.

Technically the proposed solution for (b) also works the same for any number of bugs that are spread out in the maze and want to all be in the same location at the end.
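As a small illustration of this belief-style representation (my own sketch, using a set of possible coordinates, which is equivalent to the boolean list), applying one action updates every square a bug could occupy, and blocked moves leave it in place:

def step(possible, action, walls, width, height):
    # possible: set of (x, y) squares that could still contain a bug
    dx, dy = {"N": (0, -1), "S": (0, 1), "W": (-1, 0), "E": (1, 0)}[action]
    new_possible = set()
    for (x, y) in possible:
        nx, ny = x + dx, y + dy
        if (nx, ny) in walls or not (0 <= nx < width and 0 <= ny < height):
            new_possible.add((x, y))   # blocked move: the bug stays put
        else:
            new_possible.add((nx, ny))
    return new_possible

def is_goal(possible):
    # all remaining possibilities have collapsed onto a single square
    return len(possible) == 1

possible = {(0, 0), (1, 0), (2, 0)}    # a tiny 3x1 corridor: a bug could be anywhere
possible = step(possible, "W", walls=set(), width=3, height=1)
possible = step(possible, "W", walls=set(), width=3, height=1)
print(possible)                        # {(0, 0)}: any bug is now pinned to the left end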

",1847,,,,,2/25/2021 11:38,,,,5,,,,CC BY-SA 4.0 26563,2,,26479,2/25/2021 13:04,,1,,"

There is an inherent assumption in heuristic search that the heuristic function points you in the right direction.

A*'s performance largely depends on how good the heuristic function is. Two nice properties for the heuristic function are admissibility and consistency. If the latter holds, I can't think of any case where BFS would outperform A*. However, this property doesn't hold in every case.

If you select a misleading heuristic function (i.e. a function that points you in the wrong direction) then BFS should outperform A*.


I drew a dumb example to illustrate my point. You want to get from the Initial State (IS) to the Goal State (GS). The shade of green indicates the value of the heuristic (the greener the better).

In the third case, where you have a misleading heuristic, A* would tend to explore areas near the top right first, then it would go down towards the GS. This case is actually worse than having no heuristic and using a BFS.

",26652,,,,,2/25/2021 13:04,,,,1,,,,CC BY-SA 4.0 26564,1,26565,,2/25/2021 14:52,,2,654,"

I have a stochastic environment and I'm implementing a Q-table for the learning that happens on the environment. The code is shown below. In short, there are ten states (0, 1, 2,...,9), and three actions: 0, 1, and 2. The action 0 does nothing, action 1 subtracts 1 with a probability of 0.7, and action 2 adds 1 with a probability of 0.7. We get a reward of 1 when we are in state 5, and 0 otherwise.

import numpy as np
import matplotlib.pyplot as plt

def reward(s_dash):
    if s_dash == 5:
        return 1
    else: 
        return 0
states = range(10)
Q = np.zeros((len(states),3))
Q_previous = np.zeros((len(states),3))
episodes = 2000
trials = 100
alpha = 0.1
decay = 0.995
gamma = 0.9
ls_av = []
ls = []
for episode in range(episodes):
    print(episode)
    s = np.random.choice(states)
    eps = 1
    for i in range(trials):
        eps *= decay
        p = np.random.random()
        if p < eps:
            a = np.random.randint(0,3)
        else:
            a = np.argmax(Q[s, :])

        if a == 0:
            s_dash = s
        elif a == 1:
            if p >= 0.7:
                s_dash = max(s-1, 0)
            else:
                s_dash = s
        else:
            if p >= 0.7:
                s_dash = min(s+1, 9)
            else:
                s_dash = s
        r = reward(s_dash)
        Q[s][a] = (1-alpha)*Q[s][a] + alpha*(r + gamma*np.max(Q[s_dash]))
        s = s_dash
    ls.append(np.max(abs(Q - Q_previous)))
    Q_previous = np.copy(Q)
print(Q)
for i in range(10):
    print(i, np.argmax(Q[i, :]))
plt.plot(ls)
plt.show()

When I plot the absolute value of the maximum change in the Q-table at the end of each episode, I get the following, which indicates that the Q-table is constantly being updated.

However, I see that when I print out the action with the max Q-value for each state, it shows what I expect to be the optimal policy. For each state, the best action is given as shown below:

(0, 2)
(1, 2)
(2, 2)
(3, 2)
(4, 2)
(5, 0)
(6, 1)
(7, 1)
(8, 1)
(9, 1)

My question is: why do I not have convergence in the Q-table? If I had a stochastic environment for which I didn't know beforehand what the optimal policy is, how would I be able to judge when to stop training if the Q-table isn't converging?

",44927,,,,,2/25/2021 16:22,What is a good convergence criterion for Q-learning in a stochastic environment?,,1,0,,,,CC BY-SA 4.0 26565,2,,26564,2/25/2021 16:22,,3,,"

To obtain guarantees of convergence for Q table values, you need to decay the learning rate, $\alpha$, at a suitable rate. Too fast and convergence will be to inaccurate values. Too slow and convergence never happens.

To stick with the theoretical guarantees, the learning rate decay should generally follow the rule that $\sum_t \alpha_t = \infty$ but $\sum_t \alpha_t^2 < \infty$ - an example of a learning rate schedule that satisfies this is $\alpha_t = \frac{1}{t}$, although in practice that specific choice could lead to very slow convergence.

Choosing a good starting $\alpha$ and a good decay schedule will depend on the problem, and you may want to base it on experience with similar problems. However, it is not that common to need to guarantee convergence of action values in value-based reinforcement learning. In control problems you often care more about finding an optimal policy than about perfectly accurate action values. Further to that, many interesting control problems are too complex to solve perfectly in tabular form, so you expect some approximation. It seems relatively common just to pick a learning rate for the problem and stick with it.

If you make your learning rate lower, the Q table will converge to more stable Q values, but possibly at the expense of taking longer to converge on the optimal policy.
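As an illustration only (the per-state-action 1/N schedule below is my own example, not something prescribed by the question), a decaying learning rate can be implemented by counting visits to each state-action pair:

import numpy as np

n_states, n_actions, gamma = 10, 3, 0.9
Q = np.zeros((n_states, n_actions))
N = np.zeros((n_states, n_actions))    # visit counts per state-action pair

def q_update(s, a, r, s_next):
    N[s, a] += 1
    alpha = 1.0 / N[s, a]              # satisfies the conditions above, but decays quickly
    target = r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])

Slower decays such as 1 / N[s, a] ** 0.6 also satisfy the two conditions and often work better in practice.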

",1847,,,,,2/25/2021 16:22,,,,2,,,,CC BY-SA 4.0 26566,1,,,2/25/2021 17:08,,1,245,"

I've been working on a lot of simple resnet18 binary classifiers lately and I've started to notice that the probability distributions are often skewed one way or the other. This figure shows one such example. The red and blue color code the negative and positive ground truths respectively. And the bottom axis is the output prediction of the binary classifier (sigmoid activated output neuron). Notice how the red is more bunched towards 0, but the blue has quite some spread.

At first I began to reason this to myself with arguments like "well the positive clues in the image have a small footprint, so they are hard to find, therefore the model should be unsure about positives more of the time."

Later I found oppositely skewed distributions and tried to say "well the positive clues in the image have a small footprint, so it might be easy to confuse some other things for the positive clues, therefore the model should be unsure about negatives more of the time"

You can see where I'm going with this. It took me training up quite a few models like this in a short amount of time to realise I was kidding myself. Even the exact same architecture and similar dataset may produce a different skew over different training runs. And if you think about it, negative probability is just the complement of positive probability, so any argument you make in favor of one over the other can be easily reversed.

So what's influencing this skew? Why is there a skew at all? If there's a skew, is it because of something "real", or is it just random?

These all may seem like philosophical questions, but they have great practical significance. Because that skew basically tells me where I should put my decision threshold in production level inference!

",16871,,,,,2/25/2021 19:42,Why are CNN binary classifier output probability distributions often skewed?,,1,0,,,,CC BY-SA 4.0 26567,1,,,2/25/2021 18:03,,2,61,"

For an RL problem on a continuous state space, the states could be discretized into buckets and these buckets used in implementing the Q-table. I see that is what is done here. However, according to van Hasselt from his book, this discretization changes the problem into a partially observable MDP (POMDP), and this is understandable. And I know POMDPs require special treatment from the vanilla Q-learning we are used to (observation space, belief states, etc).

But my question is: is there a specific technical reason why a discretized-state problem (which is now POMDP) should be solved using POMDP algorithms, instead of plainly constructing a vanilla Q-table using the discretized states (i.e. the buckets from discretization)? In other words, is there a disadvantage in not using POMDP algorithms to tackle the discretized-state problem?

",44927,,44927,,2/25/2021 18:34,2/25/2021 18:34,Are there any known disadvantages of implementing vanilla Q-learning on a discretized-state space environment?,,0,3,,,,CC BY-SA 4.0 26568,1,26569,,2/25/2021 18:51,,0,59,"

I am working with tabular data that is similar to the below:

| Name | Phone Number | ISO3 Country | Amount | Email | ... | ... | Outcome | Possible Reason |
|---|---|---|---|---|---|---|---|---|
| Leona Sunfurry | (555)-555-5555 | United States | 58.96 | leo_sun@gmail.com | ... | ... | 0 | Not ISO3 country |
| Diana Moonglory | (333)-555-5555 | USA | 8.32 | di.moon@gmail.com | ... | ... | 1 | |
| Fiora Quik | (111)-555-5555 | FRA | 0.35 | null | ... | ... | 1 | |
| Darius Guy | 12345678901234 | CAN | 555.01 | null | ... | ... | 0 | Too many digits in phone |
| LULU | (333)-555-5555 | CAN | 0.00 | null | ... | ... | 0 | Odd name format |
| Eve K. | (111)-555-5555 | FRA | 69.25 | e.k@gmail.com | ... | ... | 1 | |
| Lucian Light | (999)-555-5555 | ENG | 65.00 | null | ... | ... | 1 | |
| Lux D. | (333)-555-5555 | USA | 11.64 | test@test.com | ... | ... | 1 | |
| Jarvin Crown | (333)-555-5555 | USA | 1357.13 | j4@gmail.com | ... | ... | 0 | Unknown reason |

The table contains information about users. Some of the fields are user-generated while others are generated by the program (like device location, amount, etc.). When this data is collected, it is sent to third parties (we will say a bank). Sometimes the bank rejects the data and it is not good for our users. The rejection could have happened because the user did not input the data correctly or the banks did not like how a field is formatted despite the data being correct and acceptable to other banks.

So we want to find the fields that are causing the most errors and how to fix the issue.

Does it make sense to do pattern recognition on the values to find the reason why the row was rejected? It would need to be an alpha-numeric type of algorithm, it seems.

We know the outcomes from the bank which is labeled as Outcome. Although we have labeled data, it still feels like we need an unsupervised learning algorithm because we do not have labels on why the rows of data were rejected.

Does anyone know what type of algorithm would be best? Any feedback would be appreciated!

",44992,,2444,,2/25/2021 18:54,2/25/2021 19:37,Which algorithm can be used for extracting text patterns in tabular data?,,1,0,,,,CC BY-SA 4.0 26569,2,,26568,2/25/2021 19:37,,1,,"

You should first segregate the rejected samples. You can then use string matching, or something more complex (like creating embeddings and taking the L2 distance between them), between the field names you have and the rejection comment. Whichever field gets the highest score, you increase the rejection count for that field. In the end, you will have a tally of which field is your biggest enemy.
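As a rough sketch of that tallying idea (the field names, comments, and the use of difflib here are my own illustration, not a tested recipe):

from difflib import SequenceMatcher
from collections import Counter

fields = ["name", "phone number", "iso3 country", "amount", "email"]

def blamed_field(reason):
    # pick the field whose name is most similar to the free-text rejection comment
    return max(fields, key=lambda f: SequenceMatcher(None, f, reason.lower()).ratio())

rejections = ["Not ISO3 country", "Too many digits in phone", "Odd name format"]
tally = Counter(blamed_field(r) for r in rejections)
print(tally.most_common())   # which field gets blamed most often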

You can create some rules which prevent injection of wrong data (like your password should be 7 characters long or something along these lines) or post-process your entries to match a uniform format.

",37203,,,,,2/25/2021 19:37,,,,2,,,,CC BY-SA 4.0 26570,2,,26566,2/25/2021 19:42,,1,,"

Yes, due to this issue, you should use temperature scaling after training your model. It will calibrate your probabilities and you will start to get the same kind of distributions. Here is a good article on it, along with an implementation.
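For reference, a minimal sketch of temperature scaling for a binary classifier (my own illustration, not the linked article's code): fit a single scalar T on held-out validation logits and labels, then divide the test logits by T before the sigmoid.

import torch

def fit_temperature(val_logits, val_labels):
    # val_logits: (N,) raw outputs before the sigmoid; val_labels: (N,) in {0, 1}
    log_t = torch.zeros(1, requires_grad=True)
    optimizer = torch.optim.LBFGS([log_t], lr=0.1, max_iter=100)

    def closure():
        optimizer.zero_grad()
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            val_logits / log_t.exp(), val_labels.float())
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()

# T = fit_temperature(val_logits, val_labels)
# calibrated_probs = torch.sigmoid(test_logits / T)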

",37203,,,,,2/25/2021 19:42,,,,2,,,,CC BY-SA 4.0 26571,2,,26559,2/25/2021 19:45,,1,,"

You can try doing image segmentation the traditional way, just using the image data. If you want to use the non-image data, you can introduce classification as another (auxiliary) task for your network. It will provide some regularization to your model, and it is one way you can still use the non-image data whilst still producing image outputs.

",37203,,,,,2/25/2021 19:45,,,,0,,,,CC BY-SA 4.0 26572,2,,26558,2/25/2021 19:55,,0,,"

You can use any state-of-the-art object detector, such as DETR, EfficientDet, YOLO, etc., since they include a 'person' class. But if the density of people in the images is high, you should look at 'Finding Tiny Faces', which is good for that problem (it will help you detect faces only; think group photos). If you want to find the whole person even when they are occluded, it will be difficult (you can then think of segmentation techniques and try to find the people, but it will be more taxing).

",37203,,,,,2/25/2021 19:55,,,,0,,,,CC BY-SA 4.0 26573,1,,,2/25/2021 20:04,,2,37,"

I want to predict two separate y-values (not really logically connected) based on an input sequence of data (values x). Using LSTM cells.

Should I train two models separately or should I just increase the dimension of the last layer to 2 instead of 1 and feed the fitting algorithm y-values as 2D pairs? In other words, will there be a lot of interference or can I expect on average similar results with one or two models?

",44996,,2444,,2/26/2021 10:29,2/26/2021 10:29,"If I want to predict two unrelated values given the same sequence of data points, should I have a model with two outputs or two models?",,0,2,,,,CC BY-SA 4.0 26576,1,,,2/25/2021 23:56,,1,250,"

What are examples of good free books that cover the back-propagation used to train multilayer perceptrons? I've just started to learn about artificial neural networks, so I'm looking for books that cover the theoretical basics of back-propagation.

",44999,,2444,,2/26/2021 0:30,3/28/2021 2:02,What are examples of good free books that cover the back-propagation algorithm?,,1,5,,,,CC BY-SA 4.0 26577,2,,26576,2/26/2021 1:59,,1,,"

Deep Learning by Goodfellow et al. is a good book for anything related to neural networks, and it's freely available online. Backpropagation is covered in Section 6.5.

",32621,,,,,2/26/2021 1:59,,,,2,,,,CC BY-SA 4.0 26578,1,26579,,2/26/2021 4:02,,3,100,"

I'd like to ask how we know that neural networks start by learning small, basic features or "parts" of the data and then use them to build up more complex features as we go through the layers. I've heard this a lot and seen it in videos like this one by 3Blue1Brown on neural networks for digit recognition. It says that in the first layer the neurons learn to detect small edges, and then the neurons of the second layer get to know more complex patterns like circles... But I can't figure out, based on pure maths, how this is possible.

",44965,,2444,,2/26/2021 15:14,2/26/2021 15:14,How do we know that the neurons of an artificial neural network start by learning small features?,,2,0,,,,CC BY-SA 4.0 26579,2,,26578,2/26/2021 4:46,,2,,"

We know this experimentally; you're able to look at what each layer is learning by tweaking various values throughout the network and doing gradient ascent on the input. For more detail, watch this lecture: https://www.youtube.com/watch?v=6wcs6szJWMY&list=PL3FW7Lu3i5JvHM8ljYj-zLfQRF3EO8sYv&index=12. It covers many methods used for understanding exactly what your model is doing at a certain layer, and what features it has learnt.
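As a bare-bones sketch of the gradient-ascent idea (my own illustration; the pretrained ResNet and the class index are arbitrary choices): start from a noise image and repeatedly nudge it to increase the activation of one chosen output unit.

import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
img = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)
target_class = 130                      # any output unit / class index

for _ in range(100):
    optimizer.zero_grad()
    score = model(img)[0, target_class]
    (-score).backward()                 # ascend on the score by descending on its negative
    optimizer.step()
# img now roughly shows what the network "thinks" that class looks like
# (in practice people add regularisation such as blurring or jitter)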

",26726,,,,,2/26/2021 4:46,,,,1,,,,CC BY-SA 4.0 26580,2,,26423,2/26/2021 9:24,,0,,"

My suggestion is to compare the critics rating for the real data and the generated data. Compute the mean of the ratings on real data to get some kind of threshold. If your critic is designed so that a high rating indicates a real sample, then generated data with ratings greater than the mean of the ratings of the real data could be viewed as good enough to fool the critic at that point in training.

This might be a too harsh criteria, but you will at least not accept poorly generated data.

",45009,,,,,2/26/2021 9:24,,,,0,,,,CC BY-SA 4.0 26583,2,,17566,2/26/2021 11:48,,0,,"

TL DR; I do not think the inverse of any reasonable neural network would exist.

Assume that you are using 32-bit floating-point numbers in the MNIST example. The number of distinct values that a 32-bit float can represent is finite (say $x$).

The number of different images you can put into the neural network = $x^{784}$. But the total number of distinct outputs is only $x^{10}$ (As the output is a vector of length $10$).

Hence by the pigeonhole principle there are multiple inputs that correspond to the same output.

Also, on average, there are $x^{784} / x^{10} = x^{774}$ input images for a single output vector.

This means the function is not invertible as it is not one to one (and that by a long shot).

Some people are also discussing pseudo-inverses. I do not know much about pseudo-inverses. Still, for these to work (by that, I mean to be able to produce images of numbers from a given output vector), they should be able to separate the $x^{774}$ images into:

  1. Images that look like real numbers and
  2. Others, which are a total mess of flickering pixels that just happen to produce the given output.

Whatever algorithm solves this problem must hence possess domain knowledge of the problem, probably inherited through the neural network weights.

This seems to be an unexplored region of Neural Networks.

Hope this helps:)

",45011,,45011,,3/16/2021 17:16,3/16/2021 17:16,,,,0,,,,CC BY-SA 4.0 26585,1,,,2/26/2021 14:09,,1,62,"

The input data consists of thousands or millions of 4x1000 matrices. Each row consists of 3 small natural numbers (1000 combinations) and a corresponding real number between 0 and 1.

The output is a 1x1000 vector for each of the matrices. Each output vector value [1 or 0] is not defined by the corresponding 4-argument entry, but by the whole 4x1000 matrix. Each input matrix admits a few valid output 1x1000 vectors that cannot be computed analytically from the 4x1000 matrices.

What would be the options for setting up a deep learning architecture to try to tackle this challenge?

",45016,,2444,,2/26/2021 16:20,2/26/2021 16:20,Setting up a deep learning architecture for multi-dimensional data,,0,1,,,,CC BY-SA 4.0 26586,2,,26578,2/26/2021 15:06,,1,,"

The network architecture is relevant to this question.

Convolutional neural network architectures enforce the building up of features because the neurons in earlier layers have access to a small number of input pixels. Neurons in deeper layers are connected (indirectly) to more and more pixels, so it makes sense that they identify larger and larger features. Lots of the visual examples available online which show, for example, a curve, to a circle, to a part of an animal, to a whole animal, are based on convolutional networks. The beautiful examples from the Harvard lecture in the other answer use convolutional networks.

With that being said, increasing complexity with each layer is true generally, including for dense architectures like the 3Blue1Brown one. It's just that this is a more abstract 'increase in nonlinearity' rather than spatial feature size. Depending on the task the network is learning, earlier layers will be more 'basic', but their neurons might use large areas of the input.

",45018,,,,,2/26/2021 15:06,,,,3,,,,CC BY-SA 4.0 26592,1,,,2/27/2021 5:09,,1,57,"

I have my own environment for the DQN algorithm. In my environment, the state is represented by a list of lists, where each sublist can have a different length. In my case, the length of the outer list is 300 and the length of each of the sublists varies from 0 to 10. What is the best way to use such a state representation as a DQN input if I want to use the PyTorch platform?

# example state with only 4 sublists, where each sublist length can be at most 5
state = [[1,2,3,4], [1,20,20], [10], [20,4,5,6,7]]

I am thinking of using the raw data with zero(s) appended at the end of every sublist to make them all of equal length.

state = [[1,2,3,4,0], [1,20,20,0,0], [10,0,0,0,0], [20,4,5,6,7]]

Then I can convert them to a torch.tensor (and maybe flatten it) and take that as the DQN input. However, I am wondering - is there a better approach?

",45031,,45031,,3/3/2021 16:23,3/3/2021 16:23,What's the best way to take a list of lists as DQN input?,,0,2,,,,CC BY-SA 4.0 26593,1,,,2/27/2021 7:18,,2,41,"

In the ResNet paper, they said that a deeper network should not produce more error than its shallower counterpart, since it can learn the identity map for the extra layers. But empirical results show that plain deep neural networks have a hard time finding the identity map. However, the solver can easily push all the weights towards zero and get an identity map in the case of the residual function ($\mathcal{H}(x) = \mathcal{F}(x)+x$). My question is: why is it harder for the solver to learn identity maps in the case of plain deep nets?

Generally, people say that neural nets are good at pushing weights towards zero. So it is easy for the solver to find identity maps with the residual function. But for an ordinary function ($\mathcal{H}(x) = \mathcal{F}(x)$) it has to learn the identity like any other function. I do not understand the reason behind this logic. Why are neural nets good at learning zero weights?

",28048,,,,,2/27/2021 7:18,Why identity mapping is so hard for deeper neural network as suggested by Resnet paper?,,0,0,,,,CC BY-SA 4.0 26596,2,,23889,2/27/2021 10:54,,4,,"

the mask is needed to prevent the decoder from "peeking ahead" at ground truth during training, when using its Attention mechanism.

Encoder:

  • Both at runtime and during training:

    the encoder always runs in a single iteration, because it processes all embeddings separately, but in parallel. This helps us save time.


Decoder:

  • runtime:

    Here the decoder will run in several non-parallel iterations, generating one "output" embedding at each iteration. Its output can then be used as input at the next iteration.

  • training:

    Here the decoder can do all of it in a single iteration, because it simply receives the "ground truth" from us. Because we know these "truth" embeddings beforehand, they can be stored into a matrix as rows, so that they can then be submitted to the decoder to be processed separately, but in parallel.

    As you can see, during training the decoder's actual predictions are not used to build up the target sequence (as an LSTM would). Instead, what is essentially used here is a standard procedure called "teacher forcing".

    As others said, the mask is needed to prevent the decoder from "peeking ahead" at ground truth during training, when using its Attention mechanism.

As a reminder, in transformer, embeddings are never concatenated during input. Instead, each word flows through encoder and decoder separately, but simultaneously.

Also, notice that the mask contains negative infinities, not zeros. This is due to how the Softmax works in Attention.
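For example, in PyTorch such a mask might look like this (a small sketch of my own, not code from the paper):

import torch

T = 5                                            # target sequence length
mask = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
scores = torch.randn(T, T)                       # raw attention scores Q K^T / sqrt(d)
weights = torch.softmax(scores + mask, dim=-1)
# after the softmax, row i has exactly zero weight on all columns j > i,
# so position i can only attend to itself and earlier positions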

We always first run the encoder, which always takes 1 iteration. The encoder then sits patiently on the side, as the decoder uses its values as needed.

",27042,,27042,,2/27/2021 13:54,2/27/2021 13:54,,,,0,,,,CC BY-SA 4.0 26597,1,,,2/27/2021 11:04,,3,87,"

For RNN's to work efficiently, we vectorize the operations, which results in an input matrix of shape

(m, max_seq_len) 

where m is the number of examples, e.g. sentences, and max_seq_len is the maximum length that a sentence can have. Some examples have smaller lengths than this max_seq_len. A solution is to pad these sentences.

One method to pad the sentences is called "zero-padding". This means that each sequence is padded with zeros. For example, given a vocabulary where each word is related to some index number, we can represent a sentence with length 4,

I am very confused 

by

[23, 455, 234, 90] 

Padding it to achieve a max_seq_len=7, we obtain a sentence represented by:

[23, 455, 234, 90, 0, 0, 0] 

The index 0 is not part of the vocabulary.

Another method to pad is to add a padding character, e.g. <<pad>>, in our sentence:

I am very confused <<pad>> <<pad>> <<pad>>

to achieve the max_seq_len=7. We also add <<pad>> in our vocabulary. Let's say its index is 1000. Then the sentence is represented by

[23, 455, 234, 90, 1000, 1000, 1000]

I have seen both methods used, but why is one used over the other? Are there any advantages or disadvantages comparing zero-padding with character-padding?

",45034,,2444,,2/27/2021 14:29,2/27/2021 14:29,What is the difference between zero-padding and character-padding in Recurrent Neural Networks?,,0,1,,,,CC BY-SA 4.0 26601,1,,,2/27/2021 21:53,,2,181,"

I remember reading about two different types of goals for an intelligence. The gist was that the first type of goal is one that "just is" - it's an end goal for the system. There doesn't need to be any justification for wanting to achieve that goal, since wanting to do that is a fundamental purpose for that system. The second type of goal is a stepping stone, for lack of better words. Those aren't end goals in and of themselves, but they would help the system achieve its primary goals better.

I've forgotten the names for these types of goals and Googling didn't help me much. Is there a standard definition for these different types of goals?

",45048,,2444,,12/12/2021 19:14,12/12/2021 19:14,What are the different types of goals for an AI system called?,,2,1,,,,CC BY-SA 4.0 26602,1,,,2/28/2021 6:30,,2,206,"

Consider a heuristic function $h_2(n) = 3h_1(n)$, where $h_1(n)$ is admissible.

Why are the following statements true?

  1. $A^*$ tree search with $h_2(n)$ will return a path that is at most thrice as long as the optimal path.
  2. $h_2(n) + 1$ is guaranteed to be inadmissible for any $h_1(n)$
",45038,,2444,,2/28/2021 23:50,4/5/2021 15:05,"If $h_1(n)$ is admissible, why does A* tree search with $h_2(n) = 3h_1(n)$ return a path that is at most thrice as long as the optimal path?",,1,1,,,,CC BY-SA 4.0 26603,2,,17410,2/28/2021 10:38,,0,,"

MICE is a Multiple imputation method, it contains three phases:

  • Imputation phase: The missing data are estimated M times from a specific model to obtain M complete and potentially different data sets.
  • Separate analysis phase: The selected analysis is performed separately on each of m = 1, ..., M sets of imputed data to obtain estimates (central tendency and variance).
  • Combined analysis phase: The results obtained are combined according to the rules established by Rubin to obtain a single final estimate.
",32560,,,,,2/28/2021 10:38,,,,0,,,,CC BY-SA 4.0 26607,2,,26602,2/28/2021 16:25,,1,,"

The sketch of the proof for your first question:

For an open node $n$, if $f_1(n) = g(n) + h_1(n)$, then in the same situation using $h_2$ we have $f_2(n) = g(n) + 3 h_1(n)$. Hence, at all times and for any node $n$, $f_2(n) \leqslant 3f_1(n)$. On the other hand, we know that A* with the admissible heuristic function $h_1$ is admissible (from Theorem 2 of Chapter 3 of the book "Heuristics: Intelligent Search Strategies for Computer Problem Solving" by Judea Pearl), i.e., there is always an open node $n^*$ on an optimal path with $f_1(n^*) \leqslant C^*$, where $C^*$ is the optimal cost. Therefore, A* with $h_2$ will return a solution at a node $n'$ with cost $f_2(n') \leqslant 3 f_1(n^*) \leqslant 3C^*$, since $n'$ is selected for expansion before $n^*$ and so $f_2(n') \leqslant f_2(n^*) \leqslant 3 f_1(n^*)$ (see more details of the proof in Theorem 13 of Chapter 13 in the same reference).

You can find more about $h_2$ under the title of $\epsilon$-admissibility, as it has the form $(1 + \epsilon) h_1$, where $h_1$ is an admissible heuristic function. In your case, $\epsilon = 2$.

",4446,,2444,,3/6/2021 14:25,3/6/2021 14:25,,,,0,,,,CC BY-SA 4.0 26608,1,,,2/28/2021 17:27,,1,188,"

I have seen people normalize images by just dividing 255. But why? Why not use mean normalization or Z-score Normalization? I also came across this StackOverflow topic while searching but the answers there were not enough enlightening for me.

",45065,,2444,,3/3/2021 9:28,3/3/2021 9:28,How to normalize images before training?,,1,0,,,,CC BY-SA 4.0 26610,2,,25294,2/28/2021 19:20,,2,,"

It will recover the encrypted inputs.

The algorithm starts with dummy data and dummy labels, and then iteratively optimizes the dummy gradients to be close as to the original. This makes the dummy data close to the real training data:

$$\mathbf{x}^{\prime *}, \mathbf{y}^{\prime *}=\underset{\mathbf{x}^{\prime}, \mathbf{y}^{\prime}}{\arg \min }\left\|\nabla W^{\prime}-\nabla W\right\|^{2}=\underset{\mathbf{x}^{\prime}, \mathbf{y}^{\prime}}{\arg \min }\left\|\frac{\partial \ell\left(F\left(\mathbf{x}^{\prime}, W\right), \mathbf{y}^{\prime}\right)}{\partial W}-\nabla W\right\|^{2}$$

As the distance is minimized, the algorithm restores the original training data; in the case of encrypted training data, you should get an encrypted input (up to failures, when the resulting 'input' isn't close to the original input).

",9233,,9233,,4/22/2021 15:21,4/22/2021 15:21,,,,0,,,,CC BY-SA 4.0 26615,2,,26608,3/1/2021 8:59,,1,,"

tl;dr subtracting the mean and dividing by the standard deviation is theoretically more sound, but is impractical compared to dividing by $255$.


Explanation

As you know neural networks perform better when their input is scaled. The 2 most common ways to perform scaling are:

  • normalization, where you want to scale the image to the $[0, 1]$ range.
  • standardization, where you want to bring the mean to $0$ and the standard deviation to $1$.

Theoretically, I think, the latter has some advantages, however in practice there is not a significant difference, especially if the network has normalization layers.

Images are usually stored in 8-bit color mode, meaning that they take integer values from 0 to 255. Because you know the minimum, and maximum values that each pixel can take, it is extremely easy to normalize an image (i.e. you simply have to divide each pixel value by $255$).

On the other hand, it is much less practical to compute the mean and standard deviation of the pixel intensities of the training set. Keep in mind that in most cases the training set doesn't even fit into memory, and you have to approximate the true mean/std by moving averages (i.e. load a few images, compute their mean/std, close them, load some more, add their mean to the moving average, ...). But why bother, if you can get similar results by simply dividing each image by $255$?
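A small sketch contrasting the two options (my own illustration, assuming uint8 images):

import numpy as np

def normalize(img):
    return img.astype(np.float32) / 255.0          # scale to [0, 1]

def standardize(img, mean, std):
    return (img.astype(np.float32) / 255.0 - mean) / std

def running_mean_std(batches):
    # estimate a single global mean/std incrementally, batch by batch,
    # for when the training set does not fit into memory
    total, total_sq, n = 0.0, 0.0, 0
    for batch in batches:                          # batch: uint8 array of images
        x = batch.astype(np.float64) / 255.0
        total += x.sum()
        total_sq += (x ** 2).sum()
        n += x.size
    mean = total / n
    std = np.sqrt(total_sq / n - mean ** 2)
    return mean, std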

",26652,,,,,3/1/2021 8:59,,,,2,,,,CC BY-SA 4.0 26616,1,,,3/1/2021 9:52,,1,328,"

Cross Stage Partial Connections (CSPC) try to solve the next problems:

  1. Reduce the computations of the model in order to make it more suitable for edge devices.
  2. Reduce memory usage.
  3. Better backpropagate the gradient.

I cannot really understand how the first two points are actually achieved with this type of connection. Furthermore, in CSPC, the skip connection is just a slice of the feature map, and in Residual Connections, the skip connection is all the feature map. Aren't CSPC and Residual Connections (with concatenation) actually "almost" the same thing? Then, what advantages do you get for connecting with deeper layers only a slice of the previous feature map (CSPC) vs the whole feature map (Residual Connection)?

",42296,,42296,,3/2/2021 8:09,3/2/2021 8:09,What are the benefits of Cross Stage Partial Connections over Residual Connections?,,0,0,,,,CC BY-SA 4.0 26617,2,,12464,3/1/2021 11:53,,1,,"

While @Oliver Mason's comment is correct, and your proposed method won't provide perfect security, you can still protect your models at rest, so that they are stored encrypted, and your software feeds in the key at runtime to decrypt them.
On whatever DL inference engine you have, as long as it supports loading the model from a buffer (e.g. a void*) rather than from a file path, you can do the following: read the encrypted model, decrypt it into a buffer, and initialize your neural network model from the decrypted buffer. OpenVINO supports loading the model from a buffer.
For encrypting / decrypting the model, any framework such as OpenSSL, or even tiny-AES, can work.
To mention it again: how to store and use the key is something that should be handled carefully, and a user with sufficient knowledge can still read the model and the keys at runtime from the application's memory.
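A minimal Python sketch of the same idea (my own illustration, assuming a PyTorch checkpoint encrypted with Fernet from the cryptography package; the file name, load_key_somehow() and model are placeholders, and, as noted above, anyone who can read the process memory can still recover both the key and the weights):

import io
import torch
from cryptography.fernet import Fernet

key = load_key_somehow()                       # placeholder, e.g. fetched from a license server
with open("model.pt.enc", "rb") as f:
    decrypted = Fernet(key).decrypt(f.read())  # bytes of the original checkpoint

state_dict = torch.load(io.BytesIO(decrypted), map_location="cpu")
model.load_state_dict(state_dict)              # model: your architecture, built beforehand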

",9233,,9233,,3/1/2021 12:11,3/1/2021 12:11,,,,0,,,,CC BY-SA 4.0 26619,1,,,3/1/2021 13:36,,1,158,"

I have a model which has 3 outputs (it is a regression task: I have the angle of the steering wheel, the brake and the acceleration). I can divide my values into smaller bins and in this way change this into a classification problem. I can then balance the data to have the same number of data points in each bin.

But now I wonder how to balance this data correctly. I found some good resources and libraries:

  • imbalanced-learn | Python official documentation
  • multi-imbalance | Python official documentation
  • Multi-imbalance | Poznan University of Technology

But to my understanding, these algorithms can deal with imbalanced data (in standard and multi-class classification) only if you have one output. But I have 3 outputs, and these outputs can be correlated somehow. How do I balance them correctly?

I thought about 2 ideas:

  1. Creating tuples consisting of 3 elements and balancing in such a way that you have the same number of each distinct tuple. But you can have this situation: (A, X, 1), (A, Y, 2), (A, Y, 3), (B, Z, 3). These tuples are different, but you can see that we have a lot of tuples with the value A in the first position. So the data is still quite imbalanced.

  2. Balancing the data iteratively, considering only one column at a time: you balance the first column, then the second column, and so on.

Are these ideas good or not? Maybe there are some other options for balancing data if you have multiple targets?

",,user40943,32410,,4/24/2021 16:10,1/22/2023 23:00,Handling imbalanced data with multiple targets,,1,0,,,,CC BY-SA 4.0 26620,2,,26601,3/1/2021 14:15,,1,,"

Do you mean weak AI and strong AI? The former is roughly about pretending to be intelligent, ie do intelligent things, but without trying to work the same way an actually intelligent system would work. The latter attempts to replicate how an intelligent system works, so would require us to understand a lot more about cognition than if we just mocked up a quick little chatbot to try and compete the Turing Test.

",2193,,,,,3/1/2021 14:15,,,,2,,,,CC BY-SA 4.0 26621,1,,,3/1/2021 16:17,,1,26,"

I'm trying to understand what LDA exactly does when used as a classifier. I've understood how the dimensionality reduction works and I've understood that the classification task is carried out with the application of Bayes' theorem, but I still can't figure out if LDA executes both operations when used as a classification algorithm.

Is it correct to say that LDA, as a classifier, executes by itself dimensionality reduction and then applies Bayes' theorem for classification?

If that makes any difference, I've used LDA in Python from the sklearn library.

",45102,,2444,,7/24/2021 12:35,7/24/2021 12:35,Does Linear Discriminant Analysis make dimensionality reduction before classification?,,0,0,,,,CC BY-SA 4.0 26622,2,,26619,3/1/2021 18:18,,0,,"

You can try weighting your training data instances. So, if for example class A has proportion $p_A$, you weight every instance of class A with $1/p_A$. There also exist more sophisticated approaches to training on unbalanced data, such as generating synthetic samples to create a balanced dataset, and so on. You can start learning more here.
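For example, the inverse-frequency weighting can be done with scikit-learn's sample_weight mechanism (a quick sketch of my own, with placeholder data):

import numpy as np
from sklearn.linear_model import LogisticRegression

y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])     # imbalanced labels
X = np.random.randn(len(y), 3)

classes, counts = np.unique(y, return_counts=True)
weight_per_class = {c: len(y) / n for c, n in zip(classes, counts)}   # proportional to 1 / p_c
sample_weight = np.array([weight_per_class[label] for label in y])

clf = LogisticRegression().fit(X, y, sample_weight=sample_weight)

Many scikit-learn classifiers also accept class_weight="balanced", which does essentially the same thing.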

",32621,,,,,3/1/2021 18:18,,,,1,,,,CC BY-SA 4.0 26625,1,,,3/2/2021 2:23,,1,189,"

I am working on a classification problem.

I have a dataset $S$ and I am training several prediction algorithms using S: Naive Bayes, SVM, classification trees.

Intuitively, I was planning to combine my models, and, for each data point in the test sample $S'$, take the majority vote as my prediction.

Does that make sense? I feel this is a very simplistic way to combine different models.

",45119,,2444,,3/2/2021 8:06,3/2/2021 9:38,Does it make sense to combine classifiers trained on the same dataset?,,2,0,,,,CC BY-SA 4.0 26626,2,,26625,3/2/2021 5:53,,1,,"

It is a simple way to do it but it is not wrong.

If you are getting probabilities from each model, you can average them and then do the classification.

Also, you can assign weights to each model, learned on a validation set by regressing the models' predictions against the targets.

",37203,,,,,3/2/2021 5:53,,,,0,,,,CC BY-SA 4.0 26627,2,,26625,3/2/2021 9:38,,3,,"

These are generally known as ensemble methods. Your method is essentially what Scikit-Learn's VotingClassifier does, which is perfectly reasonable and might give you better results. Of course, if you have an ensemble of classifiers and some of them perform quite poorly, the ensemble might not be able to beat the best classifier: you'll need to check this in your cross-validation. Be aware that any confidence or probability estimates may not be well-calibrated, so the predictions from the ensemble may not be particularly meaningful.
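For reference, here is a minimal sketch of the hard-voting setup described in the question, with the three model families mentioned there (the dataset here is a synthetic placeholder):

from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()), ("svm", SVC()), ("tree", DecisionTreeClassifier())],
    voting="hard",   # majority vote; "soft" averages probabilities and needs predict_proba
)
print(cross_val_score(ensemble, X, y, cv=5).mean())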

There are more elaborate ways of ensembling classifiers. Random forest classifiers, for example, are just ensembles of decision trees; a technique called bagging is also used here to improve performance.

The technique of using a second model to weight the predictions from an ensemble is known as stacked generalisation. It was introduced by Wolpert in 1991 [1], and you can find plenty of interesting examples of using the technique, e.g. on Kaggle.

[1] David H. Wolpert. Stacked generalization. Neural Networks 5.2 (1992), pp. 241-259.

",44413,,,,,3/2/2021 9:38,,,,0,,,,CC BY-SA 4.0 26629,1,26639,,3/2/2021 10:58,,1,4893,"

I might be getting this completely wrong, but please let me first try to explain what I need, and then what's wrong.

I have a classification task. The training data has 50 different labels. The customer wants to differentiate the low probability predictions, meaning that, I have to classify some test data as "Unclassified / Other" depending on the probability (certainty?) of the model.

When I test my code, the prediction result is a numpy array. One example is:

[[-1.7862008  -0.7037363   0.09885322  1.5318055   2.1137428  -0.2216074
   0.18905772 -0.32575375  1.0748093  -0.06001111  0.01083148  0.47495762
   0.27160102  0.13852511 -0.68440574  0.6773654  -2.2712054  -0.2864312
  -0.8428862  -2.1132915  -1.0157436  -1.0340284  -0.35126117 -1.0333195
   9.149789   -0.21288703  0.11455813 -0.32903734  0.10503325 -0.3004114
  -1.3854568  -0.01692022 -0.4388664  -0.42163098 -0.09182278 -0.28269592
  -0.33082992 -1.147654   -0.6703184   0.33038092 -0.50087476  1.1643585
   0.96983343  1.3400391   1.0692116  -0.7623776  -0.6083422  -0.91371405
   0.10002492]]

I'm then using numpy.argmax() to identify the correct label.

My question is, is it possible to define a threshold (say, 0.6), and then compare the probability of the argmax() element so that I can classify the prediction as "other" if the probability is less than the threshold value?


Edit 1:

We are using 2 different models. One is Keras, and the other is BertTransformer. We have no problem in Keras since it gives the probabilities so I'm skipping Keras model.

The Bert model is pretrained. Here is how it is generated:

def model(self, data):
        number_of_categories = len(data['encoded_categories'].unique())
        model = BertForSequenceClassification.from_pretrained(
            "dbmdz/bert-base-turkish-128k-uncased",
            num_labels=number_of_categories,
            output_attentions=False,
            output_hidden_states=False,
        )

        # model.cuda()

        return model

The output given above is the result of model.predict() method. We compare both models, Bert is slightly ahead, therefore we know that the prediction works just fine. However, we are not sure what those numbers signify or represent.

Here is the Bert documentation.

",45132,,2444,,3/11/2021 8:44,9/15/2022 15:09,How do I calculate the probabilities of the BERT model prediction logits?,,1,0,,,,CC BY-SA 4.0 26632,1,,,3/2/2021 12:32,,3,65,"

I've spent a few days reading some of the new papers about Neural SDEs. For example, here is one from Tzen and Raginsky and here is one that came out simultaneously by Peluchetti and Favaro. There are others which I plan to read next. The basic idea, which is arrived at via different routes in each paper, is that if we consider the input data arriving at time $t=0$ and the output data arriving at time $t=1$, then, with certain assumptions on the distribution of network weights and activations, the evolution of the data from one layer to the next inside the network is akin to a stochastic process. The more layers you have, the smaller the $\Delta t$ is between the layers. In the limit as the number of layers goes to infinity, the network approaches a true stochastic differential equation.

I am still working on the math, which is my main objective. However, what I find missing from these papers is: why is this important? The question is not "why is this interesting?"; it is certainly interesting from a purely mathematical perspective. But what is the importance here? What is the impact of this technology?

I was at first excited about this because I thought it proposed a way to apply a neural network to learn the parameters of an SDE by fitting it to real time-series data where we don't know the form of the underlying data generation process. However, I noticed that the experiment of Peluchetti and Favaro simply uses the MNIST data set, while the data in the experiment from Tzen and Raginsky is in fact a simulated SDE. The latter fits more with my intuition.

So, again, my question is, what is the general importance of Neural SDEs? And a secondary question is: am I correct in thinking this technology proposes a new way to fit a model to data which we suppose is generated by a stochastic process?

",22327,,2444,,3/3/2021 9:38,3/3/2021 9:38,What's up with Neural Stochastic Differential Equations from a practical standpoint?,,0,0,,,,CC BY-SA 4.0 26635,1,,,3/2/2021 20:50,,3,531,"

I am looking at a project that compares performance on a certain dataset for an object detection problem using YOLOv3 and RetinaNet (or the "SSD_ResNet50_FPN" from the TF Model Zoo). Both YOLOv3 and RetinaNet seem to have similar features, like detection at multiple scales, skip connections, etc.

So, what is the exact main difference between YOLOv3 and SSD_ResNet50_FPN?

",31416,,2444,,3/3/2021 9:09,11/18/2022 20:02,What are the main differences between YOLOv3 and RetinaNet object detection algorithms?,,1,0,,,,CC BY-SA 4.0 26637,1,,,3/2/2021 22:19,,1,235,"

What is the number of game states/information sets in 6-player, no-limit Texas Hold'em?

A year ago, Pluribus reached a super-human level in 6-player no-limit Hold'em Poker. I am interested in the size of poker games because it is a simple heuristic to compare the complexity of different games.


In the paper Measuring the Size of Large No-Limit Poker Games (2013), they write

The size of a game is a simple heuristic that can be used to describe its complexity and compare it to other games, and a game’s size can be measured in several ways. The most commonly used measurement is to count the number of game states in a game.

...

In imperfect information games, an alternate measure is to count the number of decision points, which are more formally called information sets.

Here's the definition of game states and information sets.

Game states are the number of possible sequences of actions by the players or by chance, as viewed by a third party that observes all of the players' actions. In the poker setting, this would include all of the ways that the players private and public cards can be dealt and all of the possible betting sequences.

Information sets: When a player cannot observe some of the actions or chance events in a game, such as in poker when the opponent’s private cards are unknown, many game states will appear identical to the player. Each such set of indistinguishable game states forms one information set, and an agent's strategy or policy for a game must necessarily depend on its information set and not on the game state: it cannot choose to base its actions on information it does not know.


Here are the number of game states of certain variants of Poker.

",43351,,63866,,11/30/2022 15:01,11/30/2022 15:01,What is the size of 6-players no limit Texas holdem Poker?,,0,2,,,,CC BY-SA 4.0 26639,2,,26629,3/3/2021 8:12,,2,,"

Your call to model.predict() is returning the logits for softmax. This is useful for training purposes.

To get probabilities, you need to apply softmax to the logits.

import torch
import torch.nn.functional as F

logits = model.predict()
# convert to a tensor in case the logits come back as a numpy array
probabilities = F.softmax(torch.as_tensor(logits), dim=-1)

Now you can apply your threshold in the same way as for the Keras model.

",1847,,,,,3/3/2021 8:12,,,,2,,,,CC BY-SA 4.0 26640,2,,6892,3/3/2021 8:40,,1,,"

Another point of view -
In safety-critical real world systems, this attack should be evaluated from other aspects as well.
In many systems the attack is in practice limited to physical attacks only - for example, you can't add digital noise to a camera used for autonomous driving - you need to print an adversarial stop sign, say, and place it somewhere where it is still viewed and interpreted incorrectly from several points of view, angles, lighting and weather conditions, etc.
Given that, I think that the overall current risk of adversarial examples for scalable attacks on real-world mission-critical systems isn't very high for now.
That's why such work exists in companies at the research level, but not yet in production.

",9233,,,,,3/3/2021 8:40,,,,0,,,,CC BY-SA 4.0 26643,1,26650,,3/3/2021 10:19,,3,67,"

In this lecture (starting from 1:31:00) the professor says that the set of all images of a person lives on a low-dimensional surface (compared to the set of all possible images). And he says that the dimension of that surface is 50, and that they get this number by adding up the three translations of the body, the three rotations of the head and the independent movements of the face's muscles. He also adds that it may be more than 50 but less than 100. How do we get the number 50?

The professor previously said (in the same lecture, 1:29:00) that the set of all the images that we could describe as natural and that we could interpret lies on a manifold. I tried to understand how the number 50 comes up as follows: take an image of a person; since it's "natural", it belongs to that manifold. Hence there is an open set to which this image belongs, and there is a homeomorphic map from this open set to a Euclidean space. Let's suppose (I don't know why, but it's the only way I could come up with to make sense of it) that all the images of that same person, regardless of their position and expressions, are in that open set; then, through the homeomorphic mapping, we have the "same points" in a Euclidean space. Do we get a basis of it by decomposing all the possible movements of the person?

I hope someone can clarify things for me; it seems this doesn't only apply to images but to all types of unstructured data.

",44965,,2444,,3/3/2021 14:37,3/16/2021 8:09,"Why different images of the same person, under some restrictions, are in a 50 dimension manifold?",,1,0,,,,CC BY-SA 4.0 26644,1,,,3/3/2021 11:23,,3,355,"

I have a problem in which my input data may have a varying number of channels. Let me explain with an example.

Imagine we have a classification problem in which we wish to identify if certain species are present in wildlife photographs. This can be done via a neural network including maybe some convolutions. For the first layer of the network we could set up a convolutional layer with 3 input channels (one for R, G and B respectively) and this would probably work well enough.

Now imagine that someone comes along with some new data for us and this time they have not only taken regular RGB images but they have used an IR-camera as well. Great, but how do we treat this data, we have one more channel?! One could of course simply add an extra channel and re-train the network but that would mean that our old data (without IR-info) is useless and what if someone comes along with a UV-camera.....

My situation is similar but I will most definitely be dealing with varying numbers of channels and the range can be quite wide (from 5 channels all the way up to maybe 50). Is there a good way of dealing with a situation like this?

",45149,,2444,,3/6/2021 18:38,3/6/2021 18:38,How to deal with a variable number of channels of the inputs?,,0,6,,,,CC BY-SA 4.0 26646,1,,,3/3/2021 15:38,,4,159,"

How to define machine learning to cover clustering, classification, and regression? What unites these problems?

",43741,,2444,,3/4/2021 12:07,3/4/2021 16:22,"How to define machine learning to cover clustering, classification, and regression?",,1,0,,,,CC BY-SA 4.0 26647,2,,9010,3/3/2021 15:59,,0,,"

There are several ways to use clustering for classification.

  1. To use features associated with classes, then do clustering and find the relationships between clusters and classes.
  2. To use classes in the training set as clusters and to classify each new data vector by its closeness to each cluster.
  3. To use class label as one of features for clustering. Then, find association between clusters and classes.
",43741,,,,,3/3/2021 15:59,,,,0,,,,CC BY-SA 4.0 26649,1,26714,,3/3/2021 16:23,,3,125,"

Context and detail

I've been working on a particular image retrieval problem and I've found two popular threads in the literature:

Image retrieval (usually benchmarked with landmark retrieval datasets)

Face recognition/verification:

I'm still making my way through these lists and more (I've checked the ones I've looked at already) but I'm starting to get a sense that there's not much overlap in the techniques used, or the collective trains of thought in the research community. Here are the main points of divergence where I think both communities should be borrowing from each other.

  • Facial recognition seems to focus on getting embeddings to be as discriminative as possible by playing around with loss functions and training methods, whereas image retrieval seems to care more about ways of extracting feature descriptors from CNN pretrained backbones (types of pooling operations, which feature maps to look at, etc..).
  • Image retrieval has a considerable amount of work on what needs to happen after an embedding is obtained, e.g. dimensionality reduction, whitening + L2 norm, database-side augmentation, query expansion, reranking, etc.
  • Facial recognition cares about keeping a minimum margin between non-matching faces in order to avoid mismatches, but I would think that should be imposed in image retrieval tasks as well (this is kind of a sub-point to my first point)

So to sum up: Why is it that facial recognition focuses on generating discriminative embeddings, while landmark retrieval focusses on generating rich "descriptors"? Why does landmark retrieval use this cool bag of tricks for database search while facial recognition just mentions kNN? Shouldn't all these considerations boost performance in either domain?

",16871,,16871,,3/8/2021 15:46,3/8/2021 18:57,Why are the landmark retrieval and facial recognition literature so divergent?,,1,3,,,,CC BY-SA 4.0 26650,2,,26643,3/3/2021 21:05,,2,,"

The number 50 is essentially just a guess based on results when compressing and/or generating data of a certain type. The variables such as "the three translations of the body, the three rotations of the head and the independent movements of the face's muscles" are examples only. There is no known formal map with well-defined parameters that defines a well understood manifold of "clear images of this person" in natural images. The lecturer has not constructed such a map as far as I can tell, but has done some related experiments.

Experimentally, it is possible to establish parameter vectors that work, with models like Variational Autoencoders and Generative Adversarial Networks. Depending on the size of the target image, and amount of variation in subject matter that you want to allow for (pose, lighting, clothing, hair style, makeup, camera properties etc), you will end up with different sizes of embedding vectors that appear to capture the important variations. When dealing with multiple people, it is common to see vector sizes of 64, 128, 256.

The lecture suggests compressing images with a clear background, consistent lighting, same person with only changes being in pose. Around 50 dimensions for this relatively simple image space seems reasonable, given facial recognition engines that work well in a more complex domain using 128 dimensional embeddings.

I expect that the lecturer has seen experimental evidence that a vector of 50 dimensions performs well at representing all variations in these images, plus smaller vectors perform measurably worse and larger vectors do not perform better. This experiment is possible by constructing something like a VAE with a specific size of embedding vector, training it, then measuring loss when reconstructing a set of test images.

",1847,,1847,,3/16/2021 8:09,3/16/2021 8:09,,,,3,,,,CC BY-SA 4.0 26651,1,26652,,3/3/2021 22:52,,-1,56,"

I am working on a computer vision project, based on face detection to record the time spent by a person in an office.

It consists of detecting the face with camera number 1 (input), temporarily storing the detected face, and calculating the time spent until this same person leaves and their face is detected by camera number 2. (We don't have a customer database.)

Is there a better approach to follow? I would also appreciate articles to read on the topic.

",45163,,2444,,3/4/2021 11:08,3/4/2021 11:08,Face recognition from single image provided,,1,1,,,,CC BY-SA 4.0 26652,2,,26651,3/3/2021 23:28,,1,,"

Matching 2 images of the same person can be done with the help of a Siamese neural network. Here, the features of the 2 images are compared, and if the distance between the 2 feature vectors is very small then it's a match. The good thing about this is that you do not need the person's face to be stored in a database beforehand. You can use a pre-trained network like deepface to compare them. However, I guess you will have more trouble with connecting the real-time camera input, as you have to store the camera 1 input so that it can be used later for comparison with the camera 2 images.
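For example, something along these lines (assuming the deepface package mentioned above; the file paths are placeholders for the face crops you stored from each camera):

from deepface import DeepFace

# compares embeddings of the two face crops and thresholds the distance
result = DeepFace.verify("camera1_face.jpg", "camera2_face.jpg")
if result["verified"]:
    print("same person, distance:", result["distance"])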

",44197,,,,,3/3/2021 23:28,,,,0,,,,CC BY-SA 4.0 26654,2,,17577,3/4/2021 6:25,,3,,"

We sometimes see that binary cross-entropy (BCE) loss is used for regression problems. This post is my opinion on using BCE for regression problems.

The figure below is the plots of BCE, $-t*\log(x) - (1-t)*\log(1-x)$, for several target values $t = 0.0, 0.1, ..., 0.5$. (The plots for $t>0.5$ are mirror images of those for $t<0.5$, so I omitted them.)

As you can see, when the target value $t$ is closer to the middle ($t=0.5$), BCE is flatter around its minimum ($x \sim t$). This means that BCE is less 'focal' when the target value is an intermediate value.
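
As a quick numerical check of this flatness claim (not part of the original post), here is a small numpy sketch that compares how much the loss rises when the prediction moves 0.1 away from the minimum, for different targets:

import numpy as np

def bce(x, t):
    # Binary cross-entropy between prediction x and target t (elementwise).
    return -t * np.log(x) - (1 - t) * np.log(1 - x)

for t in (0.0, 0.1, 0.3, 0.5):
    x_min = np.clip(t, 0.01, 0.99)          # the minimum is at x = t (clipped into (0, 1))
    x_near = np.clip(t + 0.1, 0.01, 0.99)   # a prediction 0.1 away from it
    print(t, bce(x_near, t) - bce(x_min, t))
# The printed gap shrinks as t approaches 0.5, i.e. the loss is flatter there.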

So, BCE suits your purpose when the edge values ($t=0$ and $t=1$) are of special importance for you but the difference between intermediate values ($t=0.4$ and $t=0.5$, for example) is not very important for you.

On the other hand, when any value of target is equally important for you, then BCE will not be a good choice. Another loss function, MSE for example, is better for you.

Note added: If you use BCE for regression problems, it will be better to subtract $-t*\log(t) - (1-t)*\log(1-t)$ (the entropy of the target, which is constant with respect to the prediction) from the original BCE expression, so that the loss becomes zero when the prediction coincides with the target, $x=t$. This will not matter for backpropagation, but it will be convenient for monitoring the value.

Note added: After I submit this post, I came to think that we can tune how 'focal' the loss function is around its minimum, by simply multiplying an arbitrary factor of target values. For example, we can tune BCE loss $L_{\rm BCE}(x,t)$ by making it $f(t)*L_{\rm BCE}(x,t)$ where $f(t)$ is whatever factor you want. This factor tunes how focal the loss is around its minimum for each target value $t$.

",45168,,2444,,3/11/2021 8:40,3/11/2021 8:40,,,,0,,,,CC BY-SA 4.0 26656,2,,26412,3/4/2021 8:06,,6,,"

For anyone wondering, I believe to have found the answer:

  1. Yes, it will be an 8x8 plane where all the entries are the same, the number of moves (or moves with no progress).

  2. There are two repetitions planes (for each position from the most recent T=8 positions):

    a) The first repetition plane will be a plane where all the entries are 1's if the position is being repeated for the first time. Else 0's.

    b) The second repetition plane will be a plane where all the entries are 1's if the current position is being repeated for the second time. Else 0's. (See the small sketch after this list.)
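
As a rough illustration of the planes described above (this is not AlphaZero's actual code, just a numpy sketch of the layout; the assumption that repetition_count counts how many times the position occurred before is mine):

import numpy as np

def constant_plane(value, size=8):
    # One 8x8 input plane where every square holds the same scalar value.
    return np.full((size, size), float(value))

def repetition_planes(repetition_count, size=8):
    # First plane is all 1's if the position has occurred once before,
    # second plane is all 1's if it has occurred twice before; otherwise all 0's.
    first = constant_plane(1.0 if repetition_count == 1 else 0.0, size)
    second = constant_plane(1.0 if repetition_count == 2 else 0.0, size)
    return np.stack([first, second])

# Example: the "moves with no progress" plane plus the two repetition planes
# for a position seen once before (`no_progress_count` is a hypothetical counter).
# planes = [constant_plane(no_progress_count)] + list(repetition_planes(1))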

",45173,,,,,3/4/2021 8:06,,,,0,,,,CC BY-SA 4.0 26661,1,,,3/4/2021 10:37,,0,41,"

Meta-learning has 3 broad approaches: model-based, metric-based, and optimization-based. Each of them has its own sub-approaches, like matching networks, model-agnostic meta-learning (MAML), Siamese networks, and so on.

How do I decide which approach to select for a task? In my case, I have noisy images, and each one needs to be compared with 10 different new images every time. Do I have to start with trial and error, or is there some methodology behind this approach selection?

",44197,,2444,,3/9/2021 10:02,11/29/2022 15:03,Which meta-learning approach selection methodology should I use for similarity learning of an image?,,1,1,,,,CC BY-SA 4.0 26662,1,,,3/4/2021 11:35,,1,87,"

I developed a novel type of CAPTCHA based on text comprehension and random tokens. Given a task like "Pick the first pair of adjacent letters" and a random token 8NBA596V, the user has to provide the solution NB. It offers basic protection, and an attacker can solve individual tasks with specific effort. I am curious whether contemporary AI can solve it generically.

You can access more example tasks here: https://www.topincs.com/manual/captcha

There is a task database and at every attempt a new task is presented with a new random token. They always have a solution of varying length and pure guessing thus has limited chances of success. It is easy to attack an individual task by writing a small piece of code, thus a large task database is essential. What intrigues me is the question whether natural language processing or machine learning at its current state can attack the CAPTCHA generically by building a model of the meaning of the task – essentially a predicate in a tiny universe of discourse – and then applying it to the random token.

",45187,,45187,,3/4/2021 15:15,12/25/2022 21:00,CAPTCHA based on text comprehension and random tokens,,1,2,,,,CC BY-SA 4.0 26663,1,,,3/4/2021 12:04,,1,94,"

Is the expression for the DQN cost function, Equation (2) of the DQN paper

$$\begin{align}L_1 &= E_{\mu,\pi}\left[\left(y_i - q(s,a;\theta)\right)^2\right]\\ &=E_{\mu,\pi}\left[\left(E_{\mathcal{E}}[r + \gamma \max\limits_{a'}q(s',a';\theta^-)] - q(s,a;\theta)\right)^2\right] \end{align}$$

equivalent to this? (Substituting the expression for $y_i$ defined in the paragraph directly after, $\mathcal{E}$ represents the transition distribution governed by the environment, $\pi$ represents the behaviour policy and $\mu$ represents the stationary distribution of states)

$$L_2 = E_{\mu,\pi,\mathcal{E}}\left[\left(r + \gamma \max\limits_{a'}q(s',a';\theta^-) - q(s,a;\theta)\right)^2\right]$$

Can the law of iterated expectation be used to derive the second expression from the first? If not, is there another way to show their equivalence, assuming they are indeed equivalent?

It seems as though $L_2$ is used for sampling, but I'm not sure how it's possible to get there from the original cost function $L_1$. If it is possible to sample using $L_2$, I assume that means the two expressions must be equivalent. The second expression is used for sampling in the DQN paper here.

I do realise that the gradient of each function is the same, and thus so is the $n^{th}$ derivative for $n\geq1$; since the curvature and optima align, I guess that also means they are the same function (up to a constant difference)?

$$\nabla_{\theta} L = E_{\mu,\pi,\mathcal{E}}\left[\left(r + \gamma \max\limits_{a'}q(s',a';\theta^-) - q(s,a;\theta)\right)\nabla q(s,a;\theta)\right]$$


Related Question

A related problem that concerns equivalence of sampling from $L_1$ and $L_2$. Is it possible to sample from a nested expectation that is squared as follows?

$$E[E[X|Y]^2] \approx \frac{1}{n}\sum X^2$$

Where $X$ is generated according to the marginalised distribution $P(X)$. I don't think it is true since $E[X]^2 \neq E[X^2]$ which should mean sampling from $L_1$ and $L_2$ are not equivalent.

",42514,,42514,,3/5/2021 18:29,11/28/2022 17:03,Can the law of iterated expectation be used on the inner expectation of the DQN cost function described in the DQN paper,,1,6,,,,CC BY-SA 4.0 26664,2,,26646,3/4/2021 12:07,,3,,"

I report three definitions of machine learning (ML) and I also explain that ML can be divided into multiple sub-tasks or sub-categories in this answer. However, it may not always be clear why classification, regression, or clustering can be considered machine learning tasks or can be solved with ML algorithms/programs, so let me explain why these tasks can be solved with ML, based on Tom Mitchell's definition of an ML algorithm that I report below for completeness.

A computer program is said to learn from experience $E$ with respect to some class of tasks $T$ and performance measure $P$, if its performance at tasks in $T$, as measured by $P$, improves with experience $E$.

So, according to this definition, for a computer program to be a "machine learner", we need to identify $E$, $T$, and $P$, and show that the computer program improves with $E$ at performing the task(s) $T$, according to $P$.

For the case of classification or regression (these are the tasks, which is just a synonym for problems), let's suppose that we are given the training labelled dataset $D = \{(x_1, y_1), \dots, (x_N, y_N) \}$ of $N$ tuples $(x_i, y_i)$, where $x_i$ are the inputs to the program (or model) and $y_i$ (the label, aka class, hence the name classification) is the output that the program should produce. In the case of classification, $y_i$ is an element of a discrete set, e.g. $\{0, 1\}$ in the case of binary classification, while, in the case of regression, $y_i \in \mathbb{R}$ is a real number.

So, in this case, the experience $E$ is $D$ (the training labeled dataset). The task $T$ is classification or regression, depending on whether $y_i$ belongs to a discrete or continuous space. The performance measure $P$ can be e.g. the cross-entropy (which is typically used to solve the classification task) or the mean squared error (which is typically used to solve the regression task). (I will not recall the definitions of these performance measures here: you can take a book on ML for the details).

So, we have identified $E$, $T$ and $P$. Now, we also need to argue why a program's performance measured by $P$ would improve with $E$. Let's suppose that you are using a neural network trained with gradient descent to solve these tasks. After one epoch/iteration of gradient descent, the loss will be smaller, so the performance of the program will be higher. So, this would indeed be a machine learner, according to Mitchell's definition of ML.

In the case of clustering (the task $T$), the only difference is that the dataset $D$ (i.e. the experience $E$) is unlabelled, i.e. we do not have labels, but we are given only the inputs, and the goal, as you know, is to group these inputs based on their similarity, according to some notion of similarity, which is your performance measure $P$.

The other definitions of ML that I report in the other answer are also consistent with Mitchell's definition. More precisely, most of these definitions are based on the idea that an ML algorithm is an algorithm that "finds patterns in data". To solve the classification, regression and clustering tasks/problems, an ML algorithm/program needs to find patterns in the data (either explicitly, like in the case of clustering, or indirectly, like in the case of classification), in order for the program's performance to improve.

Moreover, given that you brought this up in the now-deleted comments, I don't think that the definition of machine learning necessarily implies prediction (i.e. the use of a model to forecast something about future data), but, yes, ML is often used for prediction.

It's also important to note that, when someone says "machine learning" without further details or information (like you did in your post/question), one may refer to

  1. the field/area of study that studies and applies ML algorithms, or
  2. an ML algorithm (and sometimes model).

Mitchell's definition is the definition of what it means for an algorithm/program to be called an ML algorithm/program. However, my definition of ML as a field immediately follows from the definition of an ML algorithm.

Finally, we could say that a problem that can be formulated in such a way that an ML algorithm can be applied is an ML problem. Note that Mitchell's definition does not mention "ML problem" anywhere, but just "problem", so this is not a circular definition. So, we could say that classification, regression, and clustering are "ML problems" because they can be solved with ML algorithms. More generally, all these three problems can be thought of as "function approximation" problems, i.e., in all these cases, the solutions are functions.

",2444,,2444,,3/4/2021 16:22,3/4/2021 16:22,,,,14,,,,CC BY-SA 4.0 26665,1,26671,,3/4/2021 12:47,,2,191,"

In the appendix of the Constrained Policy Optimization (CPO) paper (Arxiv), the authors denote the discounted future state distribution $d^\pi$ as:

$$d^\pi(s) = (1-\gamma) \sum_{t=0}^\infty{\gamma^t P(s_t = s \vert \pi)}\tag1$$

and the discounted total reward $J(\pi)$ as:

$$J(\pi) = \frac{1}{1-\gamma} E_{\substack{s\sim d^\pi \\ a \sim \pi \\ s' \sim P}}[R(s,a,s')]\tag2$$

I have two questions regarding these equations.

Question 1

Intuitively, I understand that $d^\pi(s)$ returns the discounted probability of landing on state $s$ when executing policy $\pi$.

I understand that the summation part of $(1)$ results in values that are greater than $1$, and are, therefore, not fit for a probability distribution. But I do not understand why the value that results from this is multiplied by $(1-\gamma)$.

I have read in this question that "$(1−\gamma)$ normalizes all weights introduced by γ so that they are summed to $1$". I have confirmed that this is true, but I don't understand why.

I tested this with a simple example:

Suppose there are only two states $s_A$ and $s_B$, and the probability of landing on $s_A$ is $0.4$ and on $s_B$ is $0.6$, independently of the previous state or action taken (therefore, independently of the policy $\pi$). Also suppose we set the maximum number of time steps to $t_{max} = 1000$ (to make the equation easy to compute) and $\gamma = 0.9$.

Then:

$$d^\pi(s_A) = (1-0.9) \sum_{t=0}^{1000} 0.9^t \cdot 0.4 \approx (1-0.9) \cdot 4$$

and

$$d^\pi(s_B) \approx (1-0.9) \cdot 6$$

So indeed if we sum them and multiply by $(1-\gamma)$ we get:

$$(1-0.9)\cdot(4+6) = 1$$

Q: My question is why does multiplying by $(1-\gamma)$ normalize to $1$? And what does $(1-\gamma)$ represent in this context?

Question 2

Similarly, I can't understand the use of $\frac{1}{1-\gamma}$ in $(2)$.

Q: How does multiplying the expected value of the reward function by $\frac{1}{1-\gamma}$ result in the discounted reward, instead of multiplying by $\gamma$? What does $\frac{1}{1-\gamma}$ represent?

",22742,,,,,3/4/2021 14:59,Intuition behind $1-\gamma$ and $\frac{1}{1-\gamma}$ for calculating discounted future state distribution and discounted reward,,1,1,,,,CC BY-SA 4.0 26667,1,,,3/4/2021 13:39,,1,27,"

I have done some reading. I want to implement an LSTM with pre-trained word embeddings (I also have plans to create my word embeddings, but let's cross that bridge when we come to it).

In any given sentence, you don't usually need to have all the words as most of them do not contribute to the sentiment, such as the stop words and noise. So, let's say there is a sentence. I remove the stop words and anything else that I deem unnecessary for the project. Then I run the remaining words through the word embedding algorithm to get the word vectors.

Then what? How do I represent the sequence or the sentence, given that each vector is just for a single word?

For example, take the sentence:

The burger does not taste good.

I could remove certain words and still retain the same sentiment like so:

Burger not good.

Let's assume some arbitrary vectors for those three words:

  • Burger: $[0.45, -0.78, .., 1.2]$

  • not: $[9.6, 4.0, .., 5.6]$

  • good: $[3.5, 0.51, 0.8]$

So, those vectors represent the individual words. How do I make a sentence out of them? Just concatenate them?

",45170,,2444,,3/8/2021 9:41,3/8/2021 9:41,"Given the word embeddings, how do I create the sentence composed of the corresponding words?",,0,0,,,,CC BY-SA 4.0 26669,2,,26662,3/4/2021 14:10,,0,,"

(Assuming English.) In your specific example, there would be $26^2$ possible pairs of capital letters. Also assuming a fixed-length token of eight, there are 7 possible positions for an adjacent pair, which gives you $26 * 26 * 7 = 4732$ possible combinations. My intuition is that the key space is too small.

It may actually be simpler for a machine to solve the CAPTCHA than that. Let's say that both your system and a generic one obscure text in an image with equal ability. The generic one is a string of length six composed of capital letters and numbers. So the attacker needs to make an informed guess from $(26 + 10)^6 = 2,176,782,336$ possibilities. That keyspace is $460,013$ times larger than your system with the longer string. It gets worse though. An attacker could use a probabilistic approach, determining a "letter probability" for each of the eight positions, taking the two adjacent positions that maximize the probability, and then choosing the letter with the highest probability for each of those solutions. It would not be a guaranteed win for the attacker. But it would be guessing at a higher probability than $\frac{1}{4732}$.
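
For concreteness, here is the same back-of-the-envelope arithmetic in a few lines of Python:

letters = 26
positions = 8 - 1                         # adjacent pair positions in a length-8 token
pair_keyspace = letters * letters * positions
generic_keyspace = (26 + 10) ** 6         # 6 characters, capital letters + digits

print(pair_keyspace)                      # 4732
print(generic_keyspace)                   # 2176782336
print(generic_keyspace // pair_keyspace)  # ~460013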

Also, it may be easier for an ML system to distinguish certain letters from others. B/P may be confused more easily compared to M/T (I'm hypothesizing here). So the attacker's ML could be more confident in some letter pairs than others. This is analogous to the idea of weak keys in cryptography.

I like where your head is at. But this may not be the result you're looking for. In theory, you could have multiple rules beyond just "Pick the first pair of adjacent letters." But each new rule would add a negligible benefit on top of what is a fundamental problem in the design.

",19703,,36737,,4/4/2021 15:14,4/4/2021 15:14,,,,1,,,,CC BY-SA 4.0 26671,2,,26665,3/4/2021 14:59,,3,,"

Question 1

The Taylor expansion of $\frac{1}{1-\gamma}$ at $\gamma = 0$ (equivalently, the geometric series, valid for $|\gamma| < 1$) is as follows

$$\frac{1}{1-\gamma} = 1 + \gamma + \gamma^2 + \dots$$

When you multiply by $1-\gamma$ you get

$$ 1 = (1-\gamma)(1 + \gamma + \gamma^2 + \dots)$$

Which can be equivalently written as

$$1 = (1-\gamma)\sum_\limits{i=0}^{\infty}\gamma^i$$

Hence we can see that multiplying by $(1-\gamma)$ makes the weights $\gamma^t$ sum to $1$, so $d^\pi$ becomes a properly normalised weighted sum of the state probabilities $P(s_t = s \vert \pi)$.
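
A quick numerical check of this, reusing the two-state example and the finite horizon from the question:

import numpy as np

gamma = 0.9
t = np.arange(1001)                     # t = 0 .. 1000, as in the question
weights = (1 - gamma) * gamma ** t

print(weights.sum())                    # ~1.0: the weights form a distribution
print((weights * 0.4).sum())            # d(s_A) ~ 0.4
print((weights * 0.6).sum())            # d(s_B) ~ 0.6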


Question 2

Multiplying by $\frac{1}{1-\gamma}$ cancels out the normalisation term $(1-\gamma)$ that was included in the definition of the discounted state distribution. Partially expanding out your expression for the discounted reward, you get this

$$ J(\pi) = \frac{1}{1-\gamma}\sum_\limits{s}E_ {a\sim\pi,s'\sim P}[R(s,a,s')]\cdot\left((1-\gamma)\sum_\limits{t=0}^{\infty}\gamma^tP(s_t=s|\pi)\right)$$

The discounted return form is usually denoted

$$J(\pi) = E[\sum\gamma^iR_i(s,a,s')]$$

where you notice the discounting terms aren't normalised, which motivates cancelling out the normalisation factor.

",42514,,,,,3/4/2021 14:59,,,,2,,,,CC BY-SA 4.0 26673,1,,,3/4/2021 21:53,,0,133,"

I'm making a custom neural network framework (in C++, if that is of any help). When I train the model on MNIST, depending on how happy the network is feeling, it'll give me either 90%+ accuracy, or it gets stuck at around 9-10% (on the validation set).

I shuffle all my data before feeding it to the neural net.

Is there a better randomizer I should be using, or am I maybe not initializing my weights properly (I use srand/rand to generate values between ±0.1)? Did I somehow hit a saddle point?

My network consists of a 784-unit input layer; hidden layers of 256, 64, 32, and 16 neurons, all with ReLU; and a 10-unit output layer with softmax.

Where should I start investigating based on this kind of behavior, when I can't even replicate what is going on?

",43270,,2444,,3/8/2021 10:16,3/8/2021 10:16,"Why would my neural network have either an accuracy of 90% or 10% on the validation data, given a random initialization?",,0,5,,,,CC BY-SA 4.0 26675,1,,,3/4/2021 23:22,,2,96,"

I am a beginner in neural networks. I am building a neural network with 3 layers. The input $X$ has 7 features and the output $Y$ is a real number. In the hidden layer, there are two nodes. The bottom node contains weights and biases which should be hard set.

Now, I want to train this neural network with the training data $X$ and $Y$, such that the red weights are held constant while all other weights are learnable.

Is there a way of doing this during the training of the neural network? I'm using TensorFlow and Keras, so, if you could provide also the code necessary to do this, that would be very useful.

",45197,,2444,,3/7/2021 11:28,12/21/2021 4:52,How to train a neural network with few weights and biases held constant?,,1,1,,,,CC BY-SA 4.0 26676,1,,,3/5/2021 2:29,,1,46,"

I aim to do action recognition in videos on a private dataset.

To compare with the existing state-of-the-art implementations, other researchers have published their code on GitHub, like the one here (for the paper Self-supervised Video Representation Learning Using Inter-intra Contrastive Framework). Here, the author first trains the embedding network (a 3D ResNet without the final classification layer) with contrastive learning. Then, he adds a final layer and fine-tunes the weights, training the whole network again for some epochs.

Now, here is my doubt: Is there a way, while training just the embedding network, to find the test accuracy?

One way to tell whether the final accuracy after fine-tuning would be good is to check whether the training loss is decreasing. A decreasing training loss certainly raises the hope that the test accuracy is improving during training, but it gives no idea of what the test accuracy would actually be.

Another way is to plot the t-SNE, see if, on the test data, the data points from the same class are close together, thus forming a cluster. Then it could be said that the test accuracy would also be good. But it's not quantifiable, and hence it would be hard to compare the t-SNE plots obtained from two different models.

It was also suggested that I add a final layer to my embedding network and just test it on the test data, without training or fine-tuning again. The reasoning is that the embedding network should have learned reasonable weights by now, so even if I fine-tune the model, the accuracy on the test dataset won't vary a lot. I need some advice here. Is that suggestion good? Are there any potential pitfalls with it?

Or do you have any other possible suggestions I could try?

",45200,,2444,,3/8/2021 10:11,3/8/2021 10:11,"Is there a way, while training (with contrastive learning) the embedding network, to find the test accuracy?",,0,0,,,,CC BY-SA 4.0 26677,2,,26675,3/5/2021 4:15,,1,,"

The strategy is to turn b2 into a separate model, initialise b2 the way it should be, and train your main network (without touching b2) as usual.

In the middle of the main network, combine the output of the b1 layer and the b2 network using a concatenation function, for example in TensorFlow:

# Axis 0 is the batch dimension, axis 1 for the dimension of value in every sample
tf.concat([b1out,b2out],axis=1)

Example source code (paste to Google Colab to test):

%tensorflow_version 2.x
%reset -f

import tensorflow                   as     tf
from   tensorflow.keras             import *
from   tensorflow.keras.layers      import *
from   tensorflow.keras.activations import *
from   tensorflow.keras.models      import *
from   tensorflow.keras.callbacks   import *

class MyModel(Model):
    def __init__(self):
        super(MyModel,self).__init__()
        self.dense1 = Dense(200, activation=relu)
        self.dense2 = Dense(1,   activation=tf.identity)

    @tf.function
    def call(self,x):
        # MIND THE CONCATENATION IN THIS PART:
        h1a = self.dense1(x)
        h1b = b2(x) # b2 won't get trained, so it stays fixed

        # CONCATENATION AT AXIS 1, COZ AXIS 0 IS BATCH DIMENSION
        h1  = tf.concat([h1a,h1b],axis=1) 
        u   = self.dense2(h1)
        return u

class ModelB2(Model):
    def __init__(self):
        super(ModelB2,self).__init__()
        self.dense1 = Dense(200)
        # self.dense2 = ...
        # Init weights of B here or B is pre-trained model

    @tf.function
    def call(self,x):
        u = self.dense1(x)
        return u

# PROGRAMME ENTRY POINT ========================================================
if __name__=="__main__":
    inp = [[1,2,3,4,5,6,7],[7,6,5,4,3,2,1]] # Example values
    exp = [[0],            [1]            ]

    mm = MyModel()
    b2 = ModelB2()

    mm.compile(loss=tf.losses.MeanSquaredError(), optimizer=tf.optimizers.Adam(1e-3))
    b2.compile(loss=tf.losses.MeanSquaredError(), optimizer=tf.optimizers.Adam(1e-3))
    mm_loss = mm.evaluate(x=inp,y=exp, batch_size=len(inp), steps=1) # Init weights in here
    b2_loss = b2.evaluate(x=inp,y=exp, batch_size=len(inp), steps=1) # Init weights in here

    print("\nbefore training:")
    print("mm weights:",mm.get_weights()[0][0][:3],"...")
    print("b2 weights:",b2.get_weights()[0][0][:3],"...")
    print("mm loss:",mm_loss)
    print("b2 loss:",b2_loss)

    mm.fit(x=inp,y=exp, batch_size=len(inp), epochs=500, verbose=0)
    mm_loss = mm.evaluate(x=inp,y=exp, batch_size=len(inp), steps=1) 
    b2_loss = b2.evaluate(x=inp,y=exp, batch_size=len(inp), steps=1) 

    print("\nafter training:")
    print("mm weights:",mm.get_weights()[0][0][:3],"...")
    print("b2 weights:",b2.get_weights()[0][0][:3],"... <-- UNCHANGED AS WANTED")
    print("mm loss:",mm_loss)
    print("b2 loss:",b2_loss,"<-- UNCHANGED AS WANTED")
# EOF

Google colab: https://colab.research.google.com

",2844,,2844,,12/21/2021 4:52,12/21/2021 4:52,,,,0,,,,CC BY-SA 4.0 26678,2,,21839,3/5/2021 4:34,,5,,"

I can reproduce this problem for an even more easily separable dataset:

The ideal tree for it should be as follows:

However, when I run DecisionTreeClassifier with the maximal depth = 2 in scikit-learn many times, it splits the dataset randomly and never gets it right.
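
For reference, here is a minimal scikit-learn sketch (not the exact data or code behind the figures below) that reproduces this behaviour on a random XOR-like dataset:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# XOR-like data: the label is 1 iff exactly one of the two coordinates is positive.
rng = np.random.default_rng()
X = rng.uniform(-1, 1, size=(500, 2))
y = (X[:, 0] > 0) ^ (X[:, 1] > 0)

# A depth-2 tree can represent XOR exactly, but the greedy single-feature
# splits chosen via Gini/entropy usually fail to find that tree.
clf = DecisionTreeClassifier(max_depth=2)
clf.fit(X, y)
print(clf.score(X, y))   # often much worse than the perfect depth-2 tree would achieve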

This is an example of 4 different runs:

The problem is that scikit-learn has only two measures of the quality of a split: Gini impurity and entropy. Both of them estimate the dependence (mutual information) between the target and only one predictor at a time.

However, in the XOR problem, the mutual information of each individual predictor with the target is zero. You can read more about it here: link, from which you can see that this problem exists not only for XOR but for any task where the interaction between features is important.

In order to solve it, the tree should be built neither on the Gini impurity nor on the information gain, but on measures that estimate how the target depends on multiple features, e.g. multivariate mutual information, distance correlation, etc. These might solve simple problems like XOR but might fail on real tasks. It is easy to find simple cases where they fail (just try them for a regression of simple non-linear functions of a few variables). There is no measure that estimates the dependence of a target on multiple interacting predictors very well and works for all problems.

EDIT to answer Asher's comment: I did several runs for max_depth=3. It is better than for max_depth=2 but still misses the correct classification from time to time. Taking max_depth=4 almost always gets XOR correctly with the occasional misses. Below are pictures of some runs for max_depth=3 and max_depth=4.

However, the trees for max_depth=3 and max_depth=4 become ugly. They are ugly not only because they are bigger than the ideal tree shown above, but also because they totally obscure the XOR function. For example, can you decipher XOR from this tree?

It is probably possible to simplify them with some pruning technique, but that is still extra work.

",15524,,15524,,3/6/2021 6:10,3/6/2021 6:10,,,,5,,,,CC BY-SA 4.0 26679,1,,,3/5/2021 5:02,,2,36,"

I've been researching different frameworks for hierarchical RL (mainly options, HAMs, and MAXQ) and noticed that both options and HAMs have names that relate to how they function. I can't seem to find anything stating how MAXQ got its name and I was wondering if anyone knew what the name was referencing.

",45161,,2444,,3/5/2021 8:53,3/5/2021 8:53,"Where does the hierarchical reinforcement learning framework name ""MAXQ"" come from?",,0,1,,,,CC BY-SA 4.0 26680,1,,,3/5/2021 7:13,,0,141,"

I want to solve a symbolic regression problem with genetic programming. My dataset is similar to this one, but I have 30 features, and I want to use only the most sensitive features. I found this library interesting for Symbolic Regression, but could not find the right approach for feature selection.

",45205,,2444,,3/7/2021 11:13,3/8/2021 14:37,How can I select features for a symbolic regression problem to be solved with genetic programming?,,0,6,,,,CC BY-SA 4.0 26689,1,,,3/6/2021 7:36,,1,99,"

In a linear classification problem, a line can divide a data set into two categories. So, basically, points above the line belong to category 1, and points below the line belong to category -1.

However, my professor has asked me to write a C++ program in which the program will classify whether the data points lie above or below a sine function.

Let me explain a bit more. So, first, we will generate a data set $$D = \{(x_i, y_i) \} \label{0}\tag{0} $$ with random $x$ and $y$ coordinates, for example, according to this equation

$$y(x) = A + B \sin(Cx)\label{1}\tag{1},$$

where $A$, $B$, and $C$ are known.

The data points above the sine function will have a label 1 on them, and the points below the function will have -1.

Now, this data set $D$ in \ref{0} has to be fed to a C++ program. This C++ program has to somehow learn the curve separating the two data point categories. After training, the program will then classify some new query data points.

The key difficulty is that the program does not know in advance that the points were scattered around a sine curve. It does not know the values of $A$, $B$, or $C$ in equation \ref{1}. It also does not know that the curve is a sine curve.

Now, this is where I am stuck. I do not know if I need to use a neural network to solve this problem. If a neural network is to be used, then I presume that backpropagation will have to be used in some way. I can generate the data set and I can feed the data into the program.

Which approach (algorithm and model) should I use to solve this problem?

I have studied linear classification with the perceptron learning algorithm, but this sine-classifier stuff is a huge step-up for me. Another important thing is that I am not allowed to use any ready-made C++ libraries for Machine Learning. If a neural network solution is needed, then I will have to design the neural network from scratch. Note that I don't need any C++ code, but I am just looking for some guidance on how to approach this problem.

",45228,,2444,,3/7/2021 11:04,3/7/2021 11:04,Which approach should I use to classify points above and below a sine function $y(x) = A + B \sin(Cx)$?,,1,0,,,,CC BY-SA 4.0 26691,2,,26689,3/6/2021 11:06,,1,,"

You can try using Fourier basis functions to transform your observable variables and then apply a general linear regression model. To clarify, if you have pairs of observables $(y_i, x_i)$, where $y_i$ is the $i$-th output and $x_i$ is the $i$-th input, then you can transform your input variable into the vector

\begin{equation} \phi = [1, \sin(x), \cos(x), \sin(2x), \cos(2x), \ldots, \sin(nx), \cos(nx)] \end{equation} where $n$ is some arbitrary number.

Then you will have a standard linear regression model \begin{equation} Y = A\Phi \end{equation} where $Y = [y_1, \ldots, y_N]$, $\Phi = [ \phi_1, \ldots, \phi_N]$ (with $N$ the number of data points, to avoid clashing with the number of harmonics $n$), and $A$ is a matrix of unknown parameters that you need to learn.
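
Here is a small numpy sketch of this feature construction and fit (the question asks for C++, but the idea carries over directly); the example curve, the interval, and the number of harmonics are arbitrary choices of mine, and the curve is deliberately chosen to lie exactly in the span of the basis:

import numpy as np

def fourier_features(x, n_harmonics):
    # Map each scalar x to [1, sin(x), cos(x), ..., sin(nx), cos(nx)].
    cols = [np.ones_like(x)]
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(k * x))
        cols.append(np.cos(k * x))
    return np.stack(cols, axis=1)

# Hypothetical data from a curve that happens to lie in the span of the basis.
x = np.linspace(0, 10, 200)
y = 1.0 + 2.0 * np.sin(2.0 * x)                 # A = 1, B = 2, C = 2

Phi = fourier_features(x, n_harmonics=5)
A, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # least-squares fit of the weights
print(np.max(np.abs(Phi @ A - y)))              # ~0: the curve is recovered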

",20339,,,,,3/6/2021 11:06,,,,1,,,,CC BY-SA 4.0 26693,1,26700,,3/6/2021 14:47,,1,721,"

Here's an extract from Chollet's book "Deep Learning with Python" about using pre-trained CNN to predict class from a photo set (p. 146):

At this point, there are two ways you could proceed:

  • Running the convolutional base over your dataset, recording its output to a Numpy array on disk, and then using this data as input to a standalone, densely connected classifier similar to those you saw in part 1 of this book. This solution is fast and cheap to run, because it only requires running the convolutional base once for every input image, and the convolutional base is by far the most expensive part of the pipeline. But for the same reason, this technique won’t allow you to use data augmentation.

  • Extending the model you have (conv_base) by adding Dense layers on top, and running the whole thing end to end on the input data. This will allow you to use data augmentation, because every input image goes through the convolutional base every time it’s seen by the model. But for the same reason, this technique is far more expensive than the first.

The first method is called (1) and the second is (2).

If I use data augmentation to expand my data set, then could (1) be as good as (2)? If not, why?

",37267,,2444,,3/7/2021 10:21,3/7/2021 14:34,What is the difference between feature extraction with or without data augmentation?,,1,1,,,,CC BY-SA 4.0 26698,1,,,3/7/2021 8:31,,2,107,"

Wikipedia states:

  • The Hutter Prize is a cash prize funded by Marcus Hutter which rewards data compression improvements on a specific 1 GB English text file.
  • The goal of the Hutter Prize is to encourage research in artificial intelligence (AI). The organizers believe that text compression and AI are equivalent problems.

Did the Hutter Prize help research in artificial intelligence in any way?

",4,,2444,,5/14/2022 21:14,5/14/2022 21:14,Did the Hutter Prize help research in artificial intelligence in any way?,,0,0,,,,CC BY-SA 4.0 26700,2,,26693,3/7/2021 10:38,,1,,"

There are two ways that you could perform data augmentation:

  • Up front, by expanding the input dataset into a larger one, performing a range of changes to each input then storing the result. This appears to be what you are suggesting.

  • Just in time, by sampling from possible augmentations on each epoch, or even per sample when building a mini-batch. This appears to be what Chollet is suggesting.

Chollet's approach allows for finer-grained augmentations that are different each time an input is considered, e.g. rotations by any angle, or selecting a slightly different crop from a larger image each time. With your approach you could consider the same set of augmentations, but they would have to be "frozen in" at the time of building the dataset, and you could not consider all possible variations for each image because it would make the dataset too large.
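
For illustration, here is a minimal Keras sketch of the "just in time" variant using ImageDataGenerator; the augmentation parameters and the train_dir directory are placeholders, not a recommendation:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Just-in-time augmentation: every time an image is drawn, the transformation
# parameters are sampled fresh, so the network rarely sees the exact same input twice.
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
)

# `train_dir` is a placeholder directory containing one subfolder per class.
# train_generator = train_datagen.flow_from_directory(
#     train_dir, target_size=(150, 150), batch_size=32, class_mode="binary")
# model.fit(train_generator, epochs=30)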

Both approaches are valid, and both would be called data augmentation. Chollet's approach obtains better re-use of image samples in the long term, and may result in better generalisation in the final trained network. Your approach may allow for more efficient use of CPU time to obtain a result that passes a threshold in accuracy. In some cases the difference between approaches may be minor compared to that caused by other changes in hyperparameters.

",1847,,1847,,3/7/2021 14:34,3/7/2021 14:34,,,,0,,,,CC BY-SA 4.0 26704,1,,,3/7/2021 17:03,,2,59,"

I've recently read much about feature engineering in continuous (uncountable) feature spaces. Now I am interested in what methods exist in the setting of large discrete state spaces. For example, consider a board game with a grid as its basic layout. Each position on the grid can contain exactly one of multiple elements, and the agent makes decisions according to the current board position. If the grid is large enough, say 30x30, and there are only two different elements, we could model the states as a linear model with $2*30*30 = 1800$ variables (using dummy variables), and this model can't even distinguish relationships between positions. For this, we would need to use $\binom{90}{2}$ or even $\binom{90}{k}$, $k = 2, 3, 4$, more features.

How would one approach this problem? Are there methods for feature selection for linear approximations, which can even automatically find/learn non-linear combinations? What was the approach to solving these problems when NNs were not around?

",45258,,2444,,3/8/2021 9:46,3/8/2021 9:46,How to find good features for a linear function approximation in RL with large discrete state set?,,0,6,,,,CC BY-SA 4.0 26705,1,,,3/7/2021 19:10,,1,109,"

Theorem 1 (page 5) in the paper about Integrated Gradients states that

Integrated gradients is the unique path method that is symmetry-preserving.

What I miss is

  1. A precise formulation of the theorem: in particular, the exact properties that must be satisfied by the function $f$ used in the proof (continuity, differentiability, etc.). Also, should the paths be assumed to be monotonic?

  2. A consistent definition of function $f$ in the proof - note that $f$ is defined inconsistently, e.g. in the region where $x_i<a$ and $x_j>b$, where it is not clear whether its value should be $0$ or $(b-a)^2$.

Point 2 is easy to fix with an appropriate redefinition (e.g. replacing "if $\text{max}(x_i,x_j)\geq 0$" with "else if $\text{max}(x_i,x_j)\geq 0$"). What is not clear is whether there is a redefinition that:

  1. preserves the properties that have been assumed in the rest of the paper, in particular in Proposition 1 (proving completeness), where the function is assumed to be continuous everywhere, and the set of discontinuous points of each of its partial derivatives along each input dimension has measure zero, and

  2. the function is a constant for $t \notin [t_1,t_2]$.

Does anybody have a precise formulation and full proof of the theorem?

",45259,,2444,,3/8/2021 9:55,8/15/2021 21:06,Is there a full and precise formulation of Theorem 1 in the Integrated Gradients paper?,,1,2,,,,CC BY-SA 4.0 26708,1,,,3/8/2021 9:21,,1,751,"

I have implemented a simple version of the DQN algorithm for CartPole-v0. The algorithm works fine, in the sense that it achieves the highest possible scores. The diagram below shows the cumulative reward versus training episode.

The scary part is when I tried to plot the q values during training. For this purpose, 1000 random states were generated and stored. After each training episode, I fed these states to the Q-network and computed the Q-value for all actions. Then for each state, the max Q-value was computed. Finally, these max Q-values were averaged to yield a single number. The following plot shows this quantity during training.

As you can see, there is a slight increase in the average max Q-value in the beginning, followed by a sharp decrease. My question is: how can this value be negative? Since all rewards received by the agent are positive, the expected cumulative reward of a state cannot be negative.

Edit

I added the code for clarity:

from matplotlib import pyplot as plt

import torch as th
import torch.nn as nn
import torch.nn.functional as F

import gym
import random
import numpy as np
from collections import deque

from test_avg_q import TestEnv


class ReplayBuffer():
    def __init__(self, maxlen):
        self.buffer = deque(maxlen=maxlen)

    def add(self, experience):
        self.buffer.append(experience)

    def sample(self, batch_size):
        sample_size = min(len(self.buffer), batch_size)
        samples = random.choices(self.buffer, k=sample_size)
        return map(list, zip(*samples))


class QNetwork():
    def __init__(self, state_dim, action_size):
        self.action_size = action_size
        self.q_net = nn.Sequential(nn.Linear(state_dim, 100),
                                   nn.ReLU(),
                                   nn.Linear(100, action_size))
        self.optimizer = th.optim.Adam(self.q_net.parameters(), lr=0.001)

    def update_model(self, state, action, q_target):
        action = th.Tensor(action).to(th.int64)
        action_one_hot = F.one_hot(action, num_classes=self.action_size)
        q = self.q_net(th.Tensor(state))
        q_a = th.sum(q * action_one_hot, dim=1)
        loss = nn.MSELoss()(q_a, q_target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()


class DQNAgent():
    def __init__(self, env):
        self.state_dim = env.observation_space.shape[0]
        self.action_size = env.action_space.n
        self.q_network = QNetwork(self.state_dim, self.action_size)
        self.replay_buffer = ReplayBuffer(maxlen=1_000_000)
        self.gamma = 0.97
        self.eps = 1.0

    def get_action(self, state):
        with th.no_grad():
            q_state = self.q_network.q_net(th.Tensor(state).unsqueeze(0))
            action_greedy = th.argmax(q_state).item()
        action_random = np.random.randint(self.action_size)
        action = action_random if random.random() < self.eps else action_greedy
        return action

    def train(self, state, action, next_state, reward, done):
        # Store the transition, then train on a random mini-batch from the replay buffer.
        self.replay_buffer.add((state, action, next_state, reward, done))
        states, actions, next_states, rewards, dones = self.replay_buffer.sample(50)
        with th.no_grad():
            # TD target: r + gamma * max_a' Q(s', a'), with Q(s', .) zeroed for terminal states.
            q_next_states = self.q_network.q_net(th.Tensor(next_states))
            q_next_states[dones] = th.zeros(self.action_size)
            q_targets = th.Tensor(rewards) + self.gamma * th.max(q_next_states, dim=1)[0]
        self.q_network.update_model(states, actions, q_targets)

        # Decay epsilon at the end of each episode.
        if done: self.eps = max(0.1, 0.99 * self.eps)


env_name = "CartPole-v0"
env = gym.make(env_name)

agent = DQNAgent(env)
num_episodes = 400
testEnv = TestEnv()
avg_qs = []
rewards = []
render = False

for ep in range(num_episodes):
    state = env.reset()
    total_reward = 0
    done = False
    while not done:
        action = agent.get_action(state)
        next_state, reward, done, info = env.step(action)
        agent.train(state, action, next_state, reward, done)
        total_reward += reward
        state = next_state
        if render and ep > 80:
            env.render()

    avg_qs.append(testEnv.run(agent.q_network.q_net))
    rewards.append(total_reward)
    print("Episode: {}, total_reward: {:.2f}".format(ep, total_reward))

plt.plot(avg_qs)
plt.show()
plt.figure()
plt.plot(rewards)
plt.show()

Here is the code for the TestEnv class:

import torch as th
from torch.utils.data import Dataset, DataLoader


class TestEnv:
    def __init__(self):
        self.dataloader = DataLoader(TestDataset(), batch_size=100, num_workers=10)

    def run(self, model):
        out_list = []
        model.eval()
        with th.no_grad():
            for batch in self.dataloader:
                out = model(batch)
                out_list.append(out)
            qs = th.cat(out_list, dim=0)
            maxq = th.max(qs, dim=1)[0]
            result = th.mean(maxq)
        return result.item()



class TestDataset(Dataset):
    def __init__(self):
        self.db = th.load('test_states.pt')

    def __len__(self):
        return len(self.db)

    def __getitem__(self, idx):
        return self.db[idx]

",41127,,41127,,3/8/2021 11:08,3/8/2021 11:08,"Why does Q-value become negative during training of DQN, while the agent learns to play?",,0,3,,,,CC BY-SA 4.0 26709,1,26869,,3/8/2021 11:06,,4,510,"

Usually, neural networks use raw data. You do not need to extract features manually. NNs can find & extract good features, which are patterns in an image, signal, or any kind of data. When we check the layer outputs of an NN, we can see and visualize how it extracts features.

Do neural networks extract features by themselves every time? When is it necessary to manually extract or engineer features to feed into the neural network rather than providing raw data?

For example, I had time-series sensor data. When I used LSTMs & GRUs on the raw dataset, I had bad test accuracy, but when I extracted some features manually, I had really good test-set accuracy. I extracted Fast Fourier Transform and cross-correlation features, which helped a lot to increase accuracy. Manual feature extraction helped to solve my problem.

",28129,,28129,,3/8/2021 13:27,3/17/2021 12:46,When is it necessary to manually extract features to feed into the neural network rather than providing raw data?,,2,0,,,,CC BY-SA 4.0 26710,2,,26663,3/8/2021 11:34,,0,,"

Using the related question and simplifying notation

$$\begin{align} E_{\mu,\pi}[E_{\mathcal{E}}[s'|s,a]^2] &= \sum_\limits{s,a}\left(\sum_\limits{s'}s'p(s'|s,a)\right)^2p(s,a) \\ &=\sum_\limits{s,a}\left(\sum_\limits{s_1'}s_1'p(s'_1|s,a)\cdot\sum_\limits{s_2'}s_2'p(s'_2|s,a)\right)p(s,a) \\ &= \sum_\limits{s,a,s_1',s_2'}s_1's_2'p(s_1'|s,a)p(s_2'|s,a)p(s,a) \\ &= E_{\mathcal{E}\sim (s_1',s_2'),\mu,\pi}[s_1'\cdot s_2'] \end{align}$$

This is the closest I could come up with: basically, instead of squaring the same sample, we pick two samples independently given the pair $(s,a)$.

",42514,,42514,,3/8/2021 13:04,3/8/2021 13:04,,,,0,,,,CC BY-SA 4.0 26711,2,,26378,3/8/2021 12:49,,-1,,"

Use TextBlob to get the text's sentiment polarity and subjectivity. Polarity indicates negative or positive sentiment, and subjectivity indicates how subjective (opinion-based) rather than factual the text is.

from textblob import TextBlob

# `df`, `key` and `sentence` come from your own data loop (e.g. a pandas DataFrame).
my_valence = TextBlob(sentence)
df.loc[key, 'sentiment_polarity'] = my_valence.sentiment.polarity
df.loc[key, 'sentiment_subjectivity'] = my_valence.sentiment.subjectivity
",44679,,,,,3/8/2021 12:49,,,,8,,,,CC BY-SA 4.0 26714,2,,26649,3/8/2021 18:57,,1,,"

Landmark retrieval deals with photographs of landmarks that you need to identify. Consider the degrees of freedom for this: landmarks can have many different colours (more than human faces), and the colour range is all over the place (a landmark may be blue, white, or red). The shapes of the various landmarks also vary widely.

Now, consider the face recognition problem. All human faces look alike morphologically. If you look at the colour, it is not as varied as in landmark recognition.

Because of the nature of the data in the two problems, the research focuses on diverging lines of thought. Rich descriptors are good for landmarks because the data itself is very rich and full of variation. On the other hand, discriminative features are more desirable for face recognition because faces are more similar and less rich in variation, so differentiating between them is hard.

It is the requirement of the problem that steers research in diverging directions.

",37203,,,,,3/8/2021 18:57,,,,3,,,,CC BY-SA 4.0 26716,1,,,3/8/2021 19:12,,2,92,"

I'd like to evaluate the possibility of using a Machine/Deep Learning technique as a sort of pattern recognition and parameter estimation.

The problem I want to address can be stated as follows: Let's consider that I have a set of interacting "particles" that can be represented as a graph in which the vertices represent the particles and the edges the magnitude of the interaction amongst them. For instance, in the diagram below I'm showing a particle graph formed by 4 interacting particles.

So each particle/vertex has a value (e.g. $A=3.1$, $B = 4.2$, etc.) and each edge contains the magnitude of the interaction between two connected nodes/particles (e.g. $AB = 5.3$, $AC = 1.1$, $DB = 0$, etc.).

With all this information, there exists a quantum mechanics algorithm that, after some complex calculations, results in a 1D signal (the pattern; essentially a vector of X-Y values). The overall process is illustrated in the figure below:

The appearance of the obtained signal will therefore depend upon the values of the graph. The goal is, in this case, the inverse problem: given one of these 1D signals (that is, a characteristic pattern), is it possible to determine the graph with its corresponding values?

I could create a training set formed by a very large number of simulated graphs with corresponding 1D patterns.

Since my experience with ML has so far focused only on simple classification problems, it is not clear to me which ML method would be more convenient or whether or not this problem can actually be addressed by an ML technique. Any general recommendation or advice would be highly appreciated.

",45285,,45285,,3/9/2021 7:02,3/9/2021 7:02,Is there any known technique to determine a graph from a 1D signal pattern?,,0,3,,,,CC BY-SA 4.0 26722,2,,25367,3/9/2021 4:40,,2,,"

Excuse my lack of rigor. Although I believe this could be rigorously proven for certain definitions of GNN, the term is still too loose for me to honestly claim one way or another on this. Hopefully the following thoughts will be helpful anyway.

I prefer the term Message Passing Networks as a generalization of many things people like to call GNN. In a generic Message Passing Network, every node has an associated vector. Each node's vector, $x_i$, is then updated as a function of itself and its neighbors. (I've left off time indices for readability...)

For instance: $$ x_i := Update(x_i, \sum_{x_j\in N(x_i)} Message(x_i,x_j, e_{ij})) $$

where $N(x_i)$ is the set of vectors associated with nodes neighboring $x_i$ and $Update$ and $Message$ are arbitrary parameterized functions.

In the case where $Message(x_i, x_j, e_{ij}) = e_{ij} x_j$ and $e_{ij}$ is a scalar, and $Update(a,b) = b$, this becomes the following:

$$ x_{i} := \sum_{x_j\in N(x_i)} e_{ij} x_j $$

This means that each node becomes a weighted sum of its neighbors and this is precisely convolution (given that you have a stride of 1 ... and that you have padding ... and that the graph is a grid where all upward edges are equal, all leftward edges are equal, all rightward edges are equal and all downward edges are equal.)
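
Here is a small self-contained numpy sketch of that special case on a grid graph (the kernel weights are arbitrary, and a self-loop represents the kernel's centre weight); it shows that one message-passing step with offset-dependent edge weights reproduces a zero-padded convolution:

import numpy as np

def message_passing_step(x, edges):
    # x: dict node -> value; edges: dict (i, j) -> weight e_ij.
    # Each node becomes a weighted sum of its neighbours (the special case above).
    new_x = {i: 0.0 for i in x}
    for (i, j), w in edges.items():
        new_x[i] += w * x[j]
    return new_x

# Build a grid graph from a small "image"; edge weights depend only on the
# relative offset (self/up/down/left/right), mimicking a cross-shaped kernel.
img = np.arange(16, dtype=float).reshape(4, 4)
kernel = {(0, 0): 0.5, (-1, 0): 0.1, (1, 0): 0.1, (0, -1): 0.15, (0, 1): 0.15}

x = {(r, c): img[r, c] for r in range(4) for c in range(4)}
edges = {}
for (r, c) in x:
    for (dr, dc), w in kernel.items():
        if (r + dr, c + dc) in x:
            edges[((r, c), (r + dr, c + dc))] = w

out = message_passing_step(x, edges)
# out[(r, c)] matches a zero-padded convolution of `img` with this (symmetric) kernel.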

",40822,,40822,,3/9/2021 23:00,3/9/2021 23:00,,,,1,,,,CC BY-SA 4.0 26723,1,26726,,3/9/2021 5:34,,4,424,"

I have a large 1D action space, e.g. dim(A)=2000-10000. Can I use a continuous action space where I learn the mean and std of a Gaussian distribution, sample an action from it, and round the value to the nearest integer? If yes, can I extend this idea to a multi-dimensional large action space?

",32517,,32517,,3/9/2021 21:39,3/9/2021 21:39,Can a large discrete action space be represented using Gaussian distributions?,,1,0,,,,CC BY-SA 4.0 26726,2,,26723,3/9/2021 9:18,,4,,"

The answer is "it depends". Once you have arranged the actions into order, a key trait is whether the action value function has a simple enough shape that sampling from a Gaussian policy function would give consistent expected returns, enough that learning can occur. If the underlying "true" value function has a lot of high frequency noise then learning would be slow. In the worst case, if the action value $Q(s,a_{n})$ and $Q(s,a_{n+1})$ is not correlated for any $n$, then it will not be possible to learn with the approximation at all.

You may have some sense of how similar actions $a_{n}$ and $a_{n+1}$ are. If the actions represent different ordinal choices, such as selecting an integer number of items to perform some task with such as buy/sell or transport, then in many environments there will often be a strong correlation between outcome of choosing e.g. $a_{900}$ and $a_{901}$. If this holds in general, then that is a good indicator that you can treat $n$ as being continuous, and use learned parameters of a simple distribution function to find optimal policies (and in addition this could be far more efficient than using a discrete representation).
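
As a tiny sketch of the "sample from a Gaussian and round" mechanism discussed in the question (the numbers below are made up):

import numpy as np

def sample_discrete_action(mu, sigma, num_actions, rng=None):
    # Sample from the continuous Gaussian policy, then round and clip
    # to the nearest valid discrete action index.
    rng = np.random.default_rng() if rng is None else rng
    a = rng.normal(mu, sigma)
    return int(np.clip(np.round(a), 0, num_actions - 1))

# Example: a policy head outputting mu = 941.3, sigma = 12.0 over 2000 actions.
# action = sample_discrete_action(941.3, 12.0, num_actions=2000)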

It might not matter if for a small fraction of cases the difference in outcomes between $a_{n}$ and $a_{n+1}$ is large, provided that successive approximations to the optimal policy can improve towards optimal through adjusting mean $\mu(s)$ and standard deviation $\sigma(s)$.

There may still be difficult cases that cannot be learned by the approximation - for instance if a specific action $a_n$ is optimal for a given state $s$, but $a_{n-1}$ and $a_{n+1}$ are a lot worse, then the training process of a typical policy gradient approach may never settle upon $\mu(s) = n$ and $\sigma(s) \approx 0$, because any intermediate values between the starting policy and the optimal one will perform badly.

For expanding into more dimensions, the same ideas apply to each dimension separately. You may want to use different distributions, or even have one dimension that uses a continuous model with a few parameters whilst another remains discrete with a free parameter for each choice.

",1847,,1847,,3/9/2021 13:51,3/9/2021 13:51,,,,0,,,,CC BY-SA 4.0 26727,1,,,3/9/2021 9:42,,0,28,"

I think heatmap outputs of architectures like CenterNet, OpenPose, etc. can be changed to coordinate outputs, and loss functions like focal loss can be modified so they can deal with coordinates instead of heatmaps. Is there any particular reason that researchers use heatmaps instead of coordinates?

",45299,,,,,3/9/2021 10:38,Why do popular object detecting models output heatmaps instead of coordinators of object directly?,,1,0,,,,CC BY-SA 4.0 26729,1,26732,,3/9/2021 10:12,,1,756,"

In image classification, there are sometimes images that do not fit in any category.

For example, if I build a CNN in Keras to classify Dogs and Cats, does it help (in terms of training time and performance) to create an "other" (or "unclassified") category in which images of houses, people, birds, etc., are classified? Is there any research paper that discusses this?

A similar question was asked before here, but, unfortunately, it has no answer.

",44356,,2444,,3/10/2021 0:30,3/10/2021 0:35,"Should one use an ""other"" category in image classification?",,1,3,,,,CC BY-SA 4.0 26731,2,,26661,3/9/2021 10:22,,0,,"

The only principled approach is to see if the problem you are working on has been worked on before. If yes, then use what they used for your problem. This is the alternative to going through all the models one by one.

In your case, it bears a resemblance to face-recognition models, so a Siamese network with ArcFace loss for getting your embeddings is a good bet for you.

",37203,,,,,3/9/2021 10:22,,,,0,,,,CC BY-SA 4.0 26732,2,,26729,3/9/2021 10:26,,1,,"

It is not advisable because if you use an "other" class, you are just increasing problems for your network. Since "other" means not dog and not cat, then, what common feature does it have? Most of the time the "other" images won't have many features in common. If they do, then go ahead and make an "other" class.

There is a better way: if the probabilities for both cat and dog are less than a threshold (you need to decide that; take 0.5, for example), then you can say it is an "other" object.

",37203,,2444,,3/10/2021 0:35,3/10/2021 0:35,,,,2,,,,CC BY-SA 4.0 26733,2,,26727,3/9/2021 10:38,,1,,"

To have a more powerful representational output. You can derive a bounding box from heatmap but not vice-versa. Also, in case of dense object detection it is hard to create bounding boxes for each object (people standing in front of each other).

That being the case, it is better to run a segmentation loss for these networks. It also leads to less confusion for the network. Also, it is a single shot creation of output. On the other hand, the bounding box approach will have the creation of a lot of bounding boxes then finding their objectness score and finally applying NMS (non-max suppression).

In regards to focal loss and use of bounding boxes, it is something that is a consequence of the heatmaps/activations that neural networks learns. They are built upon that and then regressed.

",37203,,,,,3/9/2021 10:38,,,,0,,,,CC BY-SA 4.0 26735,1,,,3/9/2021 12:04,,1,65,"

I am looking for advice or suggestions. I have photos like these: photo_1 and photo_2, and many more similar to them. The average shape of these photos is about 160 x 100. What we are doing is trying to find whether or not the person in a photo is wearing a safety vest and helmet (if the person is wearing both, the label is 1; if something is missing or both are missing, it is 0). The training data consists of about 5k almost equally distributed image sets. I have tried to use augmentation techniques (flipping, adding noise, brightness correction), but results didn't improve. I tried to train on many popular pretrained models: resnet101, mobilenet_v2, EfficientNetB3, EfficientNetB0, DenseNet121, InceptionResNetV2, InceptionV3, ResNet152V2, ResNet50V2, but the results are not eye-pleasing. I have tried different input sizes ranging from 224x224 to 112x112, but the results didn't improve as much as I would have liked. And the weird thing is that the image shape does not correlate with whether there are more wrong predictions using bigger or smaller images. As a side note, I would like to ask a couple of questions:

  1. Should I use my own small, custom-written net?
  2. Are the models that I use too big for this problem?

Any advice will be appreciated.

",41591,,,,,3/11/2021 11:41,What model structure I should use to train on low res and blurry images?,,2,2,,,,CC BY-SA 4.0 26736,2,,26735,3/9/2021 15:14,,1,,"

Yes, you may use the small model you have but you need to tune it.

The models you have used are not too big; if any overfitting happens, you can just use dropout to alleviate it.

You need to improve the quality of your images. One way is to reconstruct them using autoencoders or GANs (Super-resolution GANs). Then, you can classify them.

",37203,,,,,3/9/2021 15:14,,,,0,,,,CC BY-SA 4.0 26737,2,,17468,3/9/2021 16:47,,0,,"

I'm not aware of a model that could generalize from single segment to multi segment without nontrivial surgery, augmentation, and retraining, but I have had some success doing this in cases where the regions to segment are non-overlapping: After finding the first segment, edit the input image to distort or obscure the segmented region and run it through again, repeating until it stops finding new segments.

",29873,,,,,3/9/2021 16:47,,,,0,,,,CC BY-SA 4.0 26739,1,26745,,3/9/2021 21:43,,4,3937,"

I am self-studying applications of deep learning on the NLP and machine translation.

I am confused about the concepts of "Language Model", "Word Embedding", "BLEU Score".

It appears to me that a language model is a way to predict the next word given its previous word. Word2vec is the similarity between two tokens. BLEU score is a way to measure the effectiveness of the language model.

Is my understanding correct? If not, can someone please point me to the right articles, paper, or any other online resources?

",41187,,2444,,3/10/2021 0:45,6/3/2021 23:16,What is the difference between a language model and a word embedding?,,2,0,,,,CC BY-SA 4.0 26744,1,26763,,3/10/2021 10:18,,2,147,"

I'd like to ask you if feature engineering is an important step for a deep learning approach.

By feature engineering I mean some advanced preprocessing steps, such as looking at histogram distributions and trying to make them look like a normal distribution or, in the case of time series, making the series stationary first (not just filling missing values or normalizing the data).

I feel like with enough regularization, the deep learning models don't need feature engineering compared to some machine learning models (SVMs, random forests, etc.), but I'm not sure.

",44965,,2444,,3/10/2021 15:53,3/19/2021 11:03,Is feature engineer an important step for a deep learning approach?,,2,1,,,,CC BY-SA 4.0 26745,2,,26739,3/10/2021 10:45,,4,,"

Simplified: word embeddings do not consider context, language models do.

For example, with Word2Vec, GloVe, or fastText, there exists one fixed vector per word.

Think of the following two sentences:

The fish ate the cat.

and

The cat ate the fish.

If you averaged their word embeddings, they would have the same vector, but, in reality, their meaning (semantic) is very different.
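
A tiny sketch of this point, using made-up fixed vectors in place of real word2vec/GloVe embeddings (the numbers are arbitrary):

import numpy as np

# Toy fixed embeddings: one vector per word, regardless of context.
emb = {
    "the": np.array([0.1, 0.0, 0.2]),
    "fish": np.array([0.9, 0.3, 0.1]),
    "ate": np.array([0.2, 0.8, 0.4]),
    "cat": np.array([0.5, 0.5, 0.9]),
}

def average_embedding(sentence):
    return np.mean([emb[w] for w in sentence.lower().split()], axis=0)

s1 = "The fish ate the cat"
s2 = "The cat ate the fish"
# Same bag of words -> identical averaged vectors, despite opposite meanings.
print(np.allclose(average_embedding(s1), average_embedding(s2)))  # True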

Then the concept of contextualized word embeddings arose with language models that do consider the context, and give different embeddings depending on the context.

Both word embeddings (e.g. Word2Vec) and language models (e.g. BERT) are ways of representing text, where language models capture more information and are considered state-of-the-art for representing natural language in a vectorized format.

The BLEU score is an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another, which is not directly related to the difference between traditional word embeddings and contextualized word embeddings (aka language models).

",41319,,2444,,3/10/2021 15:45,3/10/2021 15:45,,,,0,,,,CC BY-SA 4.0 26746,2,,26744,3/10/2021 11:52,,1,,"

In my view, feature engineering is important; it's part of the job of the ML network designer.

Network designing involves

  • Feature engineering: What should be in the input to the network, as processed from similar or totally different data
  • Deciding network shape, layer shapes, types of neurons in layers, etc.
  • Feature engineering again (but for labels): what should the output be, either regression values or classes

And possibly also simpler tasks, as mentioned in the question: filling missing values, normalising data, creating pre-feeding normalisation steps in code, etc.

",2844,,2844,,3/13/2021 4:40,3/13/2021 4:40,,,,1,,,,CC BY-SA 4.0 26747,1,26779,,3/10/2021 13:54,,0,158,"

When creating artificial columns for your categorical variables there are two mainstream methods you could use:

Disclaimer: For this example, I use the following definitions of dummy variables and one-hot-encoding. I'm aware both methods can be used to either return n or n-1 columns.
Dummy variables: each category is converted to its own column and the value 0 or 1 indicates if that category is present for each record
one-hot-encoding: similar to dummy variables, but one column is dropped, as its value can be derived from the other columns. This is to prevent multicollinearity and the dummy variable trap.

As an arbitrary example, let's take people's favorite color: pink, blue and green. For a person whose favorite color is pink, the dummy and one-hot-encoded data would look as follows:

dummy variables

person_id favorite_color_pink favorite_color_blue favorite_color_green
xyz 1 0 0

one-hot-encoded variables

person_id favorite_color_blue favorite_color_green
xyz 0 0

From a statistics point of view, I would use the one-hot encoded columns to build my model. In addition, I can infer that the favorite color is pink, because I encoded the variables.
However, when I'm applying XAI to explain the prediction to someone else and they see the favorite color wasn't blue or green, I'm not so sure they will infer the favorite color was pink unless it's explicitly stated. So using dummy variables might serve explainability better, but it brings other risks.
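
For what it's worth, both encodings (using the definitions from the disclaimer above) can be produced with pandas, where drop_first controls whether one column is dropped; note that the category dropped here is simply the first in alphabetical order, not necessarily pink:

import pandas as pd

df = pd.DataFrame({
    "person_id": ["xyz", "abc", "def"],
    "favorite_color": ["pink", "blue", "green"],
})

# "Dummy variables" as defined above: one column per category (n columns).
dummies = pd.get_dummies(df, columns=["favorite_color"])

# "One-hot-encoding" as defined above: n-1 columns, avoiding the dummy
# variable trap (the dropped category is the first one alphabetically).
one_hot = pd.get_dummies(df, columns=["favorite_color"], drop_first=True)

print(dummies)
print(one_hot)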

Are there any best practices on this?

",45156,,45156,,3/10/2021 14:16,3/12/2021 7:41,One hot encoding vs dummy variables best practices for explainable AI (XAI),,2,0,,,,CC BY-SA 4.0 26748,2,,17605,3/10/2021 14:35,,1,,"

I am a novice in reinforcement learning and I have been struggling for several months with the logic of TD(λ). Initially, it seemed to me to be a successful, purely heuristic formula without any theoretical foundation. But nowadays, I understand it simply as the calculation of a mean, using the recurrent formula which states that, when you have a mean and a new value arrives, the mean is modified by an amount equal to the difference between the new value and the mean, divided by the number of values.

To summarize, the above mean calculation is an instance of a general recurrent formula for mean calculation, in which the difference between the new value and the current mean is multiplied by some number between 0 and 1 before being added to the mean. By the way, this number - usually called the step-size parameter - can be dynamic, and in the first paragraph (the usual mean calculation) it is the inverse of the number of values considered in the mean's calculation.
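
A small numerical sketch of this point: with a step size of 1/n the recurrent update reproduces the ordinary arithmetic mean exactly, while a constant step size gives a recency-weighted estimate, which is the shape of the TD update V(s) <- V(s) + α(target - V(s)):

import numpy as np

values = np.random.rand(1000)

# Step size 1/n: the recurrent update is exactly the arithmetic mean.
mean = 0.0
for n, v in enumerate(values, start=1):
    mean += (v - mean) / n
print(np.isclose(mean, values.mean()))  # True

# Constant step size alpha: a recency-weighted (exponential) estimate,
# the same update shape used in TD learning.
alpha, estimate = 0.05, 0.0
for v in values:
    estimate += alpha * (v - estimate)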

Intuitively, we can understand that this is an accurate estimation procedure regardless of the initial (guessed or not) value: with a high number of estimates (new values arriving), the importance of the initial value fades. It can also be extended to treat many (lambda) new values simultaneously.

Until now, I have not found this explanation anywhere, even though it is very simple, and I am not so sure that it is sound. I would appreciate it if someone could let me know whether this intuition is correct and whether it has already been presented somewhere.

",33566,,33566,,3/14/2021 23:17,3/14/2021 23:17,,,,0,,,,CC BY-SA 4.0 26754,1,,,3/10/2021 20:58,,5,69,"

Is it possible to use deep learning to give approximate solutions to NP-hard graph theory problems?

If we take, for example, the travelling salesman problem (or the dominating set problem). Let's say I have a bunch of smaller examples, where I compute the optimal values by checking all possibilities, can this be then used for bigger problems?

In particular, let's say I take a large graph and just optimize subgraphs of this large graph. This is perhaps a more general question: my experience with deep learning (TensorFlow/Keras) is in predicting values. How can I get a graph isomorphism and/or a list of local moves on the graph, to obtain a better solution? Can ML/DL give you a list of moves or local changes to get closer to an optimal value, or does it just return the predicted optimal value?

",44996,,2444,,3/10/2021 23:15,3/10/2021 23:15,It is possible to use deep learning to give approximate solutions to NP-hard graph theory problems?,,0,1,,,,CC BY-SA 4.0 26755,2,,12065,3/10/2021 21:58,,1,,"

It seems that another rather controversial point is the inclusion of evolutionary algorithms as reinforcement learning methods. Sutton & Barto do not include them. They argue that

And also:

Other people involved with the subject, such as the HSE University (which offers a course on Coursera), Maxim Lapan, or P. Palanisamy (both Packt authors), include them in the subject. Apparently, they support the idea that RL is defined by changes in performance resulting from interaction with the environment. For instance, Lapan classifies the cross-entropy method as model-free, policy-based and on-policy.

",33566,,,,,3/10/2021 21:58,,,,0,,,,CC BY-SA 4.0 26756,2,,26739,3/10/2021 22:45,,2,,"

A language model aims to estimate the probability of one or more words given the surrounding words. Given a sentence composed of $w_{1},...,w_{i-1},\_ , w_{i+1},..,w_{n}$, you can find the missing $i$-th word using a language model. In this way, you can estimate the most probable word using, for example, the conditional probability $P(w_i=w \mid w_1,\ldots,w_{i-1},w_{i+1},\ldots,w_n)$. An example of a simple language model is an $n$-gram, where, instead of conditioning on all previous words, you look only at the previous $n-1$ words.
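
As a toy illustration of the n-gram idea (a bigram here, i.e. conditioning on one previous word only), the conditional probabilities can be estimated by simple counting; the corpus below is made up:

from collections import Counter

corpus = "the cat ate the fish and the fish was fresh".split()

bigram_counts = Counter(zip(corpus, corpus[1:]))  # counts of (prev, word) pairs
context_counts = Counter(corpus[:-1])             # counts of the previous word

def bigram_prob(word, prev):
    # P(word | prev) estimated by relative frequency
    return bigram_counts[(prev, word)] / context_counts[prev]

print(bigram_prob("fish", "the"))  # 2/3 in this toy corpus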

Word embeddings are a distributed representation of a word. Instead of using an index or a one-hot encoding to represent a word, a dense vector is used. If two words have similar embeddings then these words share some properties. These properties are driven by the way embeddings are constructed, for example in word2vec two words with similar embeddings are two words that often appear in the same context, which is not to say they have the same meaning. Sometimes words with opposite meanings can have similar embeddings just because they are placed within the same sentences/contexts.

The BLEU score is a way to quantify the quality of an automatic translation. The score measures how close the model's translation is to a human reference translation.

",41306,,47471,,6/3/2021 23:16,6/3/2021 23:16,,,,0,,,,CC BY-SA 4.0 26758,2,,23335,3/11/2021 6:02,,0,,"

TensorFlow is interesting in that it can store not only weights, but also training data, in video RAM.

import tensorflow as tf
# Place the dataset tensor on the GPU so that it lives in video RAM.
with tf.device('/gpu:0'):
    tensorflow_dataset = tf.constant(numpy_dataset)

Feeding training data and weights to the GPU for matrix multiplication from video RAM is faster than feeding them from regular RAM.

Video RAM required = Number of params * sizeof(weight type) +
                     Training data amount in bytes

However, I believe that the video RAM required should be at least 1.5 times the above value, just to be sure things will work.
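
A rough sketch of that estimate for a Keras model, assuming 32-bit float weights (count_params() is a standard Keras method, and the 1.5 safety factor is the guess mentioned above):

def rough_vram_bytes(model, numpy_dataset, safety_factor=1.5):
    # weights (float32 -> 4 bytes each) + training data, padded by the factor
    weight_bytes = model.count_params() * 4
    data_bytes = numpy_dataset.nbytes
    return safety_factor * (weight_bytes + data_bytes)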

",2844,,,,,3/11/2021 6:02,,,,0,,,,CC BY-SA 4.0 26759,2,,26747,3/11/2021 9:11,,0,,"

Personally, I would choose one-hot encoding, as it is statistically correct/model friendly. Moreover, you can always provide additional help/tools to aid explainability. Lastly, even if you add the nth column, you still need some idea about the workings of the model (and the boundaries it created while training) to interpret the result.

",45344,,30725,,3/11/2021 10:32,3/11/2021 10:32,,,,0,,,,CC BY-SA 4.0 26760,1,26761,,3/11/2021 10:00,,2,2596,"

I'm a bit confused about the activation function in the output layer of a neural network trained for regression. In most tutorials, the output layer uses "sigmoid" to bring the results back to a nice number between 0 and 1.

But in this beginner example on the TensorFlow website, the output layer has no activation function at all? Is this allowed? Wouldn't the result be a crazy number that's all over the place? Or maybe TensorFlow has a hidden default activation?

This code is from the example where you predict miles per gallon based on horsepower of a car.

// input layer
model.add(tf.layers.dense({inputShape: [1], units: 1}));

// hidden layer
model.add(tf.layers.dense({units: 50, activation: 'sigmoid'}));

// output layer - no activation needed ???
model.add(tf.layers.dense({units: 1}));
",11620,,2444,,3/11/2021 17:59,3/11/2021 17:59,Why is no activation function needed for the output layer of a neural network for regression?,,1,0,,,,CC BY-SA 4.0 26761,2,,26760,3/11/2021 10:20,,3,,"

In regression, the goal is to approximate a function $f: \mathcal{I} \rightarrow \mathbb{R}$, so $f(x) \in \mathbb{R}$. In other words, in regression, you want to learn a function whose outputs can be any number, so not necessarily just a number in the range $[0, 1]$.

You use the sigmoid as the activation function of the output layer of a neural network, for example, when you want to interpret it as a probability. This is typically done when you are using the binary cross-entropy loss function, i.e. you are solving a binary classification problem (i.e. the output can either be one of two classes/labels).

By default, tf.keras.layers.Dense does not use any activation function, which means that the output of your neural network is indeed just a linear combination of the inputs from the previous layer. This should be fine for a regression problem.
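
For reference, a roughly equivalent model written in Python Keras (rather than the TensorFlow.js of the question) makes the same point: leaving activation unset on the last Dense layer gives a linear output, which is what you want for regression:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(1, input_shape=(1,)),       # input layer
    tf.keras.layers.Dense(50, activation="sigmoid"),  # hidden layer
    tf.keras.layers.Dense(1),  # no activation -> linear output for regression
])
model.compile(optimizer="adam", loss="mse")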

",2444,,,,,3/11/2021 10:20,,,,2,,,,CC BY-SA 4.0 26763,2,,26744,3/11/2021 11:21,,0,,"

No, feature engineering is not an important step for deep learning (EDIT: compared to other techniques), provided that you have enough data. If your dataset is big enough (which varies from task to task), you can perform what is called end-to-end learning.

To further clarify, according to this article, deep neural nets trained with the backpropagation algorithm are basically doing automated feature engineering.

I feel like with enough regularization, the deep learning models don't need feature engineering compared to some machine learning models (SVMs, random forests, etc.)

That is basically correct. Beware, you need a large dataset. When a large dataset is not available, you will do some manual work (feature engineering).

Nevertheless, it is always a good idea to look at your data first!

EDIT

I would also like to quote Rich Sutton here:

We want AI agents that can discover like we can, not which contain what we have discovered. Building in our discoveries only makes it harder to see how the discovering process can be done.

Perhaps this statement is more true with Deep Learning than with previous techniques, but we are not quite there yet. And as user nbro rightfully pointed out in the comments below, you may still need to normalise your data, pre-process it, remove outliers, etc. Thus in practice, you may still need to transform your data to a certain degree, depending on many factors.

",23360,,23360,,3/19/2021 11:03,3/19/2021 11:03,,,,5,,,,CC BY-SA 4.0 26764,2,,26735,3/11/2021 11:41,,1,,"

I would first suggest clustering your data using the t-SNE visualisation technique. Try to see if the different classes are separable. If not, try applying different image enhancement filters (e.g. white balance, sharpening) and see how the separation changes. This is how you design your image preprocessing pipeline.
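
A minimal sketch of that first step with scikit-learn and matplotlib; here features stands for one row per image (e.g. flattened pixels or embeddings from a pretrained network) and labels for the 0/1 targets, both of which are assumed to already exist (random placeholders are used below):

import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

features = np.random.rand(200, 512)     # placeholder image features
labels = np.random.randint(0, 2, 200)   # placeholder 0/1 labels

embedded = TSNE(n_components=2, perplexity=30).fit_transform(features)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="coolwarm", s=10)
plt.title("t-SNE of image features, coloured by class")
plt.show()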

Since your dataset is relatively small, it is indeed a good idea to use a pretrained network (e.g. MobileNet V3) and perform transfer learning (by retraining only the final layer).

Finally, for your task, the image size might indeed not be crucial.

",23360,,,,,3/11/2021 11:41,,,,0,,,,CC BY-SA 4.0 26765,2,,26546,3/11/2021 12:17,,1,,"

I will first address your main question, "Why did the development of neural networks stop between the 50s and the 80s?" In the 40s-50s there was a lot of progress (McCulloch and Pitts); the perceptron was invented (Rosenblatt). That gave rise to an AI hype with many promises (exactly like today)!

However, Minsky and Papert proved in 1969 that a single-layer architecture is not enough to build a universal approximating machine (see e.g. Minsky, M. & Papert, S. Perceptrons: An Introduction to Computational Geometry, vol. 165 (1969)). That led to the first disappointment in "AI", which lasted until several major breakthroughs in the 1980s: the proof of the universal approximation capabilities of the multi-layer perceptron by Cybenko, the popularisation of the backpropagation algorithm (Hinton and colleagues), etc.

I agree with LeCun that the use of continuous activation functions is what enabled the backpropagation algorithm at the time. It is only recently that we have learned to backpropagate in networks with binary activation functions (2016!).

",23360,,23360,,3/17/2021 9:48,3/17/2021 9:48,,,,5,,,,CC BY-SA 4.0 26766,1,,,3/11/2021 13:04,,1,98,"

This is a theoretical question.

Setup

I have a time series classification task in which I should output a classification of 3 classes for every time stamp t.

All data is labeled per frame.

The problem:

In the data set are more than 3 classes [which are also imbalanced].

My net should see all samples sequentially, because it uses that for historical information.
Thus, I can't just eliminate all irrelevant class samples at preprocessing time.

In case of a prediction on a frame which is labeled differently than those 3 classes, I don't care about the result.


My thoughts:

  1. The net will predict for 3 classes
  2. The net will only learn (pass backward gradient) for valid classes, and just not calculate loss for other classes.

Questions

  1. Is this the way to go for "don't care" classes in classification?
  2. How to calculate loss only for relevant classes in Pytorch?
  3. Should I apply some normalization per batch, or change batch norm layers if dropping variable samples per batch?

I am using nn.CrossEntropyLoss() as my criterion, which has only mean or sum as reductions.
I need to mask the batch so that the reduction will only apply for samples whose label is valid.

I could use reduction='none' and do that manually, or I could do that before the loss and keep using reduction='mean'.
Is there some method to do this using built in Pytorth tools?

Maybe this can be done in the data-fetching phase somehow?


I am looking for some standard, vanilla, rule-of-thumb implementation to tackle this. The less fancy the better.


I am aware this is more than a single question. They are still not separable, as the solution will be unified most likely.

",21645,,21645,,3/11/2021 16:50,12/1/2022 22:05,"How to define a ""don't care"" class in time series classification in Pytorch?",,1,0,,,,CC BY-SA 4.0 26768,1,,,3/11/2021 14:13,,1,26,"

In most of the multi-agent reinforcement learning models I've found, the observations for each of the agents seem to be generated simultaneously, and then a centralized critic is used to assess all of the agents' actions together.

However, what if two agents have a finite amount of resources to allocate, and the more one agent spends, the less the other agent can spend? So really, the state space of the second agent is conditional on the action of the first agent.

Are there any papers or resources that describe an architecture like this?

",45348,,30725,,3/12/2021 5:08,3/12/2021 5:08,"How would I design a finite budget, cascaded multi agent deep reinforcement learning model?",,0,0,,,,CC BY-SA 4.0 26770,1,,,3/11/2021 16:25,,2,369,"

The VQ-VAE implementation: https://colab.research.google.com/github/zalandoresearch/pytorch-vq-vae/blob/master/vq-vae.ipynb

quantized = inputs + (quantized - inputs).detach()

Why are we subtracting and adding the input to the quantized result?

",43254,,,,,1/1/2022 16:07,In VQ-VAE code what does this line of code signify?,,1,2,,,,CC BY-SA 4.0 26771,1,,,3/11/2021 18:14,,2,61,"

If I understand correctly, when training language models, we take a document and then chunk the document into sequences of k tokens. So if the document is of length 30 and k=10, then we'll have 21 chunks of 10 tokens each (tokens 1-10, 2-11, and so on).

However these training sequences are not iid, right? If so, are there any papers that try and deal with this?

",45353,,45353,,3/11/2021 20:10,3/11/2021 22:05,Are training sequences for LMs sampled in an IID fashion?,,1,0,,,,CC BY-SA 4.0 26772,2,,26766,3/11/2021 19:02,,0,,"

A "don't care" class is not advisable, since "don't care" samples have no inherent pattern, and trying to learn one would inadvertently hurt your model.

You can assign class weights to your classes. CrossEntropyLoss has an argument weight.
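
A minimal sketch of that argument, assuming the network outputs a score for every label in the dataset (5 here) and the two irrelevant labels get zero weight, so those samples contribute nothing to the loss:

import torch
import torch.nn as nn

weights = torch.tensor([1.0, 1.0, 1.0, 0.0, 0.0])   # last two classes: "don't care"
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 5)             # batch of 8 predictions over 5 classes
targets = torch.randint(0, 5, (8,))    # labels in 0..4
loss = criterion(logits, targets)      # zero-weight targets contribute nothing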

BatchNormalization can be left alone since the normalization should happen on the basis of all data and not just some of the data.

",37203,,,,,3/11/2021 19:02,,,,6,,,,CC BY-SA 4.0 26773,1,26777,,3/11/2021 19:03,,1,184,"

I'm trying to use Q-learning, but I'm stuck because I don't know how to compute the state.

Let's say, in my problem, there are the following variables, which I'm using to compute state:

x in range 0-3
y in range 0-3
d in range 0-3
g in range 0-1
a in range 0-1
s in range 0-4
br in range 0-4
bu in range 0-4
gl in range 0-1

So, the state space is equal to $64000$ ($4 * 4 * 4 * 2 * 2 * 5 * 5 * 5 * 2$). I'd like to create a number, from the above variables, which is contained in the range $[0, 63999]$.

My previous idea was to create a binary number from the binary representation of state variables (just write them next to each other and convert into an int). It seems to fail if a variable is not a power of two (bonus question: why doesn't it work?).

",45354,,2444,,3/12/2021 11:40,3/12/2021 11:40,Compute state space from variables in Q-learning (RL),,1,0,,,,CC BY-SA 4.0 26774,1,,,3/11/2021 19:23,,3,178,"

I'm working on a problem that involves an RL agent with very large states. These states consist of several pieces of information about the agent. The states are not images, so techniques like convolutional neural networks will not work here.

Are there any general solutions to reduce/compress the size of the states for reinforcement learning algorithms?

",34341,,2444,,3/12/2021 9:40,12/2/2022 16:06,How can I compress the states of a reinforcement learning agent?,,1,0,,,,CC BY-SA 4.0 26775,2,,26771,3/11/2021 20:01,,1,,"

Language models explicitly assume that word sequences are not independent and identically distributed (iid). A word-based model that assumed iid within each sequence could only predict probabilities of words according to some other context than surrounding words, which would not be very useful for a language model.

The training processes for statistical models often require iid input/output pairs when sampling e.g. minibatches. Neural networks definitely train better when datasets are shuffled to remove any chance of correlation between examples as they are used to update parameters.

How to resolve these two different needs? When training a sequence-based model on many sequences, it is the distribution of sequences that needs to be iid and representative of the overall population. The distribution of items within the sequence is what is being learned, so should not be obscured or removed.

As an analogy, you do not usually want to shuffle the rows of pixels in an image when training image classifiers. The spatial pattern in the image needs to be preserved in the same way that the sequential pattern in a sentence needs to be preserved, because the pattern is part of the data being modelled. With image data, you accept non-iid relationships between the pixels that are next to each other within a single image, then apply shuffling and stratifying algorithms at the level of individual images.

If so, are there any papers that try and deal with this?

There may be some original papers from 50 years ago which compare iid with non-iid data when training sequences fed into RNNs, but it has been a standard part of engineering practice for decades now to shuffle datasets, and the separate sequences used to train RNNs are no different.

From your comments:

However the sequences that are sampled for training are not iid if they are multiple sequences generated sequentially from the same document, which from what I understand happens often?

You are correct, the raw set of sequences is not iid if they are collected in that fashion. However, the dataset is always shuffled or resampled for training purposes, it is not fed into training routines in that raw state. The shuffling (of selected sequences which are kept intact internally) happens in-between the raw data collection and the training.


There are some simple statistical models that do not require iid data to train on. This occurs for example in tabular reinforcement learning, which can learn online from a single continuous sequence of states, actions and rewards. An equivalent language model would be word- or letter-based ngrams.

",1847,,1847,,3/11/2021 22:05,3/11/2021 22:05,,,,6,,,,CC BY-SA 4.0 26776,1,,,3/11/2021 22:32,,0,34,"

Why is my siamese network learning very well in e.g. 1 out of every 5 runs? The rest of the time it's not learning and maintains an accuracy of 0.5.

Any explanations? Is the contrastive loss taken in the embedded space too loose a constraint?

The task is greyscale signature matching.

Additionally, trying the model on facial matching gives a constant 0.5 accuracy, no learning at all - the images are RGB, and maybe it's a higher-order task in general.

Anyways, would appreciate any and all enlightenment in this matter.

P.S. I'm thinking to try a variational autoencoder for the face dataset, where I then use the trained encoder as the siamese network "head".

I would appreciate any guidance or thoughts on this approach as well.

",15405,,2444,,3/18/2021 13:42,3/18/2021 13:42,Why is my siamese network learning very well in e.g. 1 out of every 5 runs?,,0,8,,,,CC BY-SA 4.0 26777,2,,26773,3/11/2021 22:48,,0,,"

It seems to fail if a variable is not a power of two (bonus question: why doesn't it work?).

This does not work because you are wasting some space, some values are not used in your representation.

For example with your 0-4 variables, you need 3 bits, but you only use 000, 001, 010, 011 and 100 values. The values 101, 110 and 111 are still part of the representation you are using. It doesn't matter that you don't need to use them, you have created a representation where they exist. Every time you encode 5 values as 3 bits, you are being 62.5% efficient, and the efficiencies multiply to get the overall efficiency of your representation (proportion of states you need to represent compared to the size of the representation).

Sometimes this is acceptable. Your example state representation using bitwise coding per variable would still fit easily into a 32 bit integer for storage, and you also have the convenience of easily extracting the individual values quickly using bit masks. It's a bit less convenient if you were hoping to build a Q table as an array with the state as a simple offset.

In your example case it is not super-wasteful: your array will be around 4 times larger than optimal, due to wastage in 3 separate places. For coding simplicity you might accept roughly 1MB of space per action (the 18-bit representation indexes $2^{18} = 262144$ states) over about 256KB per action (for the 64000 reachable states), assuming 32-bit floating point values for the action values. Roughly 750KB of dead space is not much to worry about - the chances are that the programming language you have loaded to run it wastes far more space on features that you are not using.

If you have a few more variables and a few more wasted bit patterns, then the waste could be more noticeable and important. That might also be true if you are using a space-efficient compiled language on a system where memory resources are low.

Let's say in my problem there are following variables which i'm using to compute state . . . So the state space is equal to 64000 (4 * 4 * 4 * 2 * 2 * 5 * 5 * 5 * 2)

You are very close to a working answer for the most efficient representation here. You can use products of each variable's size to separate terms, to create multiplication factors for each variable. Imagine building up a cuboid out of the first three terms, and needing to address each cubic building block sequentially with an index position $i$. You would do this:

$$i = x + 4y + (4 \times 4)d = x + 4y + 16d$$

If you had only the first 5 variables you would do this:

$$i = x + 4y + (4 \times 4)d + (4 \times 4 \times 4)g + (4 \times 4 \times 4 \times 2)a = x + 4y + 16d + 64g + 128a$$

To cover your whole set of variables, keep extending the same pattern. Each variable added to the end of the list is multiplied by the product of the space required by all previous variables. The last variable in your example, gl, would be multiplied by $32000$.
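
A minimal Python sketch of this mixed-radix encoding (and its inverse via repeated divmod), using the variable sizes from the question:

sizes = [4, 4, 4, 2, 2, 5, 5, 5, 2]   # x, y, d, g, a, s, br, bu, gl

def encode(values, sizes):
    # maps a list of variable values to a single index in [0, prod(sizes))
    index, multiplier = 0, 1
    for v, size in zip(values, sizes):
        index += v * multiplier
        multiplier *= size
    return index

def decode(index, sizes):
    # inverse mapping, recovering the variables front-to-back with divmod
    values = []
    for size in sizes:
        index, v = divmod(index, size)
        values.append(v)
    return values

assert decode(encode([3, 2, 1, 0, 1, 4, 0, 3, 1], sizes), sizes) == [3, 2, 1, 0, 1, 4, 0, 3, 1]
assert encode([3, 3, 3, 1, 1, 4, 4, 4, 1], sizes) == 63999   # the largest index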

Reversing this encoding is more involved, especially if you only want to access a single variable. You need to start with the value of $i$ then repeatedly use an integer divmod operation to find the next unknown variable at the end of the list - using the same multiplier for that variable as when constructing the code - and a remainder for calculating the next variable along.

Luckily in reinforcement learning you probably don't need to do any reversing of state id to individual variables - you can maintain a structure of all current variables for the state when running the environment, and only need this compressed version to create an offset into the Q table for lookups for the policy or updates to stored values.

",1847,,1847,,3/11/2021 23:05,3/11/2021 23:05,,,,1,,,,CC BY-SA 4.0 26778,2,,26774,3/12/2021 4:10,,0,,"

Compression will be lossy; some detailed features of the state will be left out of the calculation.

A common technique is to use a max-pool function or layer (before feeding the state to the policy network, if the RL here is deep RL).

Max-pooling is very lossy. You could instead use classic lossless compression algorithms such as Zip or Rar, but using those inside a model pipeline is awkward and extremely slow.

Possible solutions, if lossy data is acceptable, are commonly max-pooling (giving high-contrast data) and average-pooling (giving blurred data).

To keep data intact, TensorFlow can compress tensors: "only sacrificing a tiny fraction of model performance. It can compress any floating point tensor to a much smaller sequence of bits."
See: https://github.com/tensorflow/compression

",2844,,2844,,3/12/2021 13:59,3/12/2021 13:59,,,,3,,,,CC BY-SA 4.0 26779,2,,26747,3/12/2021 7:41,,0,,"

Pasting an answer here from a colleague, credits go to Amelie Groud

"we encountered the same issue during our PoC, not only with encoding but also with normalization (seeing "age=0.18" wasn't super meaningful).

If you are using model-agnostic techniques (such as Lime, PDP or ANCHOR), you should be able to apply XAI on your original data, before applying the encoding. With these techniques, usually we need 2 main elements: 1. an input dataset and 2. a "predict()" function (calling your trained model). From there you have 2 possibilities:

  • use your already transformed data and the predict function of your trained model (which seems to be what you are describing)
  • or, use the original, unchanged data and have a custom predict function which first apply the transformation needed (encoding, normalization, etc.) and then call the predict() function of your trained model.

If you choose the 2nd option, then you are free to choose any encoding strategy you see fit. Plus, most XAI techniques have some embedded capacity to deal with categorical data so instead of seeing "pink=1, blue=0, green=0", you will see "color=pink".

Here is an example with LIME but it works similarly for other XAI techniques: https://marcotcr.github.io/lime/tutorials/Tutorial%20-%20continuous%20and%20categorical%20features.html with the predict function defined as predict_fn = lambda x: rf.predict_proba(encoder.transform(x))"

Please feel free to add more answers/views if you have a different way of dealing with the question at hand

Regards, Koen

",45156,,,,,3/12/2021 7:41,,,,0,,,,CC BY-SA 4.0 26781,1,,,3/12/2021 13:07,,1,37,"

I know how to use the SIFT algorithm for images, but I have never used it for other kinds of data. I have tabular data (x, y, z, time), where x, y, z are the joint positions along the x, y, z coordinates. Now, can I apply the SIFT algorithm to this data to find features that will act as input to traditional machine learning algorithms, like SVM, DT, etc.?

",28048,,2444,,3/12/2021 17:39,3/12/2021 17:39,Can I use the SIFT feature detector on data other than images?,,0,2,,,,CC BY-SA 4.0 26785,1,,,3/12/2021 13:56,,2,84,"

I've trained a CNN-LSTM model but the results weren't satisfactory, so I took a look at my weight distributions and this is what I got:

I don't understand. Is this layer learning anything? Or no?

Update: I've also tried the LeakyReLU activation and also removed the L2 regularization, and this is what I got. So I guess my layer isn't learning, or does it take more epochs to train LSTM layers? The gradients are not vanishing, because the CNN layer before this one is changing.

",42948,,42948,,3/15/2021 14:20,3/15/2021 14:20,Is this LSTM layer learning anything?,,1,0,0,,,CC BY-SA 4.0 26787,2,,26785,3/12/2021 15:52,,1,,"

What may be more informative in terms of whether it is learning or not, is to track gradients.

Through gradients you will be able to better understand whether activations are receiving error terms to adjust the weights accordingly or not. In the latter case, this would be characteristic of the vanishing gradients problem.

You are developing with tf.keras, in which case you can add to your tensorboard callback: tf.keras.callbacks.TensorBoard(write_grads=True)

Additional experiments may include:

  • trying shorter length sequences and compare grad flow
  • try replacing tanh with alternative activation functions in the LSTM layers of your model, e.g. tf.keras.layers.LSTM(units, activation=tf.keras.layers.LeakyReLU()) (see the tanh saddle points problem)

(graph borrowed from d2l)

",40560,,40560,,3/12/2021 16:03,3/12/2021 16:03,,,,7,,,,CC BY-SA 4.0 26792,1,27104,,3/13/2021 8:36,,0,265,"

I posted this question on Stack Overflow and got downvoted for an unmentioned reason, so I'll repost it here, hoping to get some insights.

This is the plot

This is the code:

with strategy.scope():

  model2 = tf.keras.applications.VGG16(
    include_top=True,
    weights=None,
    input_tensor=None,
    input_shape=(32, 32, 3),
    pooling=None,
    classes=10,
    classifier_activation="relu",
  )

  model2.compile(optimizer='adam',
                loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                metrics=['accuracy'])
  
  history = model2.fit(
            train_images, train_labels,epochs=10, 
            validation_data=(test_images, test_labels)
            )

I'm trying to train VGG16 from scratch, hence not importing its weights. I also tried a model which I created myself, with the same hyperparameters, and that worked fine.

Any help is highly appreciated

Here's the full code

",44529,,,,,3/31/2021 15:54,Validation Accuracy remains constant while training VGG?,,1,0,,,,CC BY-SA 4.0 26794,1,,,3/13/2021 10:12,,1,726,"

I am new to BERT and NLP, and I am a little confused about tokenization and word embeddings. My question is: if I use the BertTokenizer for tokenizing a sentence, do I then have to use BertEmbedding to generate the corresponding word vectors of the tokens, or can I train my own word2vec model to generate my word embeddings while using the BertTokenizer?

Pardon me if this question doesn't make any sense.

",45386,,45386,,3/13/2021 11:16,5/8/2022 6:09,Should I need to use BERT embeddings while tokenizing using BERT tokenizer?,,2,0,,,,CC BY-SA 4.0 26795,2,,26794,3/13/2021 10:55,,0,,"

word2vec and BERT are completely different things. You don't need to use the BERT tokenizer for training a word2vec model, but, if you want to go ahead, here's a link which can help you.

",37203,,,,,3/13/2021 10:55,,,,4,,,,CC BY-SA 4.0 26801,1,26868,,3/13/2021 17:35,,3,120,"

I am new to GANs. I noticed that everybody generates a random vector (usually 100-dimensional) from a standard normal distribution $N(0, 1)$. My question is: why? Why don't they sample these vectors from a uniform distribution $U(0, 1)$? Does the standard normal distribution have some properties that other probability distributions don't have?

",36107,,2444,,3/17/2021 11:02,3/17/2021 11:54,Why do we sample vectors from a standard normal distribution for the generator?,,1,0,,,,CC BY-SA 4.0 26803,1,26805,,3/13/2021 19:00,,0,241,"

Performing a prediction of a continuous y target using Keras, the simple structure of the code revolves around:

model = Sequential()  
model.add(Dense(200, input_dim=15, activation= "relu"))  
model.add(Dense(750, activation= "relu"))  
model.add(Dense(500, activation= "relu"))  
model.add(Dense(750, activation= "relu"))  
model.add(Dense(500, activation= "relu"))  
model.add(Dense(200, activation= "relu"))  
model.add(Dense(100, activation= "relu"))  
model.add(Dense(50, activation= "relu"))  
model.add(Dense(1)) 

model.compile(loss= 'mse' , optimizer='adam', metrics=['mse','mae'])  
history=model.fit(X_train, y_train, batch_size=50,  epochs=150,  
                  verbose=1, validation_split=0.2)

This has resulted in the following metric chart:

What might be causing these, and how to eliminate (or greatly reduce) them?

UPDATE: Just reduced the learning rate to 0.0001 per Neil Slater's suggestion, and the loss curve may have had the spikes reduced, though the scale of the graph has changed. The training loss has increased from 0.00007 to 0.00037, the validation loss from 0.0014 to 0.002, and the prediction error increased from 0.037 to 0.046.

I then changed epsilon from it's value of 1e-07 to 0.1 and increased the epochs from 150 to 500. The validation loss increased to 0.0082 and the prediction error increased to 0.093, with the corresponding model loss shown below.

While not an overall improvement at either step, this did remove the spikes as I requested, hence Neil's advice gives me additional considerations to explore and measure within the Adam optimizer (along with other optimizers), so I consider this to have been an important learning experience. One such exploration uncovered this more detailed explanation of optimizers than I had been exposed to before, as well as this 3D visualization of loss topologies and the effect differing optimizers and parameters have on finding the optimal minima (keep a sharp eye on the options being chosen in the upper right corner).

",45390,,45390,,3/14/2021 18:48,3/14/2021 18:48,"""Porpoising"" in latter stages of validation loss and MSE charts in Keras",,1,0,,,,CC BY-SA 4.0 26805,2,,26803,3/13/2021 20:05,,0,,"

What might be causing these, and how to eliminate (or greatly reduce) them?

It is difficult to be sure just from the graph, but I note you are using the Adam optimiser. It shares with a few other optimisers (most notably RMSProp) that it divides current gradients by a rolling mean of recent gradients to set step sizes. This can cause some minor instability when gradients get close to zero for a long while before growing again. This might happen at a saddle point where only some fraction of the network parameters are critical to results for a few iterations before hitting some other direction of slope where changing the "quiet" parameters becomes important again. This is more likely to occur as loss values become small, and close to perfect convergence.

There are a couple of hyperparameters in Adam that may reduce the effect:

  • Reduce learning rate. The default learning rate in Adam is often set to 0.001, but you will find a lot of researchers will reduce that to magnitudes around 0.0001 or 0.00001

  • Increase epsilon. This is effectively the minimum rolling average gradient. The default is 1e-7 but it sometimes needs to be increased significantly, it depends on the loss surface. The official documentation suggests that it may even be useful to increase it up to 1.0
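
A minimal sketch of setting these two hyperparameters, assuming the Keras Adam optimizer used in the question:

import tensorflow as tf

# More conservative than the defaults (learning_rate=0.001, epsilon=1e-7)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, epsilon=0.1)

# then pass it in place of the 'adam' string:
# model.compile(loss='mse', optimizer=optimizer, metrics=['mse', 'mae'])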

",1847,,1847,,3/13/2021 20:10,3/13/2021 20:10,,,,0,,,,CC BY-SA 4.0 26806,1,,,3/13/2021 21:54,,4,3140,"

Is there any tutorial that walks through a multi-agent reinforcement learning implementation (in Python) using libraries such as OpenAI's Gym (for the environment), TF-agents, and stable-baselines-3?

I searched a lot, but I was not able to find any tutorial, mostly because Gym environments and most RL libraries are not for multi-agent RL.

",41169,,2444,,3/16/2021 10:17,4/2/2022 21:26,How do I get started with multi-agent reinforcement learning?,,2,2,,,,CC BY-SA 4.0 26809,1,26812,,3/14/2021 5:46,,2,104,"

Hi, I am working on a project which requires the You Only Look Once (YOLO) algorithm in order to classify and localise objects within images. I have to prepare my dataset (which has 2 classes, predicts 6 objects per grid cell, and splits the 448 x 448 image into a 7x7 grid). What would be a viable approach to do that? I found this code, in this article. However, I do not understand why he has done what he has done, e.g. why is he specifically checking the 24th element of the "box", and which element of the box would I have to check? Is there any tutorial running through that? Would it be possible for someone to explain or even adapt his approach to fit my dataset?

FYI: I am coding the YOLO algorithm from scratch

",32636,,,,,4/22/2021 13:04,Preparing data set for the YOLO algorithm,,1,1,,,,CC BY-SA 4.0 26810,1,,,3/14/2021 6:16,,0,52,"

Are there any videos or other books/notes/slides that anyone has come across that follow Computer Vision Algorithms and Applications by Richard Szeliski? We are using this book in class but the professor did a bad job explaining and I have some trouble getting through the book. Thanks a lot!

",38729,,,,,3/14/2021 6:16,Resources for Computer Vision Algorithms and Applications,,0,2,,,,CC BY-SA 4.0 26811,1,,,3/14/2021 6:41,,1,17,"

I was reading in a On the Decision Boundary of Deep Neural Networks that the final layer of a MLP can be equated to an SVM and can generate decision boundaries similar to methods with SVM. I was wondering if using this boundary detection method or another can you quantify how much probability a model assigns to each bin where a bin is a class. So for example, if a project of an input before the final layer has a SVM margin of let's say 4 from the boundaries and classification 1 and 2, can we determine how much probability it'll give class 1 or 2 after the final layer?

",30885,,,,,3/14/2021 6:41,Can you correlate decision boundary of final layer of a neural network to predictive distribution?,,0,0,,,,CC BY-SA 4.0 26812,2,,26809,3/14/2021 7:46,,1,,"

OK, let's go step by step.

What you are working on is YOLOv1. In this version of the YOLO algorithm, the maximum number of bounding boxes the model can return is 7x7 = 49, one per cell, since the output shape is 7x7x30.

For each cell, the depth of the output is 30 because the number of labels in PASCAL VOC 2012 is 20 (the author of YOLOv1 trained on this dataset), so indices 0 to 19 represent the label of that bounding box. Indices 20 to 23 are the position and size of that box.

The 24th element represents 2 things. First, it is the confidence of that box; since this is ground truth, the confidence should be 1. Second, you know that YOLOv1 can only return 49 boxes at maximum (actually, you can edit the number of boxes yourself), so the ground truth should only handle one box per cell; hence the 24th element is the binary value that makes sure there are no duplicate bounding boxes in one cell. (Indices 25 to 29 exist because the author predicts 2 bounding boxes per cell.)

In your case, with 2 classes and 2 boxes per cell, the output would be 7x7x(2 + 5 x 2) = 7x7x12 (or 7x7x(2 + 5 x 6) = 7x7x32 if you keep your 6 boxes per cell).
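
To make the layout concrete, here is a rough sketch of filling a 7x7x12 ground-truth tensor for one box in the 2-class, 2-box case. The ordering (class scores first, then x, y, w, h, confidence for the first box) is an assumption for illustration; the code you linked may order the elements differently:

import numpy as np

S, C, B = 7, 2, 2                      # grid size, classes, boxes per cell
target = np.zeros((S, S, C + 5 * B))   # 7 x 7 x 12

def add_box(target, class_id, x, y, w, h):
    # x, y, w, h are normalised to [0, 1] relative to the whole image
    col, row = int(x * S), int(y * S)           # cell containing the box centre
    if target[row, col, C + 4] == 1:            # this cell already owns a box
        return
    x_cell, y_cell = x * S - col, y * S - row   # centre relative to the cell
    target[row, col, class_id] = 1              # indices 0..1: class one-hot
    target[row, col, C:C + 4] = [x_cell, y_cell, w, h]   # indices 2..5: box
    target[row, col, C + 4] = 1                 # index 6: confidence / "owned" flag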

",41287,,,,,3/14/2021 7:46,,,,0,,,,CC BY-SA 4.0 26813,1,26816,,3/14/2021 12:16,,1,246,"

I'm trying to use Q-learning in order to solve Wumpus world environment.

Wumpus world is a toy problem on a 4x4 gridworld. The agent starts at the entry position of the cave and looks for gold (the agent can sense when it is on the gold field); then it has to pick the gold up and leave the cave at the entry position. Some fields are safe, others contain a pit or the wumpus (a monster). If the agent moves to a pit or wumpus field, it dies. The fields next to the wumpus or a pit (not diagonally!) have properties that the agent can sense - a stench (wumpus) or a breeze (pit). The agent receives a positive reward if it leaves the cave with the gold and a negative one if it dies. Action space: turn right/left, move forward, shoot an arrow (if shot in the right direction it can kill the wumpus; only 1 arrow is available), pick up gold, leave the cave. Here: https://www.javatpoint.com/the-wumpus-world-in-artificial-intelligence you can find a more detailed description of the environment.

It is easy to solve this problem if the gridworld is constant. I have a huge problem even starting to think about it if the gridworld is random in every learning episode (random fields and random size) and also random during testing. How should I define it (especially the state space)? How should I create the Q table? I'm using Python.

Thank you in advance for any help.

",45354,,45354,,3/14/2021 15:07,3/14/2021 16:01,Q-learning in gridworld with random board,,1,0,,,,CC BY-SA 4.0 26815,1,,,3/14/2021 15:00,,1,31,"

I'm trying to research modeling that can help me find very specific patterns in data. I've done a fair amount of work about generalized predictions with machine learning, but I'm very confused about how to approach something that gets into very specific predictions.

As an example, consider an IT infrastructure that has 1000 Cisco routers across the world. I'm trying to find patterns in this data when outages occur. Outages are typically either power related or transport circuit related. I have historical data for over a year. I'm trying to build a model to help predict the outage type, but I want it to have very specific knowledge of previous outages. Maybe there is a pattern where, when routers 17, 245 and 813 fail, it is always a power problem at each of the sites.

I will have geospatial, diagnostic, and other IT-type information as input data.

I know a lot of modeling can generalize this type of scenario, but I'm trying to see if there are options to remember more specific patterns within a large dataset.

",45410,,,,,3/14/2021 15:00,Finding Specific Patterns in Data,,0,1,,,,CC BY-SA 4.0 26816,2,,26813,3/14/2021 15:54,,1,,"

From your linked description of the game, we can see it has a key property when used normally in AI teaching:

  • Partially observable: The Wumpus world is partially observable because the agent can only perceive the close environment such as an adjacent room.

This makes sense, the problem of avoiding hazards would be trivial if the full map was revealed to the agent. The problem is also not about learning a specific map.

You could use the agent's perception to construct a simple state table based on current observations:

  • Current location
  • Current facing
  • Boolean flags for stench, breeze, glitter, bump, scream

This would be relatively easy to build into a Q table, and might have some success. However, it would likely perform a lot worse than the propositional logic and planning suggested at https://www.javatpoint.com/the-wumpus-world-in-artificial-intelligence for two reasons:

  • Agent will not take account of knowledge specific to the current map that it has gathered. This is critical, because the stench and breeze sensors only tell you that at least one of the adjacent rooms has a hazard. In theory you know that this will not be any of the rooms that the agent has already visited, but a simple state representation based on current observations will not capture this.

  • Agent will not plan using the deterministic rules of the game that you know and could code for.

There are a few different approaches you could take to improve on these issues. Sticking with Q learning and trying to solve this by improving the state representation, you could look at the suggested knowledge-building structure from the java T point site, and replicate something like it as input features:

  • Current facing
  • Whether or not the last move forward resulted in a bump
  • Whether or not there has been a scream
  • For each allowed room, a binary flag set to true if the room has been visited by the agent
  • For each allowed room, a set of binary variables covering the three observations that are about the room: stench, breeze, glitter
  • Either the current location separately, or another binary variable added to each grid square set to true for the one room that the agent is in

When multiplied by 16 to cover each room, you will end up with ~84 binary flags. Although many combinations will not be possible, this is still going to be far too large a state space to use with a Q table. You would probably use a simple neural network and DQN agent to solve this problem using Q learning.

",1847,,1847,,3/14/2021 16:01,3/14/2021 16:01,,,,4,,,,CC BY-SA 4.0 26817,1,,,3/14/2021 16:21,,2,81,"

What is the effect of K in K-NN on the VC dimension? When K increases, is the VC dimension decreased or increased, or we can't say anything about this? Is there a reference book that discusses this?

",45412,,2444,,3/16/2021 0:26,3/16/2021 0:26,What is the effect of K in K-NN on the VC dimension?,,0,0,,,,CC BY-SA 4.0 26819,1,26898,,3/14/2021 18:15,,1,224,"

I have come across a Google paper that uses the REINFORCE algorithm (a Policy Gradient Method) for a case where the trajectory of the episodes it proposes would be only one step.

When trying to replicate the experiments they propose I found that there are some problems with the stability of the method (maybe that's why it is not accepted by peer review).

Researching on my own, I found something I suspected: the problem they present could be solved as a multi-armed bandit problem, as in this link. Because of this, I am in doubt as to whether using methods based on trajectories (such as policy gradient methods) has some mathematical problem in situations where the trajectory is a single step.

PS: I think the problem with this paper may also be that they average after only one execution of a trajectory, and not over k trajectories as is necessary for a policy gradient method, so I would also like to hear the opinion of more people on this issue.

",22930,,,,,3/18/2021 14:18,It is mathematically correct to use a Policy Gradient method for 1-step trajectories?,,1,3,,,,CC BY-SA 4.0 26821,1,,,3/14/2021 23:16,,0,50,"

I was going through paper titled "Algorithms for Inverse Reinforcement Learning" by Andrew Ng and Russell.

It states following basics:

  • MDP $M$ is a tuple $(S,A,\{P_{sa}\},\gamma,R)$, where

    • $S$ is a finite set of $N$ states
    • $A=\{a_1,...,a_k\}$ is a set of $k$ actions
    • $\{P_{sa}(.)\}$ are the transition probabilities upon taking action $a$ in state $s$.
    • $R:S\rightarrow \mathbb{R}$ is a reinforcement function (I guess this is what is also called a reward function). For simplicity in exposition, we have written rewards as $R(s)$ rather than $R(s,a)$; the extension is trivial.
  • A policy is defined as any map $\pi : S \rightarrow A$

  • Bellman Equation for Value function $V^\pi(s)=R(s)+\gamma \sum_{s'}P_{s\pi(s)}(s')V^\pi(s')\quad\quad...(1)$

  • Bellman Equation for Q function $Q^\pi(s,a)=R(s)+\gamma \sum_{s'}P_{sa}(s')V^\pi(s')\quad\quad...(2)$

  • Bellman Optimality: The policy $\pi$ is optimal iff, for all $s\in S$, $\pi(s)\in \text{argmax}_{a\in A}Q^\pi(s,a)\quad\quad...(3)$

  • All these can be represented as vectors indexed by state, for which we adopt boldface notation $\pmb{P,R,V}$.

  • Inverse Reinforcement Learning is: given MDP $M=(S,A,P_{sa},\gamma,\pi)$, finding $R$ such that $\pi$ is an optimal policy for $M$

  • By renaming actions if necessary, we will assume without loss of generality that $\pi(s) = a_1$.

Paper then states following theorem, its proof and a related remark:

Theorem: Let a finite state space $S$, a set of actions $A=\{a_1,..., a_k\}$, transition probability matrices ${\pmb{P_a}}$, and a discount factor $\gamma \in (0, 1)$ be given. Then the policy $\pi$ given by $\pi(s) \equiv a_1$ is optimal iff, for all $a = a_2, ... , a_k$, the reward $\pmb{R}$ satisfies $$(\pmb{P}_{a_1}-\pmb{P}_a)(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succcurlyeq 0 \quad\quad ...(4)$$ Proof:
Equation (1) can be rewritten as
$\pmb{V}^\pi=\pmb{R}+\gamma\pmb{P}_{a_1}\pmb{V}^\pi$
$\therefore\pmb{V}^\pi=(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\quad\quad ...(5)$
Putting equation $(2)$ into $(3)$, we see that $\pi$ is optimal iff
$\pi(s)\in \text{arg}\max_{a\in A}\sum_{s'}P_{sa}(s')V^\pi(s') \quad...\forall s\in S$
$\iff \sum_{s'}P_{sa_1}(s')V^\pi(s')\geq\sum_{s'}P_{sa}(s')V^\pi(s')\quad\quad\quad\forall s\in S,a\in A$ $\iff \pmb{P}_{a_1}\pmb{V}^\pi\succcurlyeq\pmb{P}_{a}\pmb{V}^\pi\quad\quad\quad\forall a\in A\text{\\} a_1 \quad\quad ...(6)$ $\iff\pmb{P}_{a_1} (\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succcurlyeq\pmb{P}_{a} (\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R} \quad\quad \text{...from (5)}$
Hence proved.

Remark: Using a very similar argument, it is easy to show (essentially by replacing all inequalities in the proof above with strict inequalities) that the condition $(\pmb{P}_{a_1}-\pmb{P}_a)(\pmb{I}-\gamma\pmb{P}_{a_1})^{-1}\pmb{R}\succ 0 $ is necessary and sufficient for $\pi\equiv a_1$ to be the unique optimal policy.

I don't know if the above text from the paper is relevant to what I want to prove; still, I stated the above text as background.

I want to prove following:

  1. If we take $R : S → \mathbb{R}$—there need not exist $R$ such that $π^*$ is the unique optimal policy for $(S, A, T, R, γ)$
  2. If we take $R : S × A → \mathbb{R}$. Show that there must exist $R$ such that $π^*$ is the unique optimal policy for $(S, A, T, R, γ)$.

I guess point 1 follows directly from the above theorem, as it says "$\pi(s)$ is optimal iff ..." and not "uniquely optimal iff". Also, I feel it follows from the operator $\succcurlyeq$ in equation $(6)$. In addition, I feel it's quite intuitive: if we have the same reward in a given state for every action, then different policies choosing different actions will yield the same reward from that state, hence resulting in the same value function.

I don't feel point 2 is correct. I guess this directly follows from the remark above, which requires an additional condition to hold for $\pi$ to be "uniquely optimal", and this condition won't hold if we simply define $R : S × A → \mathbb{R}$ instead of $R : S → \mathbb{R}$. Additionally, I feel this condition would hold iff we had $=$ in equation $(3)$ instead of $\in$ (as this would replace all $\succcurlyeq$ with $\succ$ in the proof). This also follows directly from point 1 itself: we can still have the same reward for all actions from a given state despite defining the reward as $R : S × A → \mathbb{R}$ instead of $R : S → \mathbb{R}$, which is the case with point 1.

Am I correct with the analysis in the last two paragraphs?

Update

After some more thinking, I felt I was doing it all wrong. Also, I feel the text from the paper which I quoted above is not of much help in proving these two points. So let me restate my new intuition for the proofs of the two points:

  1. For $R: S\rightarrow \mathbb{R}$, if some state $S_1$ and a next state $S_2$ have two actions between them, $S_1-a_1\rightarrow S_2$ and $S_1-a_2\rightarrow S_2$, and if an optimal policy $π_1^*$ chooses $a_1$, then $π_2^*$ choosing $a_2$ will also be optimal, thus making neither "uniquely" optimal, since both $a_1$ and $a_2$ will yield the same reward, as the reward is associated with $S_1$ instead of with $(S_1,a_x)$.

  2. For $R: (S,A)\rightarrow\mathbb{R}$, we can assign a large reward, say $+∞$, to all actions specified by the given $π^*$ and $-∞$ to all other actions. This reward assignment will make $π^*$ the unique optimal policy.

Is the above reasoning correct and enough to prove the given points?

",41169,,41169,,3/16/2021 15:09,3/16/2021 15:09,"Proving existence or non existence of reward function to make given policy ""uniquely"" optimal when reward function is dependent only on S or both S,A",,0,3,,,,CC BY-SA 4.0 26822,1,,,3/15/2021 0:01,,2,99,"

There are at least three questions on this site related to this

  1. What is the effect of using pooling layers in CNNs?
  2. Is pooling a kind of dropout?
  3. What are the benefits of using max-pooling in convolutional neural networks?

I got the following useful information regarding the purpose of pooling. As per my understanding, the purposes of pooling, based on priority, in general, are as follows:

  1. To decrease the size of the feature maps
  2. To make the model stronger in feature extraction

Are there any other purposes of pooling in CNNs besides these?

",18758,,18758,,9/30/2021 0:42,9/30/2021 0:42,What are the purposes of pooling in CNNs?,,0,2,,,,CC BY-SA 4.0 26823,1,,,3/15/2021 3:08,,2,32,"

Consider maximizing the function $R(w)$ with parameter $w$ using gradient ascent. However, we don't know the gradient $\nabla_wR(w)$ formula. Now suppose $w$ is sampled from a probability distribution $\pi(w,\theta)$ parameterized by $\theta$. Then we can define

$$J(\theta)=E[R(w)]=\int R(w)\pi(w,\theta)dw.$$ And we have

$$\nabla_\theta J(\theta)=E[R(w)\nabla_\theta \log \pi(w,\theta)]$$.

Then, if we sample $w_1,\ldots,w_N$, we can estimate the gradient as $$\nabla_\theta J(\theta)\approx \frac{1}{N}\sum_{i=1}^N R(w_i) \nabla_\theta \log \pi(w_i, \theta).$$

It looks like the REINFORCE algorithm in deep reinforcement learning. Does this algorithm have a name? Is the above derivation correct?

I wonder if it is useful for optimizing the function $R(w)$.

",45424,,2444,,3/16/2021 10:04,3/16/2021 10:04,What is the name of this algorithm that estimates the gradient with an average by sampling from a distribution?,,0,0,,,,CC BY-SA 4.0 26826,2,,26806,3/15/2021 7:28,,2,,"

After checking the Internet, you will probably find several resources such as

Try to understand the principles first (see above). After some reasonable amount of coding you can adapt OpenAI gym. Good luck!

Update 17 March 2022:

You may want to check this popular repository as well https://github.com/Farama-Foundation/PettingZoo

",23360,,23360,,3/17/2022 11:10,3/17/2022 11:10,,,,0,,,,CC BY-SA 4.0 26827,2,,12933,3/15/2021 11:22,,0,,"

This may not be directly answering your question, but predicting market movement based on past prices is probably not very sensible.

The foundation of AI/ML and statistics is the assumption that future samples are drawn from the same population as the past samples, and for market prices this assumption quite frankly does not hold. See the relevant figure below.

As far as accuracy goes, it is all relative. If you have a CPU which is right 0.999 of the time, you have a useless piece of silicon, but if you have 0.501 accuracy on stock market prediction, then you are the richest man in the world. That said, historical stock data is just not a phenomenon that repeats itself based on its own underlying distribution.

Always remember that when it comes to markets, past performance is not a good predictor of future returns—looking in the rear-view mirror is a bad way to drive. Machine learning, on the other hand, is applicable to datasets where the past is a good predictor of the future.

Deep learning with python, Francois Chollet

",40560,,38123,,3/15/2021 13:04,3/15/2021 13:04,,,,4,,,,CC BY-SA 4.0 26828,2,,11613,3/15/2021 11:38,,1,,"

I know it's too late to answer your query after 1.5 years, but here it is anyway.

The inference algorithm is explained at https://charon.me/posts/pytorch/pytorch_seq2seq_5/#inference

Implementation of the algorithm is available at: https://github.com/pytorch/fairseq.

One could get a complete understanding from their source code.

",45437,,,,,3/15/2021 11:38,,,,0,,,,CC BY-SA 4.0 26829,1,,,3/15/2021 12:47,,1,176,"

Given the LSTM model with 3 cells shown below, what would be the input to the left-most cell's c(t-1) and h(t-1)?

",45440,,,,,6/9/2022 8:03,What is the input to the left most LSTM cell c(t-1) and h(t-1)?,,2,0,,,,CC BY-SA 4.0 26831,2,,26829,3/15/2021 16:44,,0,,"

Your question is related to the initial states of LSTM, where c(t-1) is the cell state (memory) and h(t-1) is the previous LSTM block output.

As pointed out here, it is reasonable to assume that those are random values.
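As a small illustration (a Keras sketch with made-up sizes), the initial states default to zero vectors, but they can also be passed in explicitly, e.g. as random values as suggested above:

import numpy as np
import tensorflow as tf

lstm = tf.keras.layers.LSTM(8, return_state=True)
x = np.random.rand(1, 5, 3).astype("float32")   # (batch, time steps, features)

out, h, c = lstm(x)                              # default initial h and c are zero vectors

# explicit (here random) initial states
h0 = tf.random.normal((1, 8))
c0 = tf.random.normal((1, 8))
out2, h2, c2 = lstm(x, initial_state=[h0, c0])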

",23360,,,,,3/15/2021 16:44,,,,0,,,,CC BY-SA 4.0 26832,1,,,3/15/2021 18:54,,0,438,"

I am training a WGAN-GP network based on the following paper, though I am using a different dataset. Now, for the first ~ 60-70 epochs, my network trained really well, which I could see in the loss going down, but I also made sure to regularly check the quality of the images.

Unfortunately, what I am seeing now (for the last $20$ epochs) is that the generator is getting worse and worse; the images don't look that good anymore. I save checkpoints every epoch, so, in principle, I could stop training and get a state of the network from when it was still performing quite okay.

However, my question would be: How can I improve the training of the GAN? Would you decrease the learning rate?

I use a batch size of 124 and a learning rate of 1e-3. Maybe I could/should continue training (with a checkpoint that was still quite okay) with a learning rate of 5e-4?

Any other hints would be appreciated!

",43286,,2444,,3/16/2021 9:46,3/16/2021 9:46,What to do with a GAN that trained well but got worse over time?,,1,0,,,,CC BY-SA 4.0 26833,1,,,3/15/2021 19:01,,1,56,"

Are there any methods of regularisation for deep neural networks, particularly CNNs (or ANNs in general, as long as they also work on CNNs), that are related only to the network's architecture and not to the training itself?

I mean maybe something like how deep the network is, the number of conv/pooling/fully-connected layers, the size of the filters, the stride of the filters, etc. Any pointers that would help with regularisation are welcome.

EDIT: To explain in more depth what I mean, I might add that I am exploring an experimental idea for training CNNs that is not in any way related to typical gradient descent with backpropagation. That is why typical methods related to training will not work. I can already see that the models train satisfactorily on the training set but don't perform that well on the test set, and since I haven't figured out any regularization methods for this type of training, I thought maybe there are some related to the architecture that the training process would have to abide by.

",22659,,32410,,4/26/2021 16:31,4/26/2021 16:31,Are there regularisation methods related only to architecture of the CNNs?,,0,4,,,,CC BY-SA 4.0 26834,1,26842,,3/15/2021 21:04,,4,363,"

A few days ago, I started looking a bit more into AI and learning about the way it works, and it is very interesting, but I can't find a clear answer on how artificial intelligence is implemented in 3D shooter games, like COD, or practically any 3D game.

I just don't understand how they teach the enemies such different things, depending on the game, to fit its narrative. For example, is the enemy "AI" in 3D games just a bunch of if-else statements, or do they actually teach the enemies to think strategically? In big AAA games, you can clearly see that enemies hide from you during shootouts and peek out to shoot, instead of just rushing and getting killed.

So, how is the AI in 3d games implemented? How do they code it? You don't need to explain in detail, but just give me the idea. Do they use algorithms?

",45453,,2444,,3/16/2021 10:23,3/16/2021 17:23,How is the AI in 3d games implemented?,,2,0,,,,CC BY-SA 4.0 26836,2,,26832,3/16/2021 2:59,,1,,"

This is from my own experience with (Vanilla) GANs, so it might not translate exactly to your application, but maybe it gives some orientation.

  • your learning rate seems quite high. I've quite frequently found that 1e-5 is a good value for me. The training might take longer but will probably be more stable.
  • have you tried using dropout? It's a good regularisation mechanism that can prevent overfitting and also make training more stable. I've had good experience with a dropout rate of ~0.4 and keeping it in both the training and testing stages (see the sketch at the end of this answer).
  • what is your discriminator vs generator update ratio? In my experiments, ~5 discriminator updates per generator update have often been shown to improve things.
  • how many training samples do you have? If they are limited, it might be helpful to add a bit of noise to them. This manipulates the data a bit, but might make training more stable.
  • I have often not seen a major change when varying the batch size.
  • what are your network architectures? I have the feeling that small networks are often sufficient...

This is entirely based on my own experience, but maybe there are some interesting directions for you to explore. Happy to hear back from you about the success of these tests.
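To illustrate the dropout point from the list above, here is a minimal Keras-style sketch of a generator block (all names and sizes are illustrative); passing training=True keeps dropout active when sampling as well:

import tensorflow as tf
from tensorflow.keras import layers

def generator_block(x, units, rate=0.4):
    # dense layer followed by dropout that stays on in both training and testing stages
    x = layers.Dense(units, activation="relu")(x)
    x = layers.Dropout(rate)(x, training=True)
    return x

z = layers.Input(shape=(100,))
h = generator_block(z, 256)
h = generator_block(h, 256)
img = layers.Dense(28 * 28, activation="tanh")(h)
generator = tf.keras.Model(z, img)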

",44121,,,,,3/16/2021 2:59,,,,1,,,,CC BY-SA 4.0 26837,2,,26834,3/16/2021 3:35,,0,,"

Most of the so-called 'AI' enemies in games nowadays are based on hand-written logic and not machine learning, although they're called AIs.

Speaking of games, the first ML approach that comes to mind is reinforcement learning: let the bots play rounds to get rewards (positive) or punishments (negative rewards). These bots may eventually learn how to hide or peek out a little bit to shoot.

I can guess some possible actions for bots in shooter games: move up/down/left/right, jump, duck, find target (coordinates already in memory, valid only within the view angle), shoot.

Training a bot in 3D environments is much harder than in 2D games, since every viewing angle of the bot is another environment state (360 degrees around and 360 degrees up/down), and enemies can be at any location, in or out of the frame.

",2844,,2844,,3/16/2021 3:42,3/16/2021 3:42,,,,2,,,,CC BY-SA 4.0 26839,2,,20941,3/16/2021 9:00,,3,,"

What I was looking for is multi-agent RL, where I have multiple RL agents, each controlling the actions of one user. All RL agents/users take an action in each environment step and each gets its own reward.

I represent my RL agents' actions as a dict, containing the RL agent ID as key and its action as value. The different agents may either use the same or different policies, and they use this policy together with their own observations (also stored as a dict) to compute their actions.

action = {}
for agent_id, agent_obs in obs.items():
    # get the policy of the current agent
    policy_id = self.config['multiagent']['policy_mapping_fn'](agent_id)
    # use the policy and the agent's observations to compute its next action
    action[agent_id] = self.agent.compute_action(agent_obs, policy_id=policy_id)

My environment's step(self, action) function expects action to be such a dict and knows how to handle it. It applies all actions in the dict before calculating each agent's reward and progressing time in the environment. So basically, treating observations, actions, rewards (and info) as dicts solves the problem for me.

For the multi-agent RL approach, I used PPO and the Ray RLlib framework.

",19928,,,,,3/16/2021 9:00,,,,0,,,,CC BY-SA 4.0 26840,2,,25148,3/16/2021 9:34,,2,,"

Multiple attention heads in a single layer in a transformer is analogous to multiple kernels in a single layer in a CNN: they have the same architecture, and operate on the same feature-space, but since they are separate 'copies' with different sets of weights, they are hence 'free' to learn different functions.

In a CNN this may correspond to different definitions of visual features, and in a Transformer this may correspond to different definitions of relevance.[1]

For example:

| Architecture | Input | Kernel/Head 1 (Layer 1) | Kernel/Head 2 (Layer 1) |
|--------------|-------|--------------------------|--------------------------|
| CNN | Image | Diagonal edge-detection | Horizontal edge-detection |
| Transformer | Sentence | Attends to next word | Attends from verbs to their direct objects |
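As a small code illustration of the 'separate copies with different weights' analogy (a PyTorch sketch; all sizes are arbitrary):

import torch
from torch import nn

# each of the 4 heads has its own slice of the projection weights,
# analogous to separate convolution kernels in a CNN layer
attention = nn.MultiheadAttention(embed_dim=64, num_heads=4)
x = torch.randn(10, 2, 64)              # (sequence length, batch, features)
out, attn_weights = attention(x, x, x)  # self-attention: query = key = value = x
print(out.shape, attn_weights.shape)    # torch.Size([10, 2, 64]) torch.Size([2, 10, 10])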

Notes:

  1. There is no guarantee that these are human interpretable, but in many popular architectures they do map accurately onto linguistic concepts:

    While no single head performs well at many relations, we find that particular heads correspond remarkably well to particular relations. For example, we find heads that find direct objects of verbs, determiners of nouns, objects of prepositions, and objects of possessive pronouns...

  2. Multiple heads were originally proposed as a way to mitigate the lack of descriptive power that a single head in self-attention has:

    In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions [...] This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention...

",23503,,23503,,3/17/2021 12:19,3/17/2021 12:19,,,,0,,,,CC BY-SA 4.0 26841,1,,,3/16/2021 9:41,,0,49,"

I would like to use multi-target regression with scikit-learn. However, the examples I've seen use Xtrain and ytrain.

What is ytrain in regression?

I know y is used for classes in classification. My data is composed of two columns, and I want to predict both values independently (but using a single MTR). So, it is clear to me that X is a training set of n samples of those two values; however, I don't know how to create y.

My data is composed of two uncorrelated attributes; I guess this is Xtrain. What should I use as ytrain?

I'm basing my work on this source: https://machinelearningmastery.com/multi-output-regression-models-with-python/.

Any clue?

",44484,,2444,,3/16/2021 11:10,3/16/2021 11:10,Multi-target regression using scikit-learn without ytrain,,0,2,,,,CC BY-SA 4.0 26842,2,,26834,3/16/2021 9:52,,5,,"

Overlap between AI and "Game AI"

Nowadays, if you search for AI online, you will find a lot of material about machine learning, natural language processing, intelligent agents and neural networks. These are not the whole of AI by any means, especially in a historical context, but they have recently been very successful, and there is lots of published material about them.

Games, especially action games, tend not to use these new popular technologies because they have other priorities. However, under the broadest definitions, there is good overlap between AI in general and game AI, or more specifically enemy AI within a game:

  • Computer-controlled opponents within a game are effectively autonomous agents operating within an environment.

  • The game's purpose is to entertain the player, and this usually translates into requiring challenging and believable behaviour for actions taken by opponents (for some value of "believable").

  • A 3D map with a physics engine, other moving agents and possible hazards is a complex environment that needs some effort from a developer in order to define working behaviours.

  • Many games need to solve one or two classic AI problems such as pathfinding.

All the above means that the term "Game AI" is a good choice for describing how enemy agents are designed and implemented. However, there are large differences between goals of AI applied today in a 3D shooter and the goals of AI used in industry to control decision making.

  • Action games have a low CPU budget available per enemy for decision-making. Control systems in industry can have dedicated machinery, sometimes multiple machines, just to run the AI.

  • Action game priority is the game experience of the player, and this translates into strict targeting of a definition of "believable" for enemy behaviour that will entertain. In contrast, AI used in industry has accuracy as a high priority.

Typical game AI in 3D shooters

Developing enemy AI in games is a specialist skill, and only partly related to the purpose of this site. For detailed discussion you may want to look into Game Development Stack Exchange, which has many questions and answers on topics like Enemy AI.

Very roughly, the key traits of enemy AI in a game like Call of Duty could look like this (I have no inside knowledge of how COD does this, so may be inaccurate in places):

  • Enemies will have one or more modes of behaviour defined. This might be as simple as "Idle" and "Attack Player", or there could be many. For COD I would suspect there are many, and some may be in a hierarchy - e.g. there may be several sub-types of "Attack" behaviour depending on the enemy design.

  • Within a mode of behaviour, there may be a few different components defined, only some of which use AI routines. For instance, there will be specific animations related to walking or standing which are not AI, but there will also be some pathfinding AI if the agent is attempting to move around.

  • Decisions to switch between different modes of behaviour are often scripted with simple triggers, such as detecting player visibility using ray-casting between the location of player and enemy. When game AI fails to produce realistic results, it is often these high level triggers being brittle and not covering edge cases that causes it. Depending on complexity of the game and number of behaviours, there may be an algorithm managing the transitions between them, such as a finite state machine or behaviour trees.

  • Enemies will be presented with highly simplified observations of the game world from their perspective, in order to speed AI decisions.

  • AI systems will have CPU budget restricted. A 3D game spends significant resources rendering scenes, and will often try to render quickly, e.g. 100 times per second. There are multiple ways that AI budget can be allocated, but it is relatively common for AI calculations to be spread over multiple frames, and for search structures for tasks like path-finding to persist over time.

  • There will be a middle ground between scripted behaviour and AI-driven behaviour where analysis is done as part of game design. For instance, pathfinding routes may be pre-calculated to some degree. A system of way points is one example of this - it might be set by the game designer, it may be calculated by an AI component of the game asset-building pipeline, or it may be dynamically calculated and cached during a game session so that multiple enemy units can share it.

  • When complex AI is not needed to achieve a goal, when a simple calculation or "puppet-like" behaviour would do just fine, then this could be chosen instead. For instance, an enemy "aiming" at a player can be a simple vector calculation, perhaps with a fudge factor of a miss chance depending on the range to make the enemy seem fallible and not as much like a machine.

I do not know how the behaviour of enemies using cover is implemented in COD. A simple variant would be to have each enemy hard-coded to use a specific piece of cover that usually works well against the player due to map design. However, it is definitely possible to have a search algorithm running (perhaps over multiple frames) that assesses nearby locations that the enemy could reach against the player's current position, and then pick those as variables to plug into the "take cover and fire on player" scripted behaviour. That assessment would use the same kind of visibility detection between enemy and player that is used to trigger changes between "Idle" and "Attack" behaviours for the enemy.


As an aside, one related thing I find interesting is in using modern AI techniques to blend animations and make interactions between actors and the environment look more realistic. Although it is a lower-level feature than the question about enemy behaviour you are asking, it is an interesting cross-over between robotics and game playing that we will likely see in next-generation games, and probably applied first to the more detailed player model animations. Here is another example applied to a humanoid agent switching smoothly between different tasks, which any game player will recognise as something which game engines cannot do well at the moment - there are usually many jarring transitions caused by events in the game.

",1847,,1847,,3/16/2021 17:23,3/16/2021 17:23,,,,5,,,,CC BY-SA 4.0 26843,1,26863,,3/16/2021 11:22,,0,81,"

So I am new to NNs and I'm trying to go deeper and apply them to my subject. I would like to ask: can the input of a NN be 2 or more values, for example the measurement of a value, a distance, and a time? An example of input data would be [[1,2,3, ...], [11,22,33, ...], [5]], whose output is, for example, a single value of 1, or "cat", or a generated model.

",45464,,30725,,3/17/2021 5:02,3/17/2021 5:02,Can we use Multiple data as Input in a NN for a single Output?,,1,1,,,,CC BY-SA 4.0 26844,2,,26709,3/16/2021 11:45,,2,,"

Feature engineering may be necessary when one cannot achieve an acceptable error rate — within a budget or in principle.

A NN may be stalling due to an information bottleneck: too many pigeons, not enough holes. In that case, custom features may provide slightly better information compression. (Alas, this is not a panacea: some layer(s) may still be too narrow. That's why starting fat and pruning later is deemed superior, although not always affordable.)

Not a necessity, but a no-brainer nonetheless: having strong insight into the underlying processes is a clear call for feature engineering. Let's say it's apparent that a known meaningful transformation can hardly be approximated by a (reasonably small) subnetwork: mapping between periodic and non-periodic functions, as an example.

Personally, in my checklist, the feature stage is coupled with a reminder to search for a domain expert. In some practical sense, features are being extracted from people rather than from data. Our neural networks are pretty good too!

",27155,,,,,3/16/2021 11:45,,,,0,,,,CC BY-SA 4.0 26845,1,,,3/16/2021 12:52,,2,47,"

I have built a wildfire 'simulation' in Unity, and I want to train an RL agent to 'control' this fire. However, I think my task is quite complicated, and I can't work out how to get the agent to do what I want.

A fire spreads in a tree-like format, where each node represents a point burning in the fire. When a node has burned for enough time, it spreads in all possible cardinal directions (as long as it does not spread to where it came from). The fire has a list of 'perimeter nodes' which represent the burning perimeter of the fire. These are the leaf nodes in the tree. The rate of spread is calculated using a mathematical model (Rothermel model) that takes into account wind speed, slope, and parameters relating to the type of fuel burning.

I want to train the agent to place 'control lines' on the map, which completely stop the fire from burning. The agent will ideally work out where the fire is heading and place these control lines ahead of the fire, such that it runs into these lines. Could you please guide me (or refer me to any reading that would be useful) on how I can decide the rules by which I give the model rewards?

Currently, I give positive rewards for the following:

  • the number of fire nodes contained by a control line increases.

And I give negative rewards for:

  • the number of fire nodes contained by a control line does not increase.
  • the agent places a control line (these resources are valuable and can only be used sparingly).

I end the session with a win when all nodes are contained, and with a loss if the agent places a control line out of the bounds of the world.

I am currently giving the agent the following information as observations:

  • the direction that the wind is heading, as a bearing.
  • the wind speed
  • the vector position that the fire is started at
  • the current percentage of nodes that are contained
  • the total number of perimeter nodes

I am new to RL, so I don't really know the best way to choose these parameters to train on. Could you please guide me on how I can better solve this problem?

",27493,,27493,,3/17/2021 17:18,3/17/2021 17:18,Train agent to surround a burning fire,,0,0,,,,CC BY-SA 4.0 26846,2,,24643,3/16/2021 12:58,,2,,"

The authors of the original paper don't provide an explanation, but I suspect it's a combination of:

",23503,,,,,3/16/2021 12:58,,,,0,,,,CC BY-SA 4.0 26848,1,,,3/16/2021 14:41,,0,24,"

I have implemented a U-Net, similar to this implementation, but for a different dataset, this one, to segment roads.

It works fine on the images in the test folder, but, for example, when I pick a screenshot from Bing Maps and try to run inference with the trained model, this is returned:

Why is this happening?

I already tried to change the thresholding values, normalization, etc.

Tensorboard

",36672,,2444,,3/18/2021 10:30,3/18/2021 10:30,Why doesn't U-Net work with images different from the dataset?,,0,2,,,,CC BY-SA 4.0 26849,1,26856,,3/16/2021 16:20,,1,1850,"

I have a binary classification problem where I have 2 classes. A sample is either class 1 or class 2 - for simplicity, let's say they are mutually exclusive, so it is definitely one or the other.

For this reason, in my neural network, I have specified a softmax activation in the last layer with 2 outputs and a categorical crossentropy for the loss. Using tensorflow:

model=tf.keras.models.Sequential()
model.add(tf.keras.layers.Dense(units=64, input_shape=(100,), activation='relu'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Dense(units=32, activation='relu'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Dense(units=2, activation='softmax'))

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

Here are my questions.

  1. If the sigmoid is equivalent to the softmax, firstly is it valid to specify 2 units with a softmax and categorical_crossentropy?

  2. Is it the same as using binary_crossentropy (in this particular use case) with 2 classes and a sigmoid activation, and if so why?

I know that, for non-exclusive multi-label problems with more than 2 classes, a binary_crossentropy with a sigmoid activation is used. Why is the non-exclusivity of the multi-label case uniquely different from a binary classification with 2 classes only, with 1 output (class 0 or class 1) and a sigmoid with binary_crossentropy loss?

",34530,,2444,,3/16/2021 18:05,3/17/2021 8:59,Is it appropriate to use a softmax activation with a categorical crossentropy loss?,,1,0,,,,CC BY-SA 4.0 26850,1,26851,,3/16/2021 17:01,,1,129,"

I am learning about the deep deterministic policy gradient (DDPG) (Lillicrap et al, 2016) and got confused about the notation of the behavior policy.

Lillicrap et al. denote the policy gradient by

$$\nabla _{\theta^\mu} J \approx \mathbb{E}_{s_t \sim \rho^\beta} \left[ \nabla _{\theta^\mu} Q(s,a|\theta^Q) | s=s_t, a=\mu(s_t ; \theta ^\mu) \right],$$

where $\beta$ denotes the behavior policy (equation 5 in the original paper).

However, when they talk about exploration, they denote the exploration policy by $ \mu'$. This notation seems confusing to me since the target actor network is also denoted by $\mu'(s|\theta^{\mu'})$.

As far as I understand, the exploration policy is not directly linked to the target critic network but rather corresponds to the previously mentioned behavior policy $\beta$. Is this correct or am I understanding it wrong?

",45210,,45210,,3/18/2021 12:28,3/18/2021 12:28,Why is the behaviour policy denoted by $\beta$ and the exploration policy by $ \mu'$ in the DDPG paper?,,1,5,,,,CC BY-SA 4.0 26851,2,,26850,3/16/2021 17:28,,1,,"

You are right, it is sloppy notation by the authors. However, the target network is not necessarily linked to the behaviour policy $\beta$ either.

Essentially when they take the expectation with respect to $\rho^\beta$ they are taking expectation with respect to a state distribution induced by some policy $\beta$ that is not necessarily the same as our current policy -- this is what makes DDPG an off-policy algorithm.

The target actor network $\mu'$ is used in the loss function for the critic; the target for the critic (ignoring parameters for brevity) $y = r + \gamma Q(s', \mu'(s'))$ where $s'$ is the state we transitioned to from $s$ when we took an action $a$.

Now, $a$ was sampled as part of the tuple $(s, a, r, s')$ from our replay buffer, meaning that $a$ will have been chosen according to some past version of our policy, plus some exploration noise $\mathcal{N}$. The way this links to the behaviour policy $\beta$ is that because we have sampled it from some old version of our policy, i.e. not our current policy, it is instead coming from this behaviour policy $\beta$.

The target network $\mu'$ is simply a copy of the current actor network where the weights are updated using the polyak averaging technique and is not really related to the behaviour policy $\beta$, at least not in any useful way for you to think about it.
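For reference, here is a minimal PyTorch sketch of the polyak averaging mentioned above (the networks and the value of tau are purely illustrative):

import copy
import torch
from torch import nn

# toy actor network; the target starts as an exact copy
actor = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2), nn.Tanh())
target_actor = copy.deepcopy(actor)

tau = 0.005  # small constant, so the target tracks the actor slowly

def polyak_update(target, source, tau):
    # theta_target <- tau * theta + (1 - tau) * theta_target
    with torch.no_grad():
        for t_param, param in zip(target.parameters(), source.parameters()):
            t_param.mul_(1.0 - tau).add_(tau * param)

polyak_update(target_actor, actor, tau)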

",36821,,,,,3/16/2021 17:28,,,,0,,,,CC BY-SA 4.0 26854,1,26874,,3/16/2021 20:51,,0,330,"

I need to forecast two uncorrelated (and non-stationary) time series. A sample is presented below:

414049364,21773560
414049656,21773926
414049938,21774287
414050204,21774638
414050453,21774975
414050682,21775296
414050895,21775597
414051093,21775874
414051278,21776125
414051453,21776344
414051620,21776530
414051780,21776678
414051935,21776785
414052089,21776849
414052242,21776865

The above is the input (two attributes), and the output (prediction) is composed of two targets (the same attributes as the input), for instance:

414052252,21776765

However, the regression techniques I have found only consider forecasting a single attribute (target), not two or more. I've checked the following site, https://machinelearningmastery.com/multi-output-regression-models-with-python/, for multi-target regression and predictive clustering trees. Unfortunately, I don't know how to adapt my data to those techniques. Ideally, I would like to predict multiple steps.

Any idea?

",45477,,2444,,3/18/2021 10:22,3/19/2021 13:47,How to forecast multiple target attributes in Python?,,1,2,,,,CC BY-SA 4.0 26856,2,,26849,3/17/2021 0:01,,1,,"

Let's first recap the definition of the binary cross-entropy (BCE) and the categorical cross-entropy (CCE).

Here's the BCE (equation 4.90 from this book)

$$-\sum_{n=1}^{N}\left( t_{n} \ln y_{n}+\left(1-t_{n}\right) \ln \left(1-y_{n}\right)\right) \label{1}\tag{1},$$

where

  • $t_{n} \in\{0,1\}$ is the target
  • $y_n \in [0, 1]$ is the prediction (as produced by the sigmoid), so $1 - y_n$ is the probability that $n$ belongs to the other class

Here's the CCE (equation 4.108)

$$ -\sum_{n=1}^{N} \sum_{k=1}^{K} t_{n k} \ln y_{n k}\label{2}\tag{2}, $$ where

  • $t_{n k} = \{0, 1\}$ is the target of input $n$ for class $k$, i.e. it's $1$ when $n$ is labelled as $k$ and $0$ otherwise (so it's $0$ for all $K$ except for one of them)
  • $y_{n k}$ is the probability that $n$ belongs to the class $k$, as produced by the softmax function

Let $K=2$. Then equation \ref{2} becomes

$$ -\sum_{n=1}^{N} \sum_{k=1}^{2} t_{n k} \ln y_{n k} = -\sum_{n=1}^{N} \left( t_{n 1} \ln y_{n 1} + t_{n 2} \ln y_{n 2} \right) \label{3}\tag{3} $$

So, if $[y_{n 1}, y_{n 2}]$ is a probability vector (which is the case if you use the softmax as the activation function of the last layer), then, in theory, the BCE and CCE are equivalent in the case of binary classification. In practice, if you are using TensorFlow, to choose the most suitable loss function for your problem, you could take a look at this answer.
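As a quick numerical sanity check of this equivalence (a NumPy sketch with made-up targets and predictions):

import numpy as np

t = np.array([1, 0, 1, 1])           # binary targets t_n
y = np.array([0.9, 0.2, 0.6, 0.7])   # y_n = P(class 1), so P(class 2) = 1 - y_n

bce = -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

# same data written as one-hot targets and 2-class probability vectors
T = np.stack([t, 1 - t], axis=1)
Y = np.stack([y, 1 - y], axis=1)
cce = -np.sum(T * np.log(Y))

print(bce, cce)  # identical up to floating-point error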

",2444,,2444,,3/17/2021 8:59,3/17/2021 8:59,,,,1,,,,CC BY-SA 4.0 26859,2,,26130,3/17/2021 1:57,,0,,"

There are some solutions for calculating q-values. Find the exact values:

  • Brute-force the action sequence to find the max (not practical)
  • Do recursion on the Bellman equation to get the max (the same as the action-sequence brute force, not practical)

Estimate the q-values:

  • Based on the specific problem to solve, apply some classic algorithms or human logic to estimate; during estimation, some heuristic tactics may be used
  • Do a lot of randomisation to find max, including Monte Carlo tree search

Gradually optimise the q-values:

(1) Do optimisation based on the Bellman equation (Q-learning): $$ q(s_t,a_t) = q(s_t,a_t) + \alpha\left(r + \gamma\max_{a_{t+1}} q(s_{t+1},a_{t+1}) - q(s_t,a_t)\right)$$

The Bellman equation holds when the temporal difference (the part multiplied by $\alpha$) reaches zero, which means the max at time t+1 has reached its exact value.
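For concreteness, a minimal NumPy sketch of update (1) on a toy q-table (all sizes and values are arbitrary):

import numpy as np

n_states, n_actions = 5, 2
q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9

def q_update(s, a, r, s_next):
    # temporal-difference error: target minus current estimate
    td = r + gamma * np.max(q[s_next]) - q[s, a]
    q[s, a] += alpha * td

q_update(s=0, a=1, r=1.0, s_next=2)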

(2) Do optimisation based on the Bellman equation (Q-network): fit the neural network to the expected value $$r + \gamma\max_{a_{t+1}} q(s_{t+1},a_{t+1})$$

",2844,,,,,3/17/2021 1:57,,,,0,,,,CC BY-SA 4.0 26860,1,26866,,3/17/2021 2:11,,2,1078,"

When using the Bellman equation to update a q-table or to train a q-network to fit greedy max values, the q-values very often reach a local optimum and get stuck, although a randomization rate ($\epsilon$) has been applied since the start.

The sum of q-values of all very first steps (of different actions at the original location of the agent) increases gradually until a local optimum is reached. It gets stuck, and this sum of q-values then starts decreasing slowly, bit by bit.

How can I avoid getting stuck in a local optimum, and how can I know whether the local optimum is already the global optimum? I can think of one option, but it's chaotic: switch on randomization again for a while; worse values may come at first, but maybe better ones in the future.

",2844,,32410,,12/11/2021 8:49,12/11/2021 8:49,How to avoid being stuck local optima in q-learning and q-network,,1,0,,,,CC BY-SA 4.0 26863,2,,26843,3/17/2021 4:25,,0,,"

Yes, it can take those multiple inputs; you would have to design the network like that (to have 3 input layers and then pool/merge them afterwards).
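For illustration, a minimal Keras sketch of such a multi-input design (all names, shapes, and sizes are made up):

import tensorflow as tf
from tensorflow.keras import layers

# three separate inputs (e.g. measurements, distances, a single time value), merged afterwards
measurements = layers.Input(shape=(10,))
distances = layers.Input(shape=(10,))
time_value = layers.Input(shape=(1,))

merged = layers.Concatenate()([measurements, distances, time_value])
h = layers.Dense(32, activation="relu")(merged)
output = layers.Dense(1, activation="sigmoid")(h)   # a single output value

model = tf.keras.Model([measurements, distances, time_value], output)
model.summary()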

But an easier solution would be to flatten this array, put it through a 1D CNN, and finally put it through a softmax.

",37203,,,,,3/17/2021 4:25,,,,11,,,,CC BY-SA 4.0 26864,1,,,3/17/2021 8:49,,1,364,"

I'm trying to understand the concept behind BNNs. They are based on Bayes' theorem:

$$p(w \mid \text{data}) = \frac{p(\text{data} \mid w)*p(w)}{p(\text{data})}$$

which boils down to

$$\text{posterior} = \frac{\text{likelihood} * \text{prior}}{\text{evidence}}.$$

I understand that, if we assume a Gaussian distribution for the model, the likelihood function comprises a product (or a sum, if we use logs) of each data point inserted into the Gaussian pdf. The parameters which we can change are $\mu$ and $\sigma^2$. We want to increase the likelihood because higher values of the Gaussian pdf mean higher probability.

How do things work when using a neural network? I assume that the likelihood function looks something like inserting each data point into the calculation (weighted sums) of the neural network. But does the neural network need a softmax layer at the end so that we can interpret the outputs as probabilities/likelihoods? Or do we measure likelihood by applying some error measurement, like cross-entropy or squared loss?

",45494,,45494,,3/17/2021 9:08,3/20/2021 10:07,What's the likelihood in Bayesian Neural Networks?,,1,0,,,,CC BY-SA 4.0 26865,1,26880,,3/17/2021 9:03,,2,86,"

I'm using a Neural Network as an agent in a simple car racing game. My goal is to train the network to imitate a brute-force tree search to an arbitrary depth.

My algorithm goes something like the following:

  • The agent starts with depth 0
  • Generate a bunch of test states
  • For each test state s:
    • For each action a:
      • s' = env.step(s, a)
      • for x in range(agent.depth): s' = env.step(s', agent.predict(s'))
      • calculate reward at state s'
    • Set the label for test state s as whichever action a produced the highest reward
  • Train the agent using the test states and labels, and increment agent.depth
  • Loop until desired depth

The idea is that an agent trained in this way to depth N should produce output close to a brute-force tree search to depth N...so by using it to play out N moves, it should be able to find me the best final state at that depth. In practice, I've found that it performs somewhere between N and N-1 (but of course it never reaches 100% accuracy).

My question is: what is the name of this algorithm? When I search for tree search with playout, everything talks about MCTS. But since there's no randomness here (the first step is to try ALL actions), what would this be called instead?

",44278,,,,,3/17/2021 21:57,What is this algorithm? Is it a variant of Monte-Carlo Tree Search?,,1,8,,,,CC BY-SA 4.0 26866,2,,26860,3/17/2021 9:38,,1,,"

I found out why the optimization process got stuck and never moved closer to the global optimum. It's because of the ratio between 'explore' and 'exploit'.

Basically, in RL, the agent explores by taking a random action to find new solutions, and exploits the existing so-called known max future rewards by taking the max action.

Initially, I made the agent explore when $random() < 1/(replay\_index+1)$; that exploration rate decreases too quickly (<10% after 10 iterations), and when the number of replays (number of times to play again from the start) is not large enough, the exploration rate at the end of the loop is almost zero, and nothing new is learned.

The solution I opted for is allowing 'explore' and 'exploit' to have the same rate (lowering exploration a bit is also ok); pseudo-code:

# Part 1 in a step: Choose action
if random() < 0.5: # 0.25 is also good, 25% for exploration
    action = random_action()
else:
    action = choose_best_known_action()

Explore rate can be reduced correctly this way:

if random() < 1-i/NUM_REPLAYS: # i is current train step index
    action = random_action()
else:
    ...

With the half-explore/half-exploit scheme above, the agent will keep learning indefinitely, so it is fairly sure that the global optimum will be reached. When the number of iterations that should be used is known from practice, 'exploit' may be utilized more for faster convergence.

Note that the 'explore' and 'exploit' rates are set equal above, but the q-table or q-network still gets better and better, because there is another kind of 'exploit' when updating the q-table or fitting the q-network with the Bellman equation: the 'max' in the Bellman equation.

Pseudo-code:

# Part 2 in a step: Update q-table or q-network
q[s][a] += learning_rate * (reward + gamma * max(q[sNext]) - q[s][a])  # max over all actions in sNext

# Q-network
# target = r + max(...
",2844,,32410,,12/11/2021 8:48,12/11/2021 8:48,,,,3,,,,CC BY-SA 4.0 26867,2,,26864,3/17/2021 10:33,,1,,"

The likelihood depends on the task that you are solving, so this is similar to traditional neural networks (in fact, even these neural networks have a probabilistic/Bayesian interpretation!).

  • For binary classification, you should probably use a Bernoulli, which, in practice, corresponds to using a sigmoid with a binary cross-entropy (you can show that the minimization of the cross-entropy is equivalent to the maximization of Bernoulli p.m.f.)

  • If it's a multi-class classification problem, you should use a categorical distribution, which corresponds to a softmax with a categorical cross-entropy; see e.g. this implementation

  • If it's a regression problem, the likelihood could be a Gaussian (which is equivalent to using the MSE: you can also show this; a quick numerical check is sketched right after this list).
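Here is that check (a NumPy/SciPy sketch with made-up numbers): the negative Gaussian log-likelihood equals half the sum of squared errors plus a constant that does not depend on the predictions, so minimizing one minimizes the other.

import numpy as np
from scipy.stats import norm

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
sigma = 1.0

nll = -np.sum(norm.logpdf(y_true, loc=y_pred, scale=sigma))

sse = np.sum((y_true - y_pred) ** 2)
const = 0.5 * len(y_true) * np.log(2 * np.pi * sigma ** 2)
print(nll, 0.5 * sse + const)  # equal up to floating-point error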

In the case of (mean-field) variational BNNs (VBNNs) (note that not all neural networks denoted as BNNs are VBNNs, but here I will only focus on VBNNs), rather than directly performing Bayesian inference (i.e. applying the Bayes theorem directly), you will perform approximate inference and, more specifically, you cast the inference problem as an equivalent optimization problem, where you typically optimize the loss function known as the evidence lower bound (ELBO), which is composed of two terms

  1. the (log-)likelihood of the parameters (so the Bernoulli/categorical/Gaussian)
  2. the KL divergence (the regularization part)

Theoretically, given that I am not a statistician, I cannot tell you right now why we can have a Bernoulli likelihood and a Gaussian prior (which is typically the case and, more specifically, in the case of this paper, they assume the weights to be independent, which is kind of a strong assumption but simplifies the computations, so this leads to a factorized multi-variate Gaussian over the weights), but this is what I had seen in practice.

",2444,,2444,,3/17/2021 12:15,3/17/2021 12:15,,,,2,,,,CC BY-SA 4.0 26868,2,,26801,3/17/2021 11:54,,0,,"

It has become our human bias that data will arrive from a normal distribution. It is also the most prevalent distribution in nature, occurring in many places. Hence, we sample from a normal distribution. Also, the central limit theorem works around means of samples tending towards a normal distribution.

It is not taboo to use others if they are helpful to your network. But data from one distribution can be transformed into another, and if that is something that is required, the network will learn to do it, since it can approximate anything (admittedly a weak argument, but, hey, it's working, right?).

",37203,,,,,3/17/2021 11:54,,,,5,,,,CC BY-SA 4.0 26869,2,,26709,3/17/2021 12:46,,2,,"

Yes, neural networks learn features themselves freeing you from the need to manually engineer them. I will illustrate it here with a toy problem.

Let's assume that we want to learn the areas of parallelograms built on pairs of vectors:

The input data are six coordinates: $(x_1, y_1, x_2, y_2, x_3, y_3)$.

import numpy as np

n_tr = 1000 # training data
x_tr = np.random.uniform(low=-1.0, high=1.0, size=(n_tr, 6))

n_ts = 100 # test data                                        
x_ts = np.random.uniform(low=-1.0, high=1.0, size=(n_ts, 6)) 

The targets (areas) are $y = |ad-bc|$, where $a=x_3-x_1$, $b=y_3-y_1$, $c=x_2-x_1$, $d=y_2-y_1$.

a_tr = x_tr[:,4] - x_tr[:,0] # x_3 - x_1
b_tr = x_tr[:,5] - x_tr[:,1] # y_3 - y_1
c_tr = x_tr[:,2] - x_tr[:,0] # x_2 - x_1
d_tr = x_tr[:,3] - x_tr[:,1] # y_2 - y_1
y_tr = np.abs(a_tr*d_tr-b_tr*c_tr)

a_ts = x_ts[:,4] - x_ts[:,0] # x_3 - x_1
b_ts = x_ts[:,5] - x_ts[:,1] # y_3 - y_1
c_ts = x_ts[:,2] - x_ts[:,0] # x_2 - x_1
d_ts = x_ts[:,3] - x_ts[:,1] # y_2 - y_1
y_ts = np.abs(a_ts*d_ts-b_ts*c_ts)

To learn the areas from coordinates, I will use my favorite machine learning library super_magic_learn

from super_magic_learn import wonder_network

wonder_network.init()

It will initialize a network with random activation functions in neurons, and random connections between them having random weights. It also randomly assigns some neurons as inputs, and others as outputs or internal ones.

Then I train my network

wonder_network.fit(x_tr, y_tr, use_wand=True)

During training, the activation functions inside neurons change, the connections between neurons form, disappear, and form again, and their weights are adjusted. Some neurons organize in layers, the number of neurons in each layer changes, and finally the trained network is as follows:

It solves the task with 100% accuracy for both the training and test data, and it solves it using only raw data: coordinates. No need to engineer features.

However, you probably don't have access to the library super_magic_learn. Let's see what we can do with the slightly more inferior tensorflow.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [
        layers.Dense(64, activation="tanh", input_dim=6),
        layers.Dense(4, activation="tanh"),
        layers.Dense(1),
    ]
)

optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001)

model.compile(loss='mean_squared_error',
                optimizer=optimizer,
                metrics=['mean_squared_error'])

model.fit(x_tr, y_tr, epochs=512, batch_size=64, validation_split = 0.2, verbose=1)
y_pr = model.predict(x_ts)

Now calculate the performance on the test set

from sklearn.metrics import r2_score
print('Rsq:', r2_score(y_ts,y_pr))

$R^2$:

Rsq: 0.4802872880495598

Not good. What will happen, if I engineer some features?

Let's train the same model but instead of feeding it with raw data, the inputs will be the following manually engineered features: $a=x_3-x_1$, $b=y_3-y_1$, $c=x_2-x_1$, $d=y_2-y_1$ (don't forget to change input_dim=4 in the first layer).

x_tr = np.c_[a_tr, b_tr, c_tr, d_tr]
x_ts = np.c_[a_ts, b_ts, c_ts, d_ts]

$R^2$:

Rsq: 0.8841499533897564

Now it is much better. Less than 100% though.

Why does the neural network in tensorflow perform poorly on raw data and need feature engineering, while super_magic_learn works perfectly on raw data and does not need any feature engineering?

The reason is that tensorflow, or any other library that I know, is much more restricted than my beloved super_magic_learn. The restrictions are as follows (note a very small problem, though: super_magic_learn does not exist, but I wish it did):

  • You have only a very small number of activation functions to choose from, like tanh, relu and a handful of others.
  • The activation functions stay fixed during training. You cannot change them.
  • You cannot add/remove layers.
  • You cannot change the number of neurons in the layers.
  • You cannot add/remove connections between the neurons.
  • You have to organize your neurons only in layers, no other arrangement is allowed.
  • During training, the network cannot learn the most suitable architecture for the task. E.g., it cannot reorganize itself taking into account the symmetries of the problem.
  • etc ...
  • Basically, the only thing you can do during training is to learn the weights.

The textbooks are right: ideally, a neural network should learn just from the raw data. But this is true only about my idealized library and not so much about existing real-world implementations.

To make a network really learn features for any task, it should be freed from these restrictions.

If you put so many restrictions on the architecture, activation functions, and other parameters, so that they cannot be learned from the data during training, then you have to engineer them yourself and adjust them manually for your task. If you engineer them correctly then your network will learn happily from the raw data. But it might perform poorly on other tasks.

Such is the case with convolutional neural networks. They were designed taking into account translational equivariance of features in images; that's why they can learn features from raw image data. However, they don't necessarily perform well in other domains.

",15524,,,,,3/17/2021 12:46,,,,4,,,,CC BY-SA 4.0 26870,1,,,3/17/2021 13:03,,1,122,"

Hello, I am currently doing research on the effect of altering a neural network's structure. Particularly, I am investigating what effect putting a random DAG (directed acyclic graph) in the hidden layer of a network, instead of the usual fully-connected bipartite graph, would have.

For instance my neural network would look something like this:

Basically, I want the ability to create any structure in my hidden layer as long as it remains a DAG [add any edge between any nodes regardless of layers]. I have tried creating my own library to do so, but it proved to be much more tedious than anticipated; therefore, I am looking for ways to do this with existing libraries such as Keras, PyTorch, or TensorFlow.

",45499,,40434,,6/4/2021 5:55,10/27/2022 11:02,How can I model any structure for a neural network?,,3,1,,,,CC BY-SA 4.0 26871,1,,,3/17/2021 13:29,,1,284,"

I've been trying to understand the meta-learning paradigm, more precisely, the optimization-based models, such as MAML, but I have a hard time understanding how I should correctly split my data to train such models.

For example, let's consider we have 4 traffic datasets, and we would like to use 3 of them as source datasets (to train the model) and the remaining one as target (to fine-tune on it and then test the model performance). As far as I understood, for each source dataset, I need to split it into train and validation. During training, I would randomly select 2 batches of samples from the training datasets, use one batch to compute the task-specific parameters and the other one to compute the loss. Then repeat the same process with the validation dataset, such that I can select the best candidate model. After the training is done, I need to fine-tune the model on the target dataset. I assume the process is the same, but I need to use the target dataset instead.

During testing (after the model is fully learned and fine-tuned), how exactly do I test it? Is it the same procedure as if I were testing a supervised model? I would like to know if the setup I described is correct and if it fits the meta-learning paradigm.

",20430,,2444,,6/8/2021 2:08,6/8/2021 2:08,How to split data for meta-learning?,,1,0,,,,CC BY-SA 4.0 26872,2,,26870,3/17/2021 14:34,,0,,"

You seem to be seeking an implementation of a Residual Neural Network (https://en.m.wikipedia.org/wiki/Residual_neural_network), or ResNet for short. If you want some premade networks, the module tf.keras.applications.resnet from TensorFlow (do check TF's documentation) might help you.

",45475,,,,,3/17/2021 14:34,,,,0,,,,CC BY-SA 4.0 26873,2,,26870,3/17/2021 14:42,,0,,"

The mentioned frameworks don't restrict you to a linear sequence of layers; you can build any acyclic structure. E.g. the very popular ResNet architecture is based on skip-connections that jump over layers.

E.g. a simple example in PyTorch:

import torch
from torch import nn
import torch.nn.functional as F

class Custom(nn.Module):
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(1.))
        self.b = nn.Parameter(torch.tensor(2.))
        self.c = nn.Parameter(torch.tensor(3.))


    def forward(self, x):
        x1 = F.relu(self.a * x)
        # take note, we skip
        out = F.relu(self.b * x1 + self.c*x)
        return out

model = Custom()
print('before', model.a)
# You could do pretty much the same training
x = torch.tensor([2])
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
prediction = model(x)
loss = criterion(prediction, torch.tensor([20.]))
loss.backward()
optimizer.step()
print('after', model.a) 
",16940,,,,,3/17/2021 14:42,,,,0,,,,CC BY-SA 4.0 26874,2,,26854,3/17/2021 15:12,,1,,"

In general, multi-output models are not that different. E.g.:

  1. As Raghu mentioned in the comments, you could train a separate model for each output. There is even a helper module in sklearn for that (MultiOutputRegressor)
  2. DecisionTreeRegressor from sklearn allows multiple outputs out of the box
  3. Any neural network framework allows any number of outputs

In your particular case there are two much bigger factors.

  1. It's a time series. Time series require a whole different set of models than other data, e.g. ARIMA, RNNs, or transformers.
  2. Your data is not well scaled. The reason is that the change between points is insignificant in comparison with their absolute values. You need to deal with this -- but the exact solution would depend on your particular task. You could try to subtract the mean value of the train set or predict the difference, not the absolute value.

Edit: here is a simple example

import numpy as np
raw = np.array([
[414049364,21773560],
[414049656,21773926],
[414049938,21774287],
[414050204,21774638],
[414050453,21774975],
[414050682,21775296],
[414050895,21775597],
[414051093,21775874],
[414051278,21776125],
[414051453,21776344],
[414051620,21776530],
[414051780,21776678],
[414051935,21776785],
[414052089,21776849],
[414052242,21776865]
])

X = raw[:-1]
y = raw[1:]
from sklearn.linear_model import LinearRegression
model = LinearRegression()
# Train the model
model.fit(X, y)
# Try to predict next step
model.predict([[414051780,21776678]])
# The predicted value [414051933,  21776806] and the true value would be [414051935,21776785], not so far

Next, you could normalize the values, use a few previous steps as features, use a more complex algorithm than linear regression, and make a train/eval split for better model evaluation.

",16940,,16940,,3/19/2021 13:47,3/19/2021 13:47,,,,5,,,,CC BY-SA 4.0 26875,1,26968,,3/17/2021 16:39,,-1,66,"

I would like to have an order of magnitude of the resources required to build an image recognition system.

Let's say you want to build a startup company whose main product will have to distinguish 20 different kinds of objects (bottles, dogs, cars, flowers...). Images are already tagged.

  • How many images are needed as a learning set? 1k, 10k, 100k, 1 million?
  • What kind of hardware is needed, and how long will the learning process take?
  • How many developers, and how much time?
  • Does it change a lot if the number of target outputs is reduced to two kinds, or increased to one thousand?

A link to a real-life paper would be perfect. Thank you.

",15283,,,,,3/23/2021 17:57,What amount of ressources is involved in building an image recognition system?,,1,1,,12/22/2021 22:00,,CC BY-SA 4.0 26876,1,26894,,3/17/2021 17:08,,0,106,"

I have the following code snippet which takes in a single column of values, i.e. 1 feature. How do I modify the LSTM model such that it accepts 3 features?

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Input, Dropout
from keras.layers import Dense
from keras.layers import RepeatVector
from keras.layers import TimeDistributed
import pandas as pd
from matplotlib import pyplot as plt
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from keras.models import Model
import seaborn as sns    

dataframe = pd.read_csv('GE.csv')
dataframe.head()

df = dataframe[['Date', 'EnergyInWatts']]
df['Date'] = pd.to_datetime(df['Date'])
sns.lineplot(x=df['Date'], y=df['EnergyInWatts'])

#train, test = df.loc[df['Date'] <= '2003-12-31'], df.loc[df['Date'] > '2003-12-31']
train = df.loc[df['Date'] <= '2003-12-31']
test = df.loc[df['Date'] > '2003-12-31']

scaler = StandardScaler()

scaler = scaler.fit(train[['EnergyInWatts']])

train['EnergyInWatts'] = scaler.transform(train[['EnergyInWatts']])
test['EnergyInWatts'] = scaler.transform(test[['EnergyInWatts']])

seq_size = 30 


def to_sequences(x, y, seq_size=1):
    x_values = []
    y_values = []

    for i in range(len(x)-seq_size):
        #print(i)
        x_values.append(x.iloc[i:(i+seq_size)].values)
        y_values.append(y.iloc[i+seq_size])
        
    return np.array(x_values), np.array(y_values)

trainX, trainY = to_sequences(train[['EnergyInWatts']], train['EnergyInWatts'], seq_size)
testX, testY = to_sequences(test[['EnergyInWatts']], test['EnergyInWatts'], seq_size)


model = Sequential()
model.add(LSTM(128, input_shape=(trainX.shape[1], trainX.shape[2])))
model.add(Dropout(rate=0.2))
model.add(RepeatVector(trainX.shape[1]))
model.add(LSTM(128, return_sequences=True))
model.add(Dropout(rate=0.2))
model.add(TimeDistributed(Dense(trainX.shape[2])))
model.compile(optimizer='adam', loss='mae')
model.summary()

history = model.fit(trainX, trainY, epochs=10, batch_size=32, validation_split=0.1, verbose=1)
",45440,,6362,,3/18/2021 1:43,3/18/2021 13:16,Convert LSTM univariate Autoencoder to multivariate Autoencoder,,1,0,,,,CC BY-SA 4.0 26877,1,27020,,3/17/2021 17:49,,2,243,"

My intuition is that there is some overlap between understanding language and symbolic mathematics (e.g. algebra). The rules of algebra are somewhat like grammar, and the step-by-step arguments get you something like a narrative. If one buys this premise, it might be worth training an AI to do algebra (solve for x, derive this equation, etc).

Moreover, when variables represent "real" numbers (as seen in physics, for example) algebraic equations describe the real world in an abstracted, "linear," way somewhat similar to natural language.

Finally, there are exercises in algebra, like simplifying, deriving useful equations, etcetera, which edge into the realm of the subjective, yet algebra is still much more structured and consistent than language. It seems like this could be a stepping stone towards the ambiguities of natural language.

Can anyone speak to whether this has either (1) been explored or (2) is a totally bogus idea?

",37344,,2444,,3/18/2021 13:38,6/12/2022 19:49,Is there a relationship between Computer Algebra and NLP?,,4,0,,,,CC BY-SA 4.0 26878,1,,,3/17/2021 18:04,,0,77,"

I'm working my way through the Bayesian world. So far, I've understood that the MLE and the MAP are point estimates; therefore, such models just output one specific value and not a distribution.

Moreover, vanilla neural networks in fact do something like MLE, because minimizing the squared loss or the cross-entropy is similar to finding parameters that maximize the likelihood. Moreover, using neural networks with regularisation is comparable to MAP estimation, as the prior works like the penalty term in error functions.

However, I've found this work. It shows that the weights $W_{PLS}$ gained from penalized least squares are the same as the weights $W_{MAP}$ gained through maximum a posteriori:

However, the paper says:

The first two approaches result in similar predictions, although the MAP Bayesian model does give a probability distribution for $t_*$. The mean of this distribution is the same as that of the classical predictor $y(x_*; W_{PLS})$, since $W_{PLS} = W_{MAP}$

What I don't get here is: how can the MAP Bayesian model give a probability distribution over $t_*$ when it is only a point estimate?

Consider a neural network - a point estimate would mean some fixed weights, so how can there be an output probability distribution? I thought that this is only achieved in the truly Bayesian approach, where we integrate out the unknown weights, therefore building something like the weighted average of all outcomes, using all possible weights.

Can you help me?

",45494,,2444,,12/13/2021 9:06,1/7/2023 13:02,Does the Bayesian MAP give a probability distribution over unseen t*?,,1,0,,,,CC BY-SA 4.0 26880,2,,26865,3/17/2021 21:48,,2,,"

In short it looks like you have constructed a valid reinforcement learning method, but it does not have much in common with Monte Carlo Tree Search. It may have some weaknesses compared to more established methods, that means it will work better in some environments rather than others.

Your approach may be novel, in that you have combined ideas into a method which has not been used in this exact form before. However, it is following the principles of general policy improvement (GPI):

  • Estimate action values of a current policy.

  • Create a new policy by maximising action choices with respect to latest estimates.

  • Set current policy to new policy and repeat.

Your method covers exploration with a deterministic policy by sweeping through a list of state and action pairs. This resembles policy iteration, or perhaps exploring starts in Monte Carlo control. It evaluates each state/action pair by following the current policy up to a time step horizon. This has some weakness, but it may work well enough in some cases.

The depth iteration is more complex to analyse. It is not doing what you suggest - i.e. it does not make the whole algorithm equivalent somehow to a tree search. Technically the value estimates with a short horizon will be poor, biased estimates of true value in the general case, but on the plus side they may still help differentiate between good and bad action choices in some situations. It may even help as a form of curriculum learning by gathering and training on data about immediately obvious decisions first (again I expect that would be situational, not something you could rely on).

Overall, although you seem to have found a nice solution for your racing game, I think your method will work unreliably in a general case. Environments with stochastic state transitions and rewards would be a major problem, and this is not something you could fix without major changes to the algorithm.

I would suggest that you try one or more standard published methods, such as DQN, A3C, etc., and compare them with your approach on the same environments in different ways, e.g. how much time and computation it takes to train to an acceptable level, or how close to optimal each method can get.

The main comparison with Monte Carlo Tree Search is that you evaluate each state-action pair with a kind of rollout. There is a lot more going on in MCTS that you are not doing though, and the iteration with longer rollouts each time is not like anything in MCTS and does not compensate for the missing parts of MCTS in your algorithm.

",1847,,1847,,3/17/2021 21:57,3/17/2021 21:57,,,,2,,,,CC BY-SA 4.0 26882,1,26931,,3/18/2021 0:22,,0,115,"

Goal

To build an RNN which would receive a word as an input, and output the probability that the word is in English (or at least would be English sounding).

Example

input:  hello 
output: 100%

input:  nmnmn 
output: 0%

Approach

Here is my approach.

RNN

I have built an RNN with the following specifications: (the subscript $i$ means a specific time step)

The vectors (neurons):

$$ x_i \in \mathbb{R}^n \\ s_i \in \mathbb{R}^m \\ h_i \in \mathbb{R}^m \\ b_i \in \mathbb{R}^n \\ y_i \in \mathbb{R}^n \\ $$

The matrices (weights): $$ U \in \mathbb{R}^{m \times n} \\ W \in \mathbb{R}^{m \times m} \\ V \in \mathbb{R}^{n \times m} \\ $$

This is how each time step is being fed forward:

$$ y_i = softmax(b_i) \\ b_i = V h_i \\ h_i = f(s_i) \\ s_i = U x_i + W h_{i-1} \\ $$ Note that the $ + W h_{i-1}$ will not be used on the first layer.

Losses

Then, for the loss of each layer, I used cross entropy ($t_i$ is the target, or expected output at time $i$): $$ L_i = -\sum_{j=1}^{n} t_{i,j} \ln(y_{i,j}) $$

Then, the total loss of the network: $$ L = \sum L_i $$

RNN diagram

Here is a picture of the network that I drew:

Data pre-processing

Here is how data is fed into the network:

Each word is split into characters, and every character is encoded as a one-hot vector. Two special tokens, START and END, are added to the word at the beginning and the end. Then the input at each time step is every sequential character without END, and the target output at each time step is the character that follows the input.

Example

Here is an example:

  1. Start with a word: "cat"
  2. Split it into characters and append the special tags: START c a t END
  3. Transform into one-hot vectors: $v_1, v_2, v_3, v_4, v_5$
  4. Then the input is $v_1, v_2, v_3, v_4$ and the output $v_2, v_3, v_4, v_5$

Dataset

For the dataset, I used a list of English words.

Since I am working with English characters, the size of the input and output is $n=26+2=28$ (the $+2$ is for the extra START and END tags).

Hyper-parameters

Here are some more specifications:

  • Hidden size: $m=100$
  • Learning rate: $0.001$
  • Number of training cycles: $15000$ (each cycle is a loss calculation and backpropagation of a random word)
  • Activation function: $f(x) = \tanh(x)$

Problem/question

However, when I run my model, I get that the probability of some word being valid is about 0.9 regardless of the input.

For the probability of a word being valid, I used the value at the last layer of the RNN at the position of the END tag after feeding the word forward.

I wrote a gradient checking algorithm and the gradients seem to check out.

Is there conceptually something wrong with my neural network?

I played a bit with $m$, the learning rate, and the number of cycles, but nothing really improved the performance.

",45508,,2444,,3/23/2021 9:24,3/23/2021 9:24,Is my approach to building an RNN to predict the probability that the word is in English appropriate?,,2,2,,,,CC BY-SA 4.0 26884,1,,,3/18/2021 1:28,,4,701,"

Say I'm training a model for multiple tasks by trying to minimize sum of losses $L_1 + L_2$ via gradient descent.

If these losses are on a different scale, the one whose range is greater will dominate the optimization. I'm currently trying to fix this problem by introducing a hyperparameter $\lambda$, and trying to bring these losses to the same scale by tuning it, i.e., I try to minimize $L_1 +\lambda \cdot L_2$ where $\lambda > 0 $.
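
For concreteness, a tiny sketch of what I'm currently doing (written with PyTorch here; the loss values are just placeholders to illustrate the different scales):

import torch
loss_1 = torch.tensor(2.5, requires_grad=True)    # stand-in for the first task's loss
loss_2 = torch.tensor(250.0, requires_grad=True)  # stand-in for the second, larger-scale loss
lam = 0.01                                        # tuned so that lam * loss_2 matches the scale of loss_1
total = loss_1 + lam * loss_2
total.backward()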

However, I'm not sure if this is a good approach. In short, what are some strategies to deal with losses having different scales when doing multi-task learning? I'm particularly interested in deep learning scenarios.

",32621,,2444,,3/18/2021 10:42,11/25/2021 19:22,How to deal with losses on different scales in multi-task learning?,,1,1,,,,CC BY-SA 4.0 26885,1,,,3/18/2021 5:12,,1,162,"

I am facing difficulty in understanding the bolded portion of the following statement from this paper

GANs are defined by a min-max two-player game between a discriminative network $D_\Psi(x)$ and generative network $G_\theta(z)$. While the discriminator tries to distinguish between real data point and data points produced by the generator, the generator tries to fool the discriminator. It can be shown that if both the generator and discriminator are powerful enough to approximate any real-valued function, the unique Nash-equilibrium of this two-player game is given by a generator that produces the true data distribution and a discriminator which is 0 everywhere on the data distribution.

My understanding is that the discriminator gives $\dfrac{1}{2}$ for any input after training. But what is the $0$ mentioned here?

",18758,,18758,,8/1/2021 1:11,8/1/2021 1:11,Why does this paper say that the Nash-equilibrium of GAN is given by a discriminator which is 0 everywhere on the data distribution?,,1,0,,,,CC BY-SA 4.0 26886,2,,11436,3/18/2021 7:17,,0,,"

Tl;dr max-pool

You can see in the diagram that, everywhere there is a variable number of inputs (pickups, units, hero modifiers/abilities/items), a max-pool follows, though I don't know the specifics of the max-pool implementation.

From https://neuro.cs.ut.ee/the-use-of-embeddings-in-openai-five :

Notice that while the number of modifiers, abilities and items is variable, the network max-pools over each of those lists. This means that only the highest value in all those dimensions actually gets through. At first it does not seem to make sense – it might give the impression that you have an ability that is combination of all existing abilities, e.g. ranged passive heal. But it seems to work for them.

Above processing is done separately for each of the nearby units, the results from general attributes, hero modifiers, abilities and items are all concatenated together. Then different post-processing is applied depending if it was enemy non-hero, allied non-hero, neutral, allied hero or enemy hero.

Finally the results of post-processing are max-pooled over all units of that type. Again this seems to be questionable at the first sight, because different qualities of nearby units would be combined, e.g. if one of the dimensions would represent the health of a unit, then the networks sees only the maximum health over the same type of units. But, again, it seems to work fine.

",45512,,,,,3/18/2021 7:17,,,,0,,,,CC BY-SA 4.0 26887,2,,24493,3/18/2021 7:20,,2,,"

The lower bound in MINE is as follows:

$$\widehat{I(X;Z)}_n = \sup_{\theta\in\Theta} \mathbb{E}_{\mathbb{P}_{XZ}^{(n)}}[T_\theta] - \log{\mathbb{E}_{\mathbb{P}_X^{(n)} \otimes \hat{\mathbb{P}}_Z^{(n)}}[e^{T_\theta}]}$$

Here $\hat{\mathbb{P}}^{(n)}$ denotes the empirical distribution that we get from $n$ i.i.d. samples of $\mathbb{P}$.

Note that in the above equation, the first term is calculated from the joint distribution while the second term from the marginals of $X$ and $Z$. In the implementation of MINE, these statistics are calculated over the data from a minibatch. The marginal distribution is obtained by shuffling the values of Z (or X) along the batch dimension. Hence, in this case, the gradient is as follows.

$$\hat{G}_B = \mathbb{E}_B[\nabla_{\theta}T_{\theta}] - \frac{\mathbb{E_B}[\nabla_{\theta}T_{\theta}e^{T_{\theta}}]}{\mathbb{E}_B[e^{T_{\theta}}]},$$

  1. As mentioned, the expectation over the marginals is not calculated over the true marginal distribution (i.e. over the entire dataset) but from the shuffled samples in the minibatch. Hence, the above gradient $\hat{G}_B$ is biased.
  2. When we maintain an exponential moving average of $\mathbb{E}_B[e^{T_{\theta}}]$, we also incorporate statistics from outside the current minibatch (i.e. over the entire dataset). This is an attempt to get an approximation of the true marginal estimate. The denominator term in the gradient allows for such a computationally inexpensive bias reduction trick, as sketched below.
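
For concreteness, here is a minimal PyTorch-style sketch of the mini-batch estimate described above (the statistics network, sizes and names are illustrative, not the authors' implementation):

import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    def __init__(self, dim_x, dim_z, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x + dim_z, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mine_lower_bound(T, x, z):
    joint = T(x, z).mean()                        # expectation over the joint (paired samples)
    z_shuffled = z[torch.randperm(z.size(0))]     # shuffle z to mimic the product of marginals
    marginal = torch.exp(T(x, z_shuffled)).mean()
    return joint - torch.log(marginal)            # biased mini-batch estimate of I(X;Z)

Maximising this quantity with respect to the parameters of T gives the MINE estimate; replacing the batch mean in the denominator of the gradient with its exponential moving average is the bias-reduction trick discussed in point 2.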
",44852,,,,,3/18/2021 7:20,,,,0,,,,CC BY-SA 4.0 26893,2,,26885,3/18/2021 12:48,,1,,"

What this paper is not saying is that the discriminator, $D_{\Psi}$, always returns a scalar value of zero. What they are saying is that the generator, $G_{\theta}(z)$, has accurately learned the distribution of the input data, and that the discriminator produces the correct answer for each input from the generator. It's a mathematical description of the global optimum for a GAN. This Medium article by Jonathan Hui talks more about Nash equilibria, the Kullback-Leibler divergence, and more.

",19703,,,,,3/18/2021 12:48,,,,0,,,,CC BY-SA 4.0 26894,2,,26876,3/18/2021 13:16,,0,,"

If each of your three features is a scalar then my first attempt would be to combine them into a vector for each step in the sequence. So instead of LSTM(128, input_shape=(30,1)) for a length-30 univariate sequence you would say LSTM(128, input_shape=(30,3)) for a multivariate (3) sequence. Similarly your output would become TimeDistributed(Dense(3, activation='linear')).

If your input features are each a vector then you should consider switching from the sequential API to the functional API. You would need three separate Input objects followed by a Concatenate layer before the LSTM layer. You would have three Dense layers to be your corresponding outputs. I tried adapting your code:

from tensorflow.keras.layers import Input, Concatenate, LSTM, Dropout, RepeatVector, TimeDistributed, Dense
from tensorflow.keras.models import Model

# one Input per feature sequence (all must share the same number of time steps)
in0 = Input(shape=(trainX0.shape[1], trainX0.shape[2]))
in1 = Input(shape=(trainX1.shape[1], trainX1.shape[2]))
in2 = Input(shape=(trainX2.shape[1], trainX2.shape[2]))
# merge the three sequences along the feature axis before the encoder
concat = Concatenate()([in0, in1, in2])
lstm_enc = LSTM(128)(concat)
lstm_enc_drop = Dropout(rate=0.2)(lstm_enc)
rep = RepeatVector(trainX0.shape[1])(lstm_enc_drop)
lstm_dec = LSTM(128, return_sequences=True)(rep)
lstm_dec_drop = Dropout(rate=0.2)(lstm_dec)
# one output head per feature sequence
out0 = TimeDistributed(Dense(trainX0.shape[2], activation='linear'))(lstm_dec_drop)
out1 = TimeDistributed(Dense(trainX1.shape[2], activation='linear'))(lstm_dec_drop)
out2 = TimeDistributed(Dense(trainX2.shape[2], activation='linear'))(lstm_dec_drop)
model = Model(inputs=[in0, in1, in2], outputs=[out0, out1, out2])
model.compile(optimizer='adam', loss='mae')
model.summary()
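
If the goal is to reconstruct the three input sequences (which the RepeatVector/TimeDistributed structure suggests), fitting would look something like the following; the epoch and batch-size values are placeholders:

model.fit([trainX0, trainX1, trainX2], [trainX0, trainX1, trainX2], epochs=50, batch_size=32)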
",19703,,,,,3/18/2021 13:16,,,,2,,,,CC BY-SA 4.0 26895,2,,23666,3/18/2021 13:39,,1,,"

In my experience, neural networks with convolutional layers take much, much longer to train, so try increasing the number of iterations (time steps). After running, save the network model (I don't know how to do it in PyTorch, but in TensorFlow it was model.save("filename" + ".h5")).

Then, load this saved model file and do a test run to see if it worked. In this case, you should notice pretty easily whether it learned or not.

",45475,,,,,3/18/2021 13:39,,,,0,,,,CC BY-SA 4.0 26896,1,26926,,3/18/2021 13:40,,2,95,"

In this lecture (minute 42), the professor says that the number of training examples we need to densely cover the space of training vectors grows exponentially with the dimension of the space. So, for example, if $4$ points suffice in $1$D, we need $4^2=16$ training data points if we're working in a $2$D space. I'd like to ask why this is true and how it is proved/achieved. The professor was talking before about K-Nearest Neighbors and he was using $L^{1}$ and $L^{2}$ metrics. I don't think these metrics induce a topology that makes a discrete set of points dense in the ambient space.

",44965,,2444,,3/19/2021 13:13,3/19/2021 19:23,Why the number of training points to densely cover the space grows exponentially with the dimension?,,1,1,,,,CC BY-SA 4.0 26898,2,,26819,3/18/2021 14:18,,2,,"

The fundamental idea behind policy gradient is just to maximise the return averaged across all probable trajectories, i.e.

$$\begin{align} J(\theta) &= E\left[\sum\limits_{t=1}^{T}r(s_t,a_t)\right]\\ &=E_{\tau\sim p(\tau)}[R(\tau)] \end{align}$$

Where $\tau$ denotes a trajectory and $p(\tau)$ the probability of selecting that particular trajectory. If the trajectories all have a fixed length, then $p(\tau)$ is non-zero only for trajectories of the specified length, which in this case is 1.

The REINFORCE algorithm takes this expression and with some simplifications (causality to improve variance) and manipulation (log trick) obtains the gradient, pretty much as simple as that.


Intuition of algorithm in paper

In the algorithm, they use $\pi_\theta$ (which is typically reserved for describing the policy) to denote the probability of the selection vector. By considering the function $h_{\theta}$ as the policy instead, I think it can be seen that they are actually averaging over multiple trajectories.

So we instead think of the data points as state-action pairs that we pass into $h_\theta$ to get the probability of selecting said action for a given state. These probabilities then dictate whether we choose the action or not. An alternative way to interpret this: if we imagine the action space for each state as binary, then we can think of the "other action" as not impacting the predictor.

The gradient used for updating the parameters associated with the policy uses the log probability of $\pi_\theta$; if we expand it, expressing it using $h_\theta$ (as done on page 4), we can see that it's the sum of the log probabilities of selecting (or not selecting) each data point.

By considering the return for each data point constant (each data point has the same loss incurred by the predictor model on the validation set) and absorbing the average over the batch size into the step size $\beta$, it could be interpreted as an average.


Stability issues

RL is plagued with stability issues, be it the selection of hyperparameters, random seeding, etc. It's often hard to pinpoint why results aren't exact, but as long as you get something in a similar ballpark, I'd say that's pretty good going.

",42514,,,,,3/18/2021 14:18,,,,0,,,,CC BY-SA 4.0 26900,2,,10180,3/18/2021 14:33,,2,,"

What is planning?

Planning (aka automated planning or AI planning) is the process of searching for a plan, which is a sequence of actions that bring the agent/world from an initial state to one or more goal states, or a policy, which is a function from states to actions (or probability distributions over actions). Planning is not just the use of a search algorithm (e.g. a state-space search algorithm, such as A*) to find the plan, but planning also involves the modelling of a problem by describing it in some way, e.g. with an action language, such as PDDL or ADL, or even with propositional logic.

There are different planning problems, such as

  • classical planning
  • planning in Markov Decision Processes (which are mathematical models that describe stochastic environments/problems, i.e. environments where an action $a$ may not always lead to the same outcome)
  • temporal planning (i.e. when multiple actions can be taken at the same time and/or their duration may be different/variable)
  • hierarchical vs non-hierarchical planning

There are different planning approaches, such as

  • planning by state-space search (with or without heuristics), i.e. formulate the planning problem so that you can apply a state-space search algorithm (examples of state-space search algorithms are A*, IDA*, WA*, DFS or BFS); this is probably the approach to planning that makes people confuse planning with search (which typically implicitly refers to state-space search)

  • planning by SAT, i.e. formulate the planning problem so that you can use a SAT solver to solve it (e.g. SATPLAN)

  • planning by solving constraint satisfaction problems (similar to SAT)

  • planning by "symbolic" search methods (such as Binary Decision Diagrams)

There are many different planning algorithms or planners (of course, each of these corresponds to some approach or is used to solve a particular planning problem), such as

  • policy iteration in the context of Markov decision processes
  • GraphPlan
  • STRIPS (which stands for Stanford Research Institute Problem Solver, but note that STRIPS also and maybe more commonly refers to the corresponding description language used to describe the planning problem)

What is search?

In the context of AI, when people use the word search, they can either implicitly refer to state-space search (with algorithms like A*) or, more vaguely/generally, to any type of search, e.g. gradient descent can also be viewed as a search algorithm, typically, in a continuous state space (rather than discrete state space), e.g. the space of real-valued vectors of parameters for a certain model (e.g. a neural network).

What is the difference between planning and search?

The difference should more or less be clear from my first paragraph above that describes planning: search is used for planning, but the planning process/study also includes other things, such as describing the problem with an action language, such as PDDL. You can also view planning as an application of search algorithms of different types. In any case, planning is about searching for plans or policies. You should probably check this course for more info. This definition of (automated) planning is consistent with our usual notion of planning.

",2444,,2444,,3/19/2021 16:59,3/19/2021 16:59,,,,0,,,,CC BY-SA 4.0 26902,1,26903,,3/18/2021 17:23,,0,180,"

I am using a U-Net to segment cancer cells in images of patients' arms. I would like to add patient data to it in order to see if it is possible to enhance the segmentation (patient data comes in the form of a table containing features such as gender, age, etc.). So far, my research has led me nowhere. What can I do in order to achieve this?

",44898,,2444,,3/19/2021 11:50,1/16/2023 5:06,How should I incorporate numerical and categorical data as part of the inputs to the U-net for semantic segmentation?,,1,0,,,,CC BY-SA 4.0 26903,2,,26902,3/18/2021 17:29,,2,,"

What you want to do is called multi-task learning. Here's what you do:

  1. Create a second Input.
  2. Attach it to 1D CNN (2-3 layers), so it aggregates this tabular information.
  3. Concatenate this feature with the intermediate feature generated by the U-Net using Concatenate layer.
  4. Put a dense layer of 2 after this.
  5. Put softmax with units = number of classes.
  6. Add the CE loss calculated from this output and the ground-truth labels to the loss of the U-Net.

This is in regards to TensorFlow. The same can be done in PyTorch easily.
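
A rough Keras sketch of these steps is below; the layer sizes and names are illustrative, a single Conv2D stands in for the full U-Net, and a Dense layer stands in for the small 1D CNN on the tabular input:

from tensorflow.keras import layers, Model

image_in = layers.Input(shape=(128, 128, 1))         # image branch (size is illustrative)
tabular_in = layers.Input(shape=(8,))                # patient data encoded numerically

x = layers.Conv2D(16, 3, padding='same', activation='relu')(image_in)   # stand-in for the U-Net
seg_out = layers.Conv2D(1, 1, activation='sigmoid', name='segmentation')(x)

t = layers.Dense(16, activation='relu')(tabular_in)  # stands in for the small CNN on the table
bottleneck = layers.GlobalAveragePooling2D()(x)      # "intermediate feature" of the U-Net
merged = layers.Concatenate()([bottleneck, t])
cls_out = layers.Dense(2, activation='softmax', name='classification')(merged)

model = Model(inputs=[image_in, tabular_in], outputs=[seg_out, cls_out])
model.compile(optimizer='adam',
              loss={'segmentation': 'binary_crossentropy',
                    'classification': 'sparse_categorical_crossentropy'})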

",37203,,,,,3/18/2021 17:29,,,,0,,,,CC BY-SA 4.0 26904,2,,26878,3/18/2021 18:14,,0,,"

You're correct: the MAP estimate is a point estimate (specifically, MAP is used to estimate the mode of a probability distribution).

I think that the paper is referring to the (output) probability distribution over the possible targets/labels, given the point estimate. However, in the case of MLE, you can also have that probability distribution, so I'm not sure why the paper's author is emphasizing that with MAP you can build that probability distribution (so maybe this/my interpretation of that excerpt is wrong!).

That table also shows that the MAP estimate is used to produce a probability distribution, but the $\sigma$ there should be unknown; however, I didn't read that article, so maybe I am missing some info or assumption.

In any case, you could also find a point estimate of a parameter of a probability distribution, but this does not imply that MAP produces a probability distribution. For instance, you can show that, if you place a Gaussian prior on the weights of a neural network, this leads to the $L_2$ loss function, but training a (normal) neural network with $L_2$ does not lead to a probability distribution over the weights.
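
As a quick sketch of that last claim (a standard derivation, not something specific to that article): with a Gaussian prior $p(w) \propto \exp\left(-\frac{\lambda}{2}\lVert w \rVert_2^2\right)$, the MAP estimate is

$$\arg\max_w \left[\log p(D \mid w) + \log p(w)\right] = \arg\min_w \left[-\log p(D \mid w) + \frac{\lambda}{2}\lVert w \rVert_2^2\right],$$

i.e. the usual loss plus an $L_2$ penalty on the weights, but the result is still a single $w$, not a distribution over the weights.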

You should try reading only information from reliable references and books. For MAP, check out chapter 5. (p. 149) of the book "Machine Learning: A Probabilistic Perspective" by Murphy.

",2444,,2444,,3/18/2021 18:25,3/18/2021 18:25,,,,0,,,,CC BY-SA 4.0 26906,1,,,3/18/2021 19:54,,1,167,"

Can Facebook's LASER be fine-tuned like BERT for Question Answering tasks or Sentiment Analysis?

From my understanding, they created an embedding that allows for similar words in different languages to be close to each other. I just don't understand if this can be used for fine-tuning tasks like BERT can.

",44695,,2444,,3/19/2021 11:44,12/12/2022 13:08,Can Facebook's LASER be used like BERT?,,1,0,,,,CC BY-SA 4.0 26909,2,,26705,3/18/2021 20:30,,1,,"

After working on it for a while this is what I got.

Concerning proposition 1 in the paper, a rigorous statement could be the following version of the Gradient Theorem for line integrals:

Proposition 1. (Gradient Theorem for Lipschitz Continuous Functions). Let $U$ be an open subset of $\mathbb{R}^n$. If $F : U \to \mathbb{R}$ is Lipschitz continuous, and $\gamma : [0,1] \to U$ is a smooth path such that $F$ is differentiable at $\gamma(t)$ for almost every $t\in [0,1]$, then $$ \int_{\gamma} \nabla F(\mathbf{x}) \cdot d\mathbf{x} = F(\gamma(1)) - F(\gamma(0)) \,. $$

I found theorem 1 about symmetry-preserving harder to state rigorously while still fully capturing its intended meaning - this is the best I got:

Theorem 1. Given $i,j\in \{1,\dots,n\}$, $i\neq j$, real numbers $a<b$, and a strictly monotonic smooth path $\gamma : [0,1] \to (a,b)^n$ such that $\gamma_i(0)=\gamma_j(0)$ and $\gamma_i(1)=\gamma_j(1)$, then the following statements are equivalent:

(1) For every $t\in [0,1]$, $\gamma_i(t) = \gamma_j(t)$.

(2) For every function $F : [a,b]^n \to \mathbb{R}$ symmetric in $x_i$ and $x_j$ and verifying the premises of proposition 1 with $U=(a,b)^n$ we have $\int_{\gamma} \frac{\partial F(\mathbf{x})}{\partial x_i} \, dx_i = \int_{\gamma} \frac{\partial F(\mathbf{x})}{\partial x_j} \, dx_j$.

Further details and proofs: Symmetry-Preserving Paths in Integrated Gradients

",45259,,,,,3/18/2021 20:30,,,,0,,,,CC BY-SA 4.0 26910,1,,,3/18/2021 20:39,,2,29,"

I've been looking at neural networks for control applications. Let's say I used an RL algorithm to train a controller for the cart pole balancing problem.

Assuming the neural network is simple and very small, I can pretty much deduce what exactly the network is doing. For instance, if the network takes inputs for pole angle and cart position and outputs a motor force, the neural network is approximating a function that will move the cart left if the pole is falling left etc. and I can forward propagate through the network manually, again assuming that it is simple. In this case however, I could say that the neural network isn't truly learning a behavior, and instead is just mapping the problem space.

However, what if I trained another, larger network for a similar problem, where there are environmental uncertainties that randomly occur (e.g. an oil patch on the ground so the cart dynamics change, or the ground is made of ice, or there are stochastic disturbances that simulate someone bumping the cart)? If the training is successful, the resulting neural network would be learning the behaviour of balancing the cart for a variety of situations (robustness), instead of just pushing it left or right depending on the pole angle.

The cart pole problem may not be the best example for this, since it's a relatively simple control problem, but, for more complex behaviors (e.g. autonomous driving), where does this inflection point between learning and mapping exist?

Is this even a valid question, or am I just completely mistaken and everything is technically just a function approximation and there is never any "true" robust learning happening?

",45349,,,,,3/18/2021 20:39,Where is the difference between a neural network mapping a problem space and learning a behaviour?,,0,0,,,,CC BY-SA 4.0 26912,2,,22888,3/18/2021 22:55,,1,,"

The premise of this question is somewhat misleading. There is a deterministic optimal policy for a MDP, but this does not mean a stochastic optimal policy never exists. Talking about the optimal policy can be misleading, as there may be many different optimal policies.

For example, certainly we could imagine an MDP where $Q^*(s,a_1) = Q^*(s,a_2)$ for two different actions $a_1$ and $a_2$ that both maximize the optimal action-value function $Q^*$ at some state $s$. Then a stochastic policy choosing randomly between $a_1$ and $a_2$ at $s$ is optimal, but so is a deterministic policy that always picks $a_1$ at $s$, and a deterministic policy that always picks $a_2$ at $s$.

",45529,,,,,3/18/2021 22:55,,,,1,,,,CC BY-SA 4.0 26914,2,,10180,3/19/2021 6:00,,-1,,"

Often terminology is vague, and if this will be a test item for school, I would circle back with the teacher about my response before accepting it.

The key difference is prior knowledge.

Search - when one searches, one explores without prior knowledge. This would likely be a stochastic process of trial and error. There is no expectation about state transitions. An example is exploring an abandoned house. You have no idea what you'll find next.

Plan - you have a model of state transitions and can make predictions. One-step or multi-step prediction capabilities allow you to predict an outcome and therefore choose an outcome from several options.

",40779,,,,,3/19/2021 6:00,,,,5,,,,CC BY-SA 4.0 26920,1,,,3/19/2021 12:04,,1,135,"

I am trying to code a simple two-layered neural network (NN), as I have described here https://itisexplained.com/html/NN/ml/5_codingneuralnetwork/

I am getting stuck on the last step of updating the weights after calculating the gradients for the outer and inner layers via back-propagation

#---------------------------------------------------------------

# Two-layered NN. Using (1) and the equations we derived as explanations
# (1) http://iamtrask.github.io/2015/07/12/basic-python-network/
#---------------------------------------------------------------

import numpy as np
# seed random numbers to make calculation deterministic 
np.random.seed(1)

# pretty print numpy array
np.set_printoptions(formatter={'float': '{: 0.3f}'.format})

# let us code our sigmoid function
def sigmoid(x):
    return 1/(1+np.exp(-x))

# let us add a method that takes the derivative of x as well
def derv_sigmoid(x):
   return x*(1-x)

# set learning rate as 1 for this toy example
learningRate =  1

# input x, also used as the training set here
x = np.array([ [0,0,1],[0,1,1],[1,0,1],[1,1,1]  ])

# desired output for each of the training set above
y = np.array([[0,1,1,0]]).T

# Explanation - as long as the input has two ones, but not three, the output is one
"""
Input [0,0,1]  Output = 0
Input [0,1,1]  Output = 1
Input [1,0,1]  Output = 1
Input [1,1,1]  Output = 0
"""

input_rows = 4
# Randomly initialised weights
weight1 =  np.random.random((3,input_rows))
weight2 =  np.random.random((input_rows,1))

print("Shape weight1",np.shape(weight1)) #debug
print("Shape weight2",np.shape(weight2)) #debug

# Activation to layer 0 is taken as input x
a0 = x

iterations = 1000
for iter in range(0,iterations):

  # Forward pass - Straight Forward
  z1= x @ weight1
  a1 = sigmoid(z1) 
  z2= a1 @ weight2
  a2 = sigmoid(z2) 

  # Backward Pass - Backpropagation 
  delta2  = (y-a2)
  #---------------------------------------------------------------
  # Calculating change of Cost/Loss w.r.t. weight of 2nd/last layer
  # Eq (A) ---> dC_dw2 = delta2*derv_sigmoid(z2)
  #---------------------------------------------------------------

  dC_dw2  = delta2 * derv_sigmoid(a2)

  if iter == 0:
    print("Shape dC_dw2",np.shape(dC_dw2)) #debug
  
  #---------------------------------------------------------------
  # Calculating change of Cost/Loss w.r.t. weight of 1st/inner layer
  # Eq (B)---> dC_dw1 = derv_sigmoid(a1)*delta2*derv_sigmoid(a2)*weight2
  # note  delta2*derv_sigmoid(a2) == dC_dw2 
  # dC_dw1 = derv_sigmoid(a1)*dC_dw2*weight2
  #---------------------------------------------------------------
  
  dC_dw1 =  (np.multiply(dC_dw2,weight2.T)) * derv_sigmoid(a1)
  if iter == 0:
    print("Shape dC_dw1",np.shape(dC_dw1)) #debug
  

  #---------------------------------------------------------------
  # Gradient descent
  #---------------------------------------------------------------
 
  #weight2 = weight2 - learningRate*dC_dw2 --> these are what the textbook tells
  #weight1 = weight1 - learningRate*dC_dw1 

  weight2 = weight2 + learningRate*np.dot(a1.T,dC_dw2) # this is what works
  weight1 = weight1 + learningRate*np.dot(a0.T,dC_dw1) 
  

print("New ouput\n",a2)

Why is

  weight2 = weight2 + learningRate*np.dot(a1.T,dC_dw2)
  weight1 = weight1 + learningRate*np.dot(a0.T,dC_dw1) 

done instead of

  #weight2 = weight2 - learningRate*dC_dw2
  #weight1 = weight1 - learningRate*dC_dw1 

I don't understand where the equation for updating the weights by multiplying with the activation of the previous layer comes from.

As per gradient descent, the weight update should be

$$ W^{l}_{new} = W^{l}_{old} - \gamma * \frac{\delta C_0}{\delta w^{l}} $$

However, what works in practice is

$$ W^{l}_{new} = W^{l}_{old} - \gamma * \sigma(z^{l-1})\frac{\delta C_0}{ \delta w^{l}}, $$

where $\gamma$ is the learning rate.

",33650,,2444,,3/21/2021 10:03,3/27/2021 4:32,"In gradient descent's update rule, why do we use $\sigma(z^{l-1})\frac{\delta C_0}{ \delta w^{l}}$ instead of $\frac{\delta C_0}{\delta w^{l}}$?",,1,6,,,,CC BY-SA 4.0 26923,1,,,3/19/2021 14:13,,0,43,"

In the soft q-learning paper, they provide an expression for the maximum entropy objective that takes discounting into account.

My main question is: can someone explain how they incorporated discounting into the objective?

I've also got a few other questions related to the form of the discounted objective as well.

The first one is: they first define the objective as a way of obtaining $\pi_{\text{MaxEnt}}^*$.

In this first expression,

$$ \pi_{\mathrm{MaxEnt}}^{*}=\arg \max _{\pi} \sum_{t} \mathbb{E}_{\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right) \sim \rho_{\pi}}\left[\sum_{l=t}^{\infty} \gamma^{l-t} \mathbb{E}_{\left(\mathbf{s}_{l}, \mathbf{a}_{l}\right)}\left[r\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\alpha \mathcal{H}\left(\pi\left(\cdot \mid \mathbf{s}_{t}\right)\right) \mid \mathbf{s}_{t}, \mathbf{a}_{t}\right]\right], $$

I don't really understand the purpose of the inner expectation. If it's an expectation over $(s_l,a_l)$, the terms within the expectation are constants, so they can be taken out of the expectation and even the inner sum too. So, I think the subscript might be wrong, but was hoping someone could confirm this.

My second issue is: they rewrite the maximum entropy objective using $Q_{soft}$ in (16)

$$J(\pi) \triangleq \sum_{t} \mathbb{E}_{\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right) \sim \rho_{\pi}}\left[Q_{\mathrm{soft}}^{\pi}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\alpha \mathcal{H}\left(\pi\left(\cdot \mid \mathbf{s}_{t}\right)\right)\right]$$

I'm not sure how they do this. If someone could provide a proof of this connection, that would be much appreciated.

",42514,,2444,,3/21/2021 9:58,3/21/2021 9:58,How is the discounted maximum entropy objective obtained for soft-q-learning and SAC,,0,2,,,,CC BY-SA 4.0 26926,2,,26896,3/19/2021 19:23,,2,,"

First, let's try to build some intuition for what we mean when we say that we want to "densely cover" a $d$-dimensional space $\mathbb{R}^d$ of real numbers. For simplicity, let's assume that all values in all dimensions are restricted to lie in $[0, 1]$. Even with just a single dimension $d=1$, there are actually already infinitely many different possible values even in such a restricted $[0, 1]$ range.

But generally we don't actually care about literally covering every single possible value. Generally, we expect that points in this $d$-dimensional space that are "close" to each other also "behave" similarly, that there's some level of "continuity". Hence, to get "sufficient" or "good" or "dense" coverage of the space, you can somewhat informally assume that every data point you have occupies some space around it. This is the intuition behind Lutz Lehmann's comment under your question: you can think of every point as being a $d$-dimensional cube occupying some volume of your $d$-dimensional space.

Now, if you have a $d$-dimensional space of size $[0, 1]$ along every dimension, and you have little cubes that occupy a part of that space (for instance, cubes of size $0.1$ in every dimension), you will indeed find that the number of cubes you need to fill up your space scales exponentially in $d$. The basic idea is: if some number of cubes $K$ is sufficient to fill up the $d$-dimensional space, and if you then increase the dimensionality to $d+1$, you'll need $10K$ cubes to fill the new space (with cubes of side $0.1$; in general, the factor is the number of cubes needed to cover a single dimension). When you add a new dimension, the complete previous space becomes essentially just one "slice" of the new space.

For dimensions $d = 1, 2, 3$, this is fairly easy to visualise. If you have $d=1$, your space is really just a line, or a line segment if you constrain the values to lie in $[0, 1]$. If you have a $[0, 1]$ line segment, and you have little cubes of length $0.1$, you'll need just ten of them to fill up the line.

Now imagine that you add the second dimension. Suddenly your line becomes an entire plane, or a $10\times10$ square grid. The $10$ cubes are now only sufficient to fill up a single row, and you'll have to repeat this $10$ times over to fill up the entire $2$D space; you need $10^2 = 100$ cubes.

Now imagine that you add the third dimension. What used to be a plane gets "pulled out" into an entire three-dimensional cube -- a large cube, which will require many little cubes to fill! The plane that we had previously is again just a flat slice in this larger $3$D space, and the entire strategy for filling up a plane will have to be repeated $10$ times over to fill up $10$ such slices of the $3$D space; this now requires $10^3 = 1000$ cubes.

Past $3$ dimensions, the story continues in exactly the same way, but is a bit harder for us humans to visualise.

",1641,,,,,3/19/2021 19:23,,,,3,,,,CC BY-SA 4.0 26927,2,,26877,3/19/2021 21:22,,2,,"

A lot of natural language processing software are (in 2021) using statistical approaches.

Read The Deep Learning Revolution (by T. Sejnowski), Artificial Beings: The Conscience of a Conscious Machine (by J. Pitrat), Introduction to Deep Learning (by E. Charniak).

However, mixed approaches (like in RefPerSys) can also be used.

The so-called frame-based approaches can, and have been used, in NLP. From an abstract point of view, they are close to algebra. And probably are still one of the best approaches for generating natural language sentences (in written form).

Pitrat has for dozen of years taught and advocated a symbolic AI approach to NLP. You could read his French book Métaconnaissances, futur de l'IA (metaknowledge, future of AI).

Read also books like Knowledge Representation and Reasoning (Brachman & Levesque).

You might also read (if you can read French) Nicolas Bourbaki's Théorie des Ensembles or (in English) Interactive Theorem Proving and Program Development (by Bertot & Castéran).

Symbolic AI approaches are definitely "algebra-like". In practice, they don't need powerful GPGPUs and don't even do a lot of floating-point operations.

I assume you want to process human language text available as textual files (e.g. HTML, XML, UTF8 encoded plain text, etc...). If you want to process sounds and speech (for example, an audio signal coming from a microphone), you do need much more signal processing and floating-point operations. If you want to process images (e.g. handwritten text), it is also different. In France, I would recommend Professor Mohamed Daoudi.

",3335,,3335,,12/27/2021 13:53,12/27/2021 13:53,,,,3,,,,CC BY-SA 4.0 26928,1,,,3/20/2021 8:11,,2,82,"

I learned from this blog post Self-Supervised Learning: The Dark Matter of Intelligence that

We believe that self-supervised learning (SSL) is one of the most promising ways to build such background knowledge and approximate a form of common sense in AI systems.

What are the other most promising approaches that would be competitive with self-supervised learning?

I only know of knowledge bases, but I don't think they would be that promising, due to the curation problem of large-scale Automated Knowledge Base Construction.

",5351,,2444,,3/20/2021 23:16,12/17/2021 15:53,What are some most promising ways to approximate common sense and background knowledge?,,1,0,,,,CC BY-SA 4.0 26931,2,,26882,3/20/2021 21:17,,0,,"

While using a neural network for this type of problem is not the ideal use-case, it is a good exercise.

In terms of conceptual issues, the most concerning one that I see is the loss: $\sum_{i=1}^N L_i$.

The first issue is that it weights the loss at each time step equally. This is probably not ideal because, in the example (cat), we don't expect the model to know it's English from just c or ca, but from cat. The quickest fix, and probably the best, is to just use $L = L_N$. Though an argument would be that the model should become more and more confident as it gets more letters, and this loss function doesn't achieve that, so another solution would be to add another fixed parameter that you can play with: $L = \sum_i r^{-i}L_i$ where $0 \lt r \lt 1$. Note that $r^{-i}$ can be replaced with any function that increases with $i$.
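
As a tiny numpy illustration of that weighting (the per-step losses and $r$ are made up):

import numpy as np
per_step_losses = np.array([1.2, 0.9, 0.4])              # L_1 ... L_N for a 3-step word
r = 0.5
weights = r ** -np.arange(1, per_step_losses.size + 1)   # 2, 4, 8: later steps count more
L = np.sum(weights * per_step_losses)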

Another issue with the loss is that it doesn't normalize, meaning that, on average, longer words will hold more weight in the model than shorter ones; while this may be intended, it should be noted and considered (also note that, if you do end up going with $L=L_N$, this will no longer be a concern).

Hope this helps

",25496,,,,,3/20/2021 21:17,,,,5,,,,CC BY-SA 4.0 26932,1,26933,,3/21/2021 4:54,,0,43,"

I have a dataset of movie reviews annotated by 3 persons. The following example contains one sentence with corresponding annotations from 3 different persons.

sentence = ['I', 'like', 'action', 'movies','!']
annotator_1 = ['O','O', 'B_A', 'I_A', 'O'] 
annotator_2 = ['O','O', 'B_A', 'I_A', 'O'] 
annotator_3 = ['O','O', 'B_A', 'O', 'O']

The labels follow the BIO format. That is, B_A means the beginning of the aspect-term (action) and I_A indicates the inside of the aspect-term (movies). Unfortunately, the annotators do not always agree with each other. While the first two persons assigned the right labels for the aspect-term (action movies), the last one mislabeled the token (movies).

I am using a Bi-LSTM-CRF sequence tagger to train the model. However, I am not sure if I am using the training data correctly.

Is it correct to feed the model the same sentence with annotations from 3 persons? Then test it in the same way, i.e., the same sentence with different annotations?

Another question.

I merged the annotations in one final list of labels as follows:

final_annotation = ['O','O', 'B_A', 'I_A', 'O']

In this case, the final label is chosen based on the majority of labels among three annotators.

Is it right to feed the model the same sentence with corresponding annotations from all users during the testing phase?

",44718,,2444,,3/22/2021 9:31,3/22/2021 9:31,How to train a sequence labeling model with annotations from three annotators?,,1,0,,,,CC BY-SA 4.0 26933,2,,26932,3/21/2021 15:42,,1,,"

Both ways are valid. It depends on what you want from the model and expect from the data. Generally, though, I would use one assumption and stick with it (unless there was a specific reason not to), so I would use all annotations for testing if training was done that way, and the same for the majority vote.

Also note that, if you ever get more than 3 annotators, you can choose to do a variance-based approach (use the data only if at least x% agree and throw it away otherwise, or you could even weigh controversial labels lower).

",25496,,,,,3/21/2021 15:42,,,,2,,,,CC BY-SA 4.0 26934,1,,,3/21/2021 21:02,,0,112,"

I've seen several people say that sigmoids are like a saturating firing rate of a neuron, but I don't see how or why they interpret it as such. I especially don't see the relationship between a "rate" (a count of something per unit of time; I guess here it's the number of times a neuron fires in a unit of time) and the sigmoid graph. To me, it resembles more the voltage output of an operational amplifier in some cases.

",44965,,,,,3/21/2021 21:02,Why is the sigmoid function interpreted as a saturating firing rate of a neuron?,,0,4,,,,CC BY-SA 4.0 26935,1,,,3/21/2021 23:03,,1,25,"

I want to implement a DDPG method and, obviously, the action space will be continuous. I have three outputs. The first output should be zero or a value between 200 and 400, and the other outputs have similar conditions. I don't know how I can implement this condition in the layers and activation functions. Should I use a binary activation before the scaled sigmoid function? How can I scale the activation function for this example?

(a1 = 0) or (200 < a1 < 400)
(a2 = 0) or (100 < a2 < 500)
(a3 = 0) or (200 < a3 < 1000)

",45572,,,,,3/22/2021 3:13,How to have zero value or a value between 200 and 400 in the output of a deep learning model?,,1,0,,,,CC BY-SA 4.0 26936,2,,26935,3/22/2021 3:13,,2,,"

Generally, the approach is to have a separate head. For example, imagine you have a latent vector $z_k$; you would output two values, $h(z_k)$ and $f(z_k)$, where $0 \leq h \leq 1$ and $b_0 \leq f \leq b_1$, with $b_0$ and $b_1$ being your bounds.

In this setup, during inference, you would check $h_k$ and, if it's greater than some threshold (usually 0.5), you'd evaluate/output $f_k$.

In this case your loss would look something like $L_i = L_i^{(1)}(h_i, y_i) + y_i \cdot L_i^{(2)}(f_i, v_i)$ where $(y,v)$ are your labels, and the $L$'s are your losses of choice.
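
For illustration, a minimal PyTorch sketch of this two-head idea, assuming some latent vector size and the (200, 400) bounds of the first output (everything here is a placeholder, not a full DDPG actor):

import torch
import torch.nn as nn

class TwoHead(nn.Module):
    def __init__(self, latent_dim, low=200.0, high=400.0):
        super().__init__()
        self.gate = nn.Linear(latent_dim, 1)    # h(z): probability that the action is non-zero
        self.value = nn.Linear(latent_dim, 1)   # f(z): raw value, squashed into [low, high]
        self.low, self.high = low, high

    def forward(self, z):
        h = torch.sigmoid(self.gate(z))
        f = self.low + (self.high - self.low) * torch.sigmoid(self.value(z))
        return h, f

# at inference: output f if h > 0.5, otherwise output 0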

",25496,,,,,3/22/2021 3:13,,,,0,,,,CC BY-SA 4.0 26937,1,,,3/22/2021 5:21,,1,74,"

From what I learned from CS285 and OpenAI's Spinning Up, a trajectory in RL is a sequence of state-action pairs:

$$\tau = \{s_0, a_0, ..., s_t, a_t\}$$

And the resulting trajectory probability is:

$$ P(\tau \mid \pi)=\rho_{0}\left(s_{0}\right) \prod_{t=0}^{T-1} P\left(s_{t+1} \mid s_{t}, a_{t}\right) \pi\left(a_{t} \mid s_{t}\right) $$

  1. From CS285: http://rail.eecs.berkeley.edu/deeprlcourse/static/slides/lec-4.pdf

  2. From spinning up: https://spinningup.openai.com/en/latest/spinningup/rl_intro.html#trajectories

However, from my derivation, the above trajectory probability actually corresponds to the following sequence where the last action $a_t$ is absent:

$$ \tau = \{s_0, a_0, ..., s_t\} $$

Can someone please help me clarify this confusion?

",9180,,2444,,3/22/2021 9:28,3/22/2021 10:20,Does a trajectory in reinforcement learning contain the last action?,,1,0,,,,CC BY-SA 4.0 26940,2,,26906,3/22/2021 9:25,,0,,"

LASER creates multilingual contextualized word embeddings; what you do with them is up to you. You can use this as feature extraction and add whatever you want to the end of the network.

I believe the implementation by Facebook does not let you change the weights of the LASER model itself; they are frozen, to the best of my knowledge.

So, yes, you can use LASER for downstream tasks, but you cannot change the actual weights of the LASER model.

",41319,,,,,3/22/2021 9:25,,,,0,,,,CC BY-SA 4.0 26941,2,,26937,3/22/2021 10:20,,1,,"

If we use $T$ as the notation for the terminal state, then the last action is $a_{T-1}$. This is because when you reach state $s_T$ you don't take another action, which would be $a_T$, because the episode is finished upon reaching the terminal state.

",36821,,,,,3/22/2021 10:20,,,,0,,,,CC BY-SA 4.0 26942,1,26943,,3/22/2021 10:37,,2,323,"

I'm trying to understand the concept of Variational Inference for BNNs. My source is this work. The aim is to minimize the divergence between the approx. distribution and the true posterior

$$\text{KL}(q_{\theta}(w) \,\|\, p(w \mid D)) = \int q_{\theta}(w) \log \frac{q_{\theta}(w)}{p(w \mid D)} \, dw$$

This can be expanded out as $$- F[q_{\theta}] + \log p(D)$$ where $$F[q_{\theta}] = -\text{KL}(q_{\theta}(w) \,\|\, p(w)) + E[\log p(D \mid w)]$$

Because $\log p(D)$ does not contain any variational parameters, the derivative will be zero. I really would like to summarize the concept of VI in words.

How can one explain the last formula intuitively in words, and, with it, the fact that one approximates a distribution without really knowing it / being able to compute it?

My attempt would be: minimizing the KL between the approximate distribution and the true posterior boils down to minimizing the KL between the approximate distribution and the prior (?) and maximizing the log-likelihood of the data under parameters drawn from the approximate distribution. Is this somehow correct?

",45600,,2444,,3/23/2021 11:42,3/23/2021 11:42,What is the intuition behind variational inference for Bayesian neural networks?,,1,0,,,,CC BY-SA 4.0 26943,2,,26942,3/22/2021 11:09,,4,,"

Your description of what is going on is more or less correct, although I am not completely sure that you have really understood it, given your last question.

So, let me enumerate the steps.

  1. The computation of the posterior is often intractable (given that the evidence, i.e. the denominator of the right-hand side of the Bayes' rule, might be numerically expensive to approximate/compute or there's no closed-form solution)

  2. To address this intractability, you cast the Bayesian inference problem (i.e. the application of Bayes' rule) as an optimization problem

    1. You assume that you can approximate the posterior with another simpler distribution (e.g. a Gaussian), known as the variational distribution

    2. You formulate this optimization problem as the minimization of some notion of distance (e.g. the KL divergence) between the posterior and the VD

    3. However, the KL divergence between the posterior and the VD turns out to be intractable too, given that, if you expand it, you will find out that there's still an evidence term

    4. Therefore, you use a tractable surrogate (i.e. equivalent, up to some constant) objective function, which is known as the evidence lower bound (ELBO) (which is sometimes known as the variational free energy), which is the sum of 2 terms

      1. the negative KL divergence between the VD and the prior
      2. the expected log-likelihood of the data under the VD

To address your last doubt/question, the ELBO does not contain the posterior (i.e. what you really want to find), but only the variational distribution (you choose this!), the prior (which you also define/choose), and the likelihood (which, in practice, corresponds to the typical usage of the cross-entropy; so the only thing that you need more, with respect to the traditional neural networks, is the computation of the KL divergence): in other words, you originally formulate the problem as the minimization of the KL divergence between the posterior and the VD, but this is just a formulation.
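
Written out, the surrogate ELBO objective mentioned above is

$$\text{ELBO}(\theta) = \mathbb{E}_{q_{\theta}(w)}[\log p(D \mid w)] - \text{KL}(q_{\theta}(w) \,\|\, p(w)),$$

and maximizing it only requires evaluating the variational distribution, the prior and the likelihood, which is why it is tractable.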

",2444,,2444,,3/22/2021 14:05,3/22/2021 14:05,,,,4,,,,CC BY-SA 4.0 26944,1,,,3/22/2021 11:34,,2,113,"

I read this paper Text Compression as a Test for Artificial Intelligence, Mahoney, 1999.

So far, I understood the following: text compression tests can be used as an alternative to Turing tests for intelligence. The bits-per-character score obtained from the compression of a standard benchmark corpus can be used as a quantitative measure of intelligence.

My questions:

  1. Is my understanding of the topic correct?
  2. Does this mean that applications like 7zip/WinRar are intelligent?
  3. How are the ways a human compresses information (as in the form of a summary) and the ways a computer compresses (using Huffman coding or something) compatible? How can we compare them?
",45586,,45586,,3/23/2021 9:27,12/13/2022 17:12,Do text compression tests qualify winRar or 7zip as intelligent?,,2,0,,,,CC BY-SA 4.0 26946,2,,26944,3/22/2021 14:14,,1,,"

The paper suggests an alternative test to the famous Turing test, which tests a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

In this test, if winRar or 7zip will compress a file similarly to how a human would compress a file (how does a human compress a file?!), then, yes, those programs will pass the test and will be considered intelligent.

... thus compression ratio on a standard benchmark corpus could be used as an objective and quantitative alternative test for AI (Mahoney, 1999).

",43351,,2444,,3/22/2021 14:25,3/22/2021 14:25,,,,2,,,,CC BY-SA 4.0 26947,1,,,3/22/2021 17:20,,1,50,"

The reason I ask this question is because we humans tend to compartmentalize our sensory inputs, except in some individuals that experience synesthesia. If an Artificial Intelligence Entity (AIE) can correlate all sensory input (as a bunch of tensors), wouldn't that be the ultimate form of synesthesia?

An AIE geometry / color synesthesia might lead to an explanation of how Joan Miro colorized his doodles.

",45595,,45595,,3/22/2021 21:40,3/22/2021 21:40,Is it likely that a sentient AI experience synesthesia?,,0,0,,,,CC BY-SA 4.0 26948,1,,,3/22/2021 18:48,,0,34,"

Does it make sense to have the backpropagation of a neural network layer happen all at once if the learning rate is lowered? This would mean the new weights of that layer would be independent of each other, but it would be extremely fast. Is this method feasible in any way for a neural network, or would it create a cost threshold which the network can't reach because of its independent inaccuracy?

",40622,,,,,3/22/2021 18:48,Is vectorizing backpropagation feasible?,,0,4,,,,CC BY-SA 4.0 26949,2,,26882,3/22/2021 20:25,,0,,"

I think the problem is that you're only training the network on words. Every example in your training data has a desired label of "is a word," and so your network could achieve the lowest possible loss by simply giving a probability of 100% to "is a word" all of the time.

The most straightforward way to fix this would be to also include non-words in your training data. Of course, the words should have a target label of "is a word" and the non-words should have a target label of "is not a word."

",1834,,,,,3/22/2021 20:25,,,,0,,,,CC BY-SA 4.0 26950,1,,,3/22/2021 20:32,,0,148,"

I found a very interesting paper on the internet that tries to apply Bayesian inference with a gradient-free online-learning approach: Bayesian Perceptron: Towards fully Bayesian Neural Networks.

I would love to understand this work, but unfortunately I am reaching my limits with my Bayesian knowledge. Let us assume that we have the weights $\mathcal{w}$ of our model and observed the data $\mathcal{D}$. Using the Bayes rule, we obtain the posterior according to $$p(\mathcal{w}|D)=\frac{p(D|\mathcal{w})p(\mathcal{w})}{p(D)}$$.

In words: we update our prior belief over our weights by multiplying the prior with the likelihood and dividing everything by the evidence. In order to calculate the true posterior, we would need to calculate the evidence by marginalizing over (integrating out) our unknown parameters. This gives the integral $$p(D) = \int p(D|\mathbf{w})p(\mathbf{w})dw$$.

So far so good. Now I refer to the paper mentioned above. Here, the approach is presented exemplarily on a neuron whose weighted sum is called $a$, which is then given to the activation function $f(.)$. Moreover it is assumed that $\mathbf{w}\sim N (\mu_w, \mathbf{C}_w)$. Because of the linearity it can be exploited that also $\mathbf{a}\sim N (\mu_a, \mathbf{C}_a)$.

What I am confused about now is formula (14), which seems to show how to compute the true posterior: $$p(w|D_i) = \int p(a, w|D_i)da = \int p(w|a, D_i)p(a|D_i)da$$

Why is $a$ integrated out here and not $w$? We want a distribution over $w$, don't we? But without marginalization over $w$, there is still uncertainty inside $w$.

Glad about any help and food for thought;)

",45600,,45600,,3/23/2021 7:37,3/23/2021 15:53,Bayesian Perceptron: Why to marginalize over neuron's output instead of it's weights?,,1,0,,,,CC BY-SA 4.0 26951,1,26953,,3/22/2021 21:36,,1,145,"

I have two questions on the Dueling DQN paper. First, I have an issue understanding the identifiability that the Dueling DQN paper mentions:

Here is my question: if we are given Q-values $Q(s, a; \theta)$ for all actions, I assume we can get the value of state $s$ by:

$$V(s) = \frac {1} {|\mathcal{A}(s)|} \sum_{a \in \mathcal{A}(s)} Q(s, a; \theta)$$ and the advantage by: $$A(s,a) = Q(s, a; \theta) - V(s), \quad \forall a \in \mathcal{A}(s)$$

in which $\mathcal{A}(s)$ is the action space for state $s$. If this is correct, why do we need to have two heads in the network to obtain value and advantage separately?

and then obtain Q-value using

$$Q(s, a; \theta, \alpha, \beta) = V(s; \theta, \beta) + \left( A(s, a; \theta, \alpha) - \max_{a' \in | \mathcal{A} |} A(s, a'; \theta, \alpha) \right). \tag{8}$$

or $$Q(s, a; \theta, \alpha, \beta) = V (s; \theta, \beta) + \left( A(s, a; \theta, \alpha) − \frac {1} {|A|} \sum_{a' \in \mathcal{A}} A(s, a'; \theta, \alpha) \right). \tag{9}$$

Am I missing something?

My second question is: why does Dueling DQN not use a target network, as is used in the DQN paper?

",16912,,2444,,1/28/2023 13:15,1/28/2023 13:15,"Why do we need to have two heads in D3QN to obtain value and advantage separately, if V is the average of Q values?",,1,1,,,,CC BY-SA 4.0 26952,1,26954,,3/23/2021 2:39,,2,188,"

How would the performance of federated learning (FL) compare to the performance of centralized machine learning (ML), when the data is independent and identically distributed (i.i.d.)?

Moreover, what is the difference in the performance of FL when the data is i.i.d. as compared to non-i.i.d?

",45605,,2444,,3/23/2021 20:15,3/23/2021 20:15,How would the performance of federated learning compare to the performance of centralized machine learning when the data is i.i.d.?,,1,0,,,,CC BY-SA 4.0 26953,2,,26951,3/23/2021 4:19,,1,,"

Regarding your first question, $$V^{\pi}(s) = \sum_{a \in A}\pi(a|s)Q^{\pi}(s,a)$$ so recovering the value function from Q really depends on what policy $\pi$ you are using. Hence, you can't really recover the value function $V(s)$ from the $Q(s,a)$ values without knowing your policy distribution for state $s$.

However, you can recover the $Q^{\pi}(s,a)$ values if you know $V^{\pi}(s)$ and $A^{\pi}(s,a)$. This is because $$A^{\pi}(s,a) = Q^{\pi}(s,a) - V^{\pi}(s)$$

by definition of advantage. And this is why you need 2 heads to recover the $Q$ values from the Value and Advantage functions. In the original paper, the authors do not use this direct equation to recover the $Q^{\pi}(s,a)$ values, due to the "identifiability" issue and the fact that both $V^{\pi}(s)$ and $A^{\pi}(s,a)$ are only estimates.
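
For intuition, here is a tiny numpy sketch of how the two heads are combined in practice (equation (9) of the paper; the numbers are made up):

import numpy as np
V = 1.7                          # scalar output of the value head for state s
A = np.array([0.3, -0.1, 0.5])   # advantage head, one entry per action
Q = V + (A - A.mean())           # identifiable Q-values recovered from the two heads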

Regarding your second question, I believe the authors applied the Dueling architecture to Double Deep Q-Networks, which is an improvement over the single DQN used by Mnih et al. in learning Atari. I do think that you can still use a target network, as in the single DQN case, if you wanted to.

",32780,,,,,3/23/2021 4:19,,,,3,,,,CC BY-SA 4.0 26954,2,,26952,3/23/2021 4:22,,1,,"

There are some works that do this comparison. Briefly, it's been observed that the performance of models trained via FL drops as data distributions between participating agents differ. When data is IID-like though, performance is comparable to centralized training. Some works that I'm aware of are as follows:

  1. Overcoming Forgetting in Federated Learning on Non-IID Data
  2. Improving Accuracy of Federated Learning in Non-IID Settings
  3. Federated Learning with Non-IID Data

There are probably many more around. It's an active area of research.

",32621,,2444,,3/23/2021 8:56,3/23/2021 8:56,,,,2,,,,CC BY-SA 4.0 26955,2,,17975,3/23/2021 5:56,,1,,"

Okay, I think it's better if we first distinguish loss and accuracy via Jeremy's answer, and I agree with him on the sentence "low or huge loss is a subjective metric".

The loss value is easily affected by noise in the data and can increase significantly because of a few erroneous data points. My advice in this case is to use more evaluation metrics and to understand correctly what you need from your model.

For example, with CIFAR-10, if all you need is as many correct labels as possible, you can trust accuracy. However, if you want your model to be confident that its result is correct, the area under the receiver operating characteristic curve (AUROC) may be the better choice.

For example, consider a classification problem with 3 classes, where the correct label is y = 1:

  • Good accuracy, bad AUROC: the output probability from softmax [0.3,0.4,0.3]
  • Good accuracy, good AUROC: [0.1,0.8,0.1]

And with an imbalanced dataset, precision, recall and F1-score will be more suitable.
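To make the accuracy-versus-AUROC distinction concrete, here is a minimal binary-classification sketch with scikit-learn (toy numbers, not the 3-class example above): two models with the same accuracy can have different AUROC, because AUROC looks at the ranking of the predicted probabilities.

import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 0, 1, 1])

p_model_a = np.array([0.1, 0.6, 0.7, 0.9])   # ranks all positives above all negatives
p_model_b = np.array([0.1, 0.6, 0.55, 0.9])  # one positive ranked below a negative

for name, p in [("A", p_model_a), ("B", p_model_b)]:
    acc = accuracy_score(y_true, (p >= 0.5).astype(int))
    auc = roc_auc_score(y_true, p)
    print(name, "accuracy:", acc, "AUROC:", auc)
# A accuracy: 0.75 AUROC: 1.0
# B accuracy: 0.75 AUROC: 0.75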

",41287,,,,,3/23/2021 5:56,,,,0,,,,CC BY-SA 4.0 26956,1,,,3/23/2021 6:18,,5,218,"

I came across several papers by M. Hutter & S. Legg. Especially this one: Universal Intelligence: A Definition of Machine Intelligence, Shane Legg, Marcus Hutter

Given that it was published back in 2007, how much recognition or agreement has it received? Has any other work been done since that better formalizes the idea of intelligence? What is considered the current standard on the topic in the field?

",45586,,,,,3/3/2022 4:31,How widely accepted is the definition of intelligence by Marcus Hutter & Shane Legg?,,3,2,,,,CC BY-SA 4.0 26957,1,27616,,3/23/2021 7:39,,1,183,"

I found a very interesting paper on the internet that tries to apply Bayesian inference with a gradient-free online-learning approach: Bayesian Perceptron: Towards Fully Bayesian Neural Networks.

I would love to understand this work, but, unfortunately, I am reaching my limits with my Bayesian knowledge. Let us assume that we have the weights $\mathbf{w}$ of our model and observed the data $\mathcal{D}$. Using Bayes' rule, we obtain the posterior according to $$p(\mathbf{w}|D)=\frac{p(D|\mathbf{w})p(\mathbf{w})}{p(D)}.$$

In words: we update our prior belief over our weights by multiplying the prior by the likelihood and dividing everything by the evidence. In order to calculate the true posterior, we would need to calculate the evidence by marginalizing over (integrating out) our unknown parameters. This gives the integral $$p(D) = \int p(D|\mathbf{w})p(\mathbf{w})d\mathbf{w}.$$

So far so good. Now I refer to the paper mentioned above. Here, the approach is presented exemplarily on a neuron whose weighted sum is called $a$, which is then given to the activation function $f(.)$. Moreover it is assumed that $\mathbf{w}\sim N (\mu_w, \mathbf{C}_w)$. Because of the linearity, it can be exploited that also $\mathbf{a}\sim N (\mu_a, \mathbf{C}_a)$.

What I am confused about now is formula (14), which seems to show how to compute the true posterior: $$p(w) = \int p(a, w|D_i)da = \int p(w|a, D_i)p(a|D_i)da$$

How is this formula for the posterior compatible with Bayes' theorem? Where are the evidence, likelihood and prior?

",45600,,2444,,3/23/2021 9:22,5/2/2021 18:12,Bayesian Perceptron: How is it compatible to Bayes Theorem?,,2,0,,,,CC BY-SA 4.0 26958,1,27051,,3/23/2021 7:47,,4,1219,"

In this lecture, the professor says that one problem with the sigmoid function is that its outputs aren't zero-centered. The explanation provided by the professor for why this is bad is that the gradient of the loss w.r.t. the weights, $\frac{\partial L}{\partial w}$, which is equal to $\frac{\partial L}{\partial \sigma}\frac{\partial \sigma}{\partial w}$, will always be either all-negative or all-positive, so we will have a problem updating the weights, as she shows in this slide: we won't be able to move directly in the direction of the vector $(1,-1)$. I don't understand why, since she only talks about one component of the gradient and not the whole vector. If the components of the gradient of the loss can have different signs, that would allow us to adjust in different directions; am I wrong? The other thing I don't understand is how this property generalizes to non-zero-centered activation functions and non-zero-centered data.

",44965,,36737,,3/28/2021 17:55,5/31/2022 0:29,Why is it a problem if the outputs of an activation function are not zero-centered?,,1,0,,,,CC BY-SA 4.0 26959,2,,26957,3/23/2021 10:36,,0,,"

You have two dependent variables, $a$ and $w$, so there is a joint distribution $p(w, a)$. You can marginalize over one of them, pretty much as you did in your second formula: $$p(w) = \int p(w, a)da$$ $$p(w) = \int p(w | a)p(a)da$$

The only difference in this case is that the calculation is made for the specific point $x_i, y_i$, which is emphasized by the sub-index in $p_i$ and the conditioning on $D_i$.

The key thing is that we can calculate the target distribution in many ways. With the likelihood, evidence and prior you could indeed find the posterior, but they are not always tractable/available. That's why in the literature we usually differentiate between the true posterior and an approximate posterior (or just posterior). Usually we get $p(w|D)$ with some form of approximation, but in the paper the authors decided to get a closed-form solution. That's why it was useful to represent it in another way, with the intermediate distribution over $a$. This allows them to get closed-form solutions for different activation functions.

So it's a posterior in the general sense, not via that specific formula. I think your Bayesian knowledge is just fine.

",16940,,16940,,3/23/2021 14:33,3/23/2021 14:33,,,,5,,,,CC BY-SA 4.0 26961,2,,26950,3/23/2021 11:08,,1,,"

We want a distribution over $w$, don't we?

Yes. You want to obtain a distribution over the parameters, which models the uncertainty about the parameters. This distribution over the parameters can induce a probability distribution over the possible functions consistent with your data.

Why is $a$ integrated out here and not $w$?

This is just the definition of marginalization.

For simplicity, consider 2 discrete random variables $X$ and $Y$, which can take values $x_i$ and $y_i$, respectively, and for which you can have (at least conceptually) a joint distribution $p_{(X, Y)}(x, y)$. Then the marginal probability that $X$ takes the value $x_i$ independently of any outcome of the r.v. $Y$ (and this is the intuition behind the marginal distribution!) is given by

$${\displaystyle p_{X}(x_{i})=\sum _{j}p(x_{i},y_{j})}, \tag{1}\label{1}$$

So, in the case of equation 14 of the paper, there's the assumption that you have $p(w, a)$. Then $p(w)$ is a marginal distribution over $w$ (the parameters or weights of the neural network), independently of the values of the activations. Of course, $w$ and $a$ can be dependent or correlated, and that's why you also have $p(w, a)$, i.e. the probability distribution that describes how these two r.v.s vary together, but what you want to know is the probability distribution over $w$ independently of $a$, so you "marginalize out" the possible values of $a$ (not $w$).

Remember that $p(x, y) = p(x \mid y) p(y) = p(y \mid x) p(x)$ (this is known as the chain rule of probability for two r.v.s, which leads to the Bayes' theorem), so you could write equation (\ref{1}) as follows

$${\displaystyle p_{X}(x_{i})=\sum _{j}p(x_{i},y_{j})} = \sum _{j}p(x_{i} \mid y_{j}) p(y_j), \tag{2}\label{2}$$

In the case of real-valued r.v.s, you will have an integral rather than a summation, but the idea is roughly the same.
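As a minimal illustration of equation (1), here is a small numpy sketch (the joint probabilities are made-up numbers) that marginalizes a discrete joint distribution:

import numpy as np

# p_xy[i, j] = p(X = x_i, Y = y_j); rows index X, columns index Y.
p_xy = np.array([[0.10, 0.20],
                 [0.30, 0.05],
                 [0.15, 0.20]])

p_x = p_xy.sum(axis=1)  # sum over the values of Y ("marginalize out" Y)
p_y = p_xy.sum(axis=0)  # sum over the values of X

print(p_x)  # [0.3  0.35 0.35]
print(p_y)  # [0.55 0.45]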

But without marginalization over $w$ there is still uncertainty inside $w$.

What we want to know is $p(w)$, i.e. a distribution over $w$, which models the uncertainty about the possible values that $w$ can take. So, once you know $p(w)$, depending on its shape, we will have more or less uncertainty about the possible values of $w$. For example, let's say $p(w)$ is a Gaussian distribution. Then, depending on the variance, you will have more or less uncertainty about the possible values of $w$.

Your doubt seems to be that you thought that marginalization is a way to not have uncertainty, but that's not true. So, the goal in Bayesian machine/deep learning is not to be certain about the values of the parameters, but to model the uncertainty about the values of the parameters.

",2444,,2444,,3/23/2021 15:53,3/23/2021 15:53,,,,1,,,,CC BY-SA 4.0 26962,2,,18353,3/23/2021 12:50,,0,,"

We basically distinguish between 3 forms of batch training: $$Loss_{minibatch} = \sum_{m \in M} l_m(\mathbf{W},t_m)$$ where $M$ is a (random) subset of the whole dataset.

$$Loss_{batch} = \sum_{b \in B} l_b(\mathbf{W},t_b)$$ where $B$ is the whole dataset.

$$Loss_{stochastic} = l_i(\mathbf{W},t_i)$$ where $i$ is a single sample from the whole dataset.

Here $t$ is the target/label of a sample $m, b, i$, and $\mathbf{W}$ are the network weights. The most common case today is minibatch training.

When we are training (updating the weights of the neural network to optimize towards a lower loss), we take the derivative of this loss function with respect to the weights $\mathbf{W}$. This gives us the gradient of the NN, which tells us how much and in what direction we should update each weight.

$$ \nabla L = \frac{dLoss_{minibatch}}{d \mathbf{W}} = \frac{d\sum_m l_m(\mathbf{W},t_m)}{d \mathbf{W}} = \sum_m \frac{d l_m(\mathbf{W},t_m)}{d \mathbf{W}} = \sum_m \nabla l_m$$

As you can see here in the example of the minibatch case, the total gradient is the sum of the gradients of the individual samples in the minibatch. So why do you think the first elements do not have an impact on the weight update? Or do I understand you wrong?
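Here is a minimal PyTorch sketch (toy numbers, a single linear map) confirming that the minibatch gradient is the sum of the per-sample gradients:

import torch

torch.manual_seed(0)
w = torch.randn(3, requires_grad=True)      # "network" weights
x = torch.randn(4, 3)                       # a minibatch of 4 samples
t = torch.randn(4)                          # targets

def sample_loss(i):
    return (x[i] @ w - t[i]) ** 2           # l_i(W, t_i)

# Gradient of the summed minibatch loss ...
loss = sum(sample_loss(i) for i in range(4))
grad_batch = torch.autograd.grad(loss, w)[0]

# ... equals the sum of the per-sample gradients.
grad_sum = sum(torch.autograd.grad(sample_loss(i), w)[0] for i in range(4))
print(torch.allclose(grad_batch, grad_sum))  # True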

",13104,,13104,,3/23/2021 12:58,3/23/2021 12:58,,,,0,,,,CC BY-SA 4.0 26963,2,,20820,3/23/2021 14:24,,1,,"

I think your notation is unclear, but I can give an answer based on what you probably meant. For example, $\frac{\partial{L}}{\partial{W^x}}$ should be replaced by $(\nabla_{W^x_{j:}}L)_{j=1, ...,n}$ (assuming everything stays in $\mathbb{R}^n$). Also, your expression for $\frac{\partial{L}}{\partial{W^x}}$ is wrong, even accounting for the notation.

Since $W^x_{j:}$ affects the loss through $H_{1,j}$ and $H_{2,j}$, it would be better to treat the math in this way: $$\nabla_{W^x_{j:}}L=\frac{\partial{L}}{\partial{H_{1,j}}}\nabla_{W^x_{j:}}H_{1,j}+\frac{\partial{L}}{\partial{H_{2,j}}}\nabla_{W^x_{j:}}H_{2,j}$$ Now, $H_{1, j}$ affects the loss through $H_{2,k}\ \forall\ k=1,...,n.$ So, $$\frac{\partial{L}}{\partial{H_{1,j}}}=\sum_{k=1}^{n}\frac{\partial{L}} {\partial{H_{2,k}}}W^x_{kj}$$ And, $$\frac{\partial{L}}{\partial{H_{2,j}}}=\sum_{k=1}^{n}\frac{\partial{L}} {\partial{Y_{k}}}W^y_{kj}$$ Similarly, $\nabla_YL$ can be computed.

",45625,,,,,3/23/2021 14:24,,,,0,,,,CC BY-SA 4.0 26966,1,,,3/23/2021 16:39,,2,87,"

I don't know anything about ML or NLP, but I was asked by someone to create brand new statutes (written laws) that resemble the ones currently in effect in my country. I have already gathered the laws, and have 5000 html files now, one per law.

The average size of each html file is 49 kB. The entire corpus is 300 MB.

I have two alternative goals (doing both would be perfect of course):

  • Generate a new, complete HTML file, that would imitate the 5000 existing ones (it would typically have 1 big heading at the top, sub-headings, articles with their own title and number, etc.)

  • Generate sentences that sound as if they could be found in a typical law (the laws are written in French)

Is either of those goals feasible with such a small corpus (~300 MB total)?

Should I try and fine-tune an existing model (but in that case, wouldn't the small size of my corpus be a problem? Wouldn't it be "drowned out" in the rest of the training data?), or should I create one from scratch?

I've tried following guides on huggingface, but between the obsolete files, the undocumented flags and my general lack of knowledge of the subject, I'm completely lost.

Thanks in advance.

BTW, if you want to take a peek at the data, there it is: https://github.com/Biganon/rs/

",45627,,,,,8/23/2021 13:02,"I have 5000 html files (structured text), how can I generate a new one that ""resembles"" those?",,1,3,,,,CC BY-SA 4.0 26968,2,,26875,3/23/2021 17:57,,3,,"

One answer is an infinite amount of time, because it can always be better.

Another answer is:

  • 10k for training set
  • A PC with a GPU (3~4k USD), google colab (10 USD per month), or other cloud service (probably more expensive than colab)
  • One developer, 1 day lol
  • Two kinds is easier than multiple kinds
  • There is no paper that seeks to answer your question the way you put it. I wouldn't even recommend a paper. In fact, for you I'd recommend an AutoML tutorial. Check these. * no offence if I've misjudged your knowledge/skill level.
  • Here's a paper anyway :) https://paperswithcode.com/lib/torchvision/alexnet

To conclude, please be aware that your question is super open ended, and my answer is bad (but good enough for now maybe), but a good answer doesn't really exist. It's always going to be context dependent. For instance, you never said whether you need 90% or 99% accuracy.

",16871,,,,,3/23/2021 17:57,,,,1,,,,CC BY-SA 4.0 26971,1,26976,,3/24/2021 1:43,,0,87,"

I have a binary classification problem.

My neural network is getting between 10% and 45% accuracy on the validation set and 80% on the training set. Now, if I have a 10% accuracy and I just take the opposite of the predicted class, I will get 90% accuracy.

I am going to add a KNN module that shuts down that process if the input data is identical, or very similar, to the data present in the dataset.

Would this be a valid approach for my project (which is going to go on my resume)?

",32636,,2444,,3/26/2021 17:25,3/31/2021 22:20,Could I just choose the other (non-predicted) class when the accuracy is low?,,2,4,,,,CC BY-SA 4.0 26973,1,26980,,3/24/2021 7:48,,3,1129,"

There is plenty of information describing Transformers in a lot of detail, including how to use them for NLP tasks. Transformers can also be applied to time series forecasting. See, for example, "Adversarial Sparse Transformer for Time Series Forecasting" by Wu et al.

For understanding it is best to replicate everything according to already existing examples. There is a very nice example for LSTM with flights dataset https://stackabuse.com/time-series-prediction-using-lstm-with-pytorch-in-python/.

I guess I would like to know how to implement transformers for at first univariate (flight dataset) and later for multivariate time series data. What should be removed from the Transformer architecture to form a model that would predict time series?

",45648,,5763,,3/24/2021 16:18,4/24/2021 19:11,How to construct Transformers to predict multidimensional time series?,,1,0,,,,CC BY-SA 4.0 26976,2,,26971,3/24/2021 13:00,,2,,"

The short answer is no, you shouldn't do that.

There is a "distribution shift" thing when you have different x-y relation on the validation set then on the train set. The distribution shift would deteriorate your model performance and you should try to avoid that. The reason it's bad - ok, you find the way to fix the model for validation data, but what about novel test data? Will it be like a train? Will it be like validation? You don't know and your model is worthless in fact.

What you can do

  1. Redefine the train/validation scheme. I would recommend cross-validation with at least three splits (see the sketch below).
  2. As mentioned in a comment, your model seems to overfit; try to apply some regularization techniques, like dropout, batch norm, data augmentation, model simplification and so on.
  3. Add more data if possible, to better cover all the possible cases.
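A minimal sketch of point 1 with scikit-learn (the dataset and model are just placeholders for your own data and network):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data and a toy model; swap in your own X, y and estimator.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=3)
print(scores, scores.mean())  # one accuracy per split, plus the average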
",16940,,36737,,3/29/2021 22:23,3/29/2021 22:23,,,,3,,,,CC BY-SA 4.0 26977,1,26987,,3/24/2021 13:01,,0,79,"

I have asked a question here, and one of the comments suggested that this is a case of severe overfitting. I made a neural network, which uses residual boosting (which is done via a KNN), and I am still just able to get < 50% accuracy on the test set.

What should I do?

I tried everything from reducing the number of epochs to replacing some layers with dropout.

Here is the source code.

",32636,,2444,,3/25/2021 9:44,3/25/2021 9:44,What are possible ways to combat overfitting or improve the test accuracy in my case?,,1,0,,,,CC BY-SA 4.0 26978,1,,,3/24/2021 13:54,,3,261,"

Are there heuristics that play Klondike Solitaire well?

I know there are some good exhaustive search solvers for Klondike Solitaire. The best one that I know of is Solvitaire (2019) which uses DFS, (see paper, code).

I searched the web for heuristics that play as a human would, with no backward moves; however, I found only one. In that paper, they report a win rate of 13.05%. In comparison, human experts reach a 36.6% win rate in thoughtful solitaire, which is Klondike Solitaire where the location of all the cards is known. Source: Solitaire: Man Versus Machine (2005).

Are there any other published heuristics for Klondike Solitaire?

When determining if a heuristic is interesting, I would consider its win-rate and the similarity to how humans are playing.

",43351,,3171,,4/17/2021 21:16,4/17/2021 21:16,Are there heuristics that play Klondike Solitaire well?,,0,2,,,,CC BY-SA 4.0 26980,2,,26973,3/24/2021 16:15,,1,,"

There is an implementation of the paper ("Adversarial Sparse Transformer for Time Series Forecasting"), in Python using Pytorch, here. Although it has the training and evaluation functionality implemented, it appears to be lacking a function for running a prediction. Maybe you can fork it and extend it.

UPDATE

There is also a paper, "Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting", by Zhou et al., which does forecasts on univariate and multivariate data. Their code is here.

",5763,,5763,,3/25/2021 18:49,3/25/2021 18:49,,,,0,,,,CC BY-SA 4.0 26982,1,,,3/24/2021 18:16,,2,1051,"

I am reading a paper on the K-WL GCN. I have not finished the paper yet, I just skimmed over it. I am trying to understand the K-WL test (page 3, Weisfeiler-Leman Algorithm). I think my understanding is quite ambiguous, so I am looking for an example problem that is solved using the K-WL test, but I can't find any on the web.

Does anyone have any solved example problem on K-WL or can anyone explain to me how the K-WL test works?

Note: If anyone also explains how K-WL GCN uses the K-WL test, I will be thankful.

",28048,,2444,,10/19/2021 12:40,10/19/2021 12:40,How does the K-dimensional WL test work?,,1,0,,,,CC BY-SA 4.0 26984,1,,,3/24/2021 20:54,,1,71,"

For a random scattering of points, in a bounded area, the goal is to find the largest circle that can be drawn inside those same bounds that does not enclose any points. Solving this problem with a genetic algorithm requires deciding how to encode as a genome, information sufficient to represent any solution. In this case, the only information we need is the center point of the circle, so our genome of point $p_i$ will look like $(x_i, y_i)$, representing the Cartesian coordinates.

In this case, what does each of the genetic operators mean for this simplistic genome? Geometrically speaking, what would a 1-point crossover look like in this case? What about mutation?

This is my answer, but I am not sure.

Consider two individuals with 2 variables each (2 dimensions), $p_1=(x_1, y_1)$ and $p_2=(x_2, y_2)$. For each variable, the parent who contributes its variable to the offspring is chosen randomly with equal probability. Geometrically speaking, a 1-point crossover would represent a quadrilateral in the Cartesian space, where one of the diagonals is formed by the parents and the other one by the offspring $c_1=(x_1, y_2)$ and $c_2=(x_2, y_1)$.

On the other hand, a mutation operator is an r-geometric mutation under the metric $d$ if all its offspring are in the $d$-ball of radius $r$ centered at the parent.

The radius (fitness function) would be the distance between the center (genome) and the closest point (star) from the random points in the bounded area.

",37540,,37540,,3/26/2021 16:36,3/26/2021 16:36,How should the 1-point crossover and mutation be defined for the problem of finding the largest circle that does not enclose any point?,,0,11,,,,CC BY-SA 4.0 26985,2,,26971,3/24/2021 22:26,,0,,"

I'm new to all this so take what I say with a grain of salt and not as fact, I don't have any formal education or training. I believe when you're referring to inversion predictions, you're not overthinking you're underthinking. For anything to have value it must also have an inverse or else there's no way to cognitively perceive it (contrast) otherwise you're looking at a white paper against white paper. Now since you're referring to data set prediction, you need to define it linearly (x, y) or f(x) to scale and plot. Therefore x and y BOTH must retain inverse proportionate values (I made up that term) in order to, in the context of assigning value, exist. So you need to have 4 quadrants of data for predictive, so now you're looking at quantum data processing in order to facilitate predictions in a non-linear context. Use a matrix, I believe Diroches Matrix should be applicable here. Also, remember that predictions are always changing and updating based on empirical and real-time data, so don't get your programming stuck in an ONLY RIGHT NOW mindset, matrices are designed to be constantly moving and evolving. Therefore your z-axis should always retain a state of variability, or it should always be Z don't attach a value to it. Good luck. I'm jealous I would love to ACTUALLY be working on something cool :/

",45681,,36737,,3/31/2021 22:20,3/31/2021 22:20,,,,0,,,,CC BY-SA 4.0 26986,1,,,3/24/2021 23:43,,0,48,"

I read Yann LeCun's paper Efficient BackProp, which was published in 2000. I looked for similar but more recent papers on Arxiv, but I have not yet found any.

Are there relatively new research papers that describe how to make back-propagation more efficient?

So, I am looking for papers similar to Efficient Backprop by LeCun but newer. The papers could describe why ReLU now "dominates" tanh or even sigmoid (but tanh was Yann's favorite, as explained in the paper). ReLU is just one thing I am interested in, but the paper could also analyze e.g. the inputs from a statistical standpoint.

",45683,,2444,,3/26/2021 16:29,3/26/2021 16:29,Are there relatively new research papers that describe how to make back-propagation more efficient?,,1,1,,,,CC BY-SA 4.0 26987,2,,26977,3/24/2021 23:52,,3,,"

There are a few issues you need to address first.

  1. Normalise your data. You should try and keep your values for each input in a good range, otherwise you're never going to train anything useful. A simple way of doing this could be to divide each value by the maximum value for that input. This will ensure they are between 0 and 1, or you could divide by the average so you get approximately 1. Whatever the case, you need to do something about decreasing the magnitude of your inputs (also make sure you are consistently scaling your data, especially during validation and training, otherwise your data will likely be misinterpreted by your model). See the sketch at the end of this answer.
  2. Optional: Augment your data. You have a very small dataset to work with here, if there's any way you can augment this to make it larger, you will likely get much better performance on validation and testing sets. Augmenting is just creating new data that is still representative of the task (as a visual example, you might augment your image dataset by flipping some images and including those in training as well as the unflipped versions)
  3. Split your data into Training, Validation and Testing sets. Training is used for what you expect - training. Validation is used to validate your model on unseen data. The reason for the testing set is obviously you are going to try your hardest to tweak and modify the model so it performs best on your validation set, but this incurs a kind of bias as you are only tweaking for this specific set of data, which is not completely representative of the real world. So try your hardest to perform best on the validation set by changing model hyper-parameters, but confirm it's real use on the testing set. As a recommended split for your entire data set, I have had success with 10% testing, 20% validation and 70% training, but you might be better off with a different split.
  4. Test different models. Train your network for a small amount of time (say only 10 or so epochs) and see how it performs based on the validation set. Keep in mind because your dataset is particularly small (only ~300 samples in your training set) you don't need to train for many epochs at all to get a good representation of how your model will perform (an epoch being a complete loop over all training data). Make sure when you're doing this you're training in batches.
  5. Try different regularization methods. One of the primary ones is good initialization. Make sure you're initializing your parameters with something like Xavier initialization or similar. Other suggestions include: dropout (though you said you already tried this, maybe try different values for dropout), weight decay and batch normalization layers. To be honest though, I think one of the 4 steps above will solve this issue, so you likely won't need to do any of these.

I may have made some points for things you have already done as there was a lot of code for me to look over in your source. But the biggest issue I noticed was that your data wasn't normalised at all and that usually creates many issues, so definitely try that first if you haven't already.
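To make points 1 and 3 concrete, here is a minimal sketch (numpy and scikit-learn; the arrays X and y are placeholders for your own features and labels):

import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 1000, size=(429, 8))   # placeholder features
y = rng.integers(0, 2, size=429)          # placeholder labels

# Point 3: roughly 70% train, 20% validation, 10% test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=1/3, random_state=0)

# Point 1: per-feature max scaling, computed on the training data only and
# applied consistently to all splits.
scale = X_train.max(axis=0)
X_train, X_val, X_test = X_train / scale, X_val / scale, X_test / scale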

",26726,,,,,3/24/2021 23:52,,,,0,,,,CC BY-SA 4.0 26989,1,,,3/25/2021 2:11,,2,59,"

Given a history of belief states, is there a common method that backtracks the most likely path of ending up in the current belief state?

I have a Markov model which calculates belief states after every step. The belief state is a representation of the most likely states one could be in. A belief state may look like this:

$$b=[1,0,0,0,0],$$ where I am in the state $s_0$ with 100% certainty.

I can store the belief state history like $b_0, b_1, b_2,\dots, b_n$.

Is there a common way to represent and estimate the most likely states one has been in?

A naive approach could be to just look for the state with the highest value per belief state and take that as the node along the reverse path. But I am not confident that this is a common and good practice, as it does not consider the fuzziness that comes with a belief state. Then again, if I took all states with probability greater than 0, I might not know which state leads to which state and whether that transition is even possible.

",27777,,2444,,1/1/2022 22:32,1/27/2023 2:05,Is there a way of path reconstruction using only the history of belief states?,,1,0,,,,CC BY-SA 4.0 26992,2,,26986,3/25/2021 5:59,,1,,"

Here is a paper that explains why ReLU rules.

What we want is to disentangle data of different classes. In order to do that, we need a discontinuous mapping for the data. ReLU easily allows for that. It is even better than LeakyReLU, sigmoid and tanh in that regard. Also, the reason any of the activations work is because of floating-point error: there is inadvertently a discontinuous mapping for the whole data. I have also explained it here.

",37203,,,,,3/25/2021 5:59,,,,4,,,,CC BY-SA 4.0 26993,1,26995,,3/25/2021 7:03,,0,93,"

This is a 600*800 image.

Which algorithm/model should I use to get an image like the one below, in which each key is detected and labeled by a rectangle?

I guess this is some kind of a segmentation problem where U-net is the most popular algorithm, though I don't know how to apply it to this particular problem.

",45689,,2444,,3/25/2021 9:54,4/22/2021 1:09,Should I use U-net to label keys in a keyboard image?,,2,0,,,,CC BY-SA 4.0 26995,2,,26993,3/25/2021 10:08,,1,,"

If you just need to draw a rectangle around each key, this is an object detection or template matching problem, so you can use any of the available models for object detection (e.g. YOLO) or any technique for multi-template template matching (e.g. you can use sequential RANSAC or t-linkage). In the first case, you will need a labeled dataset, while, in the second case, you will need the original image and the templates (in your case, a template would be an image of a key).

So, no, this is not a segmentation problem (which would be the task of classifying each pixel in the objects of interest, and not just locating the objects).
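If you want to experiment quickly, here is a minimal sketch of plain single-template matching with OpenCV (normalized cross-correlation, which is simpler than the sequential RANSAC or t-linkage methods mentioned above; the file names are hypothetical):

import cv2
import numpy as np

image = cv2.imread("keyboard.png", cv2.IMREAD_GRAYSCALE)   # full keyboard image
template = cv2.imread("key.png", cv2.IMREAD_GRAYSCALE)     # image of a single key
h, w = template.shape

scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
ys, xs = np.where(scores >= 0.8)   # keep locations with a high match score

for x, y in zip(xs, ys):
    cv2.rectangle(image, (x, y), (x + w, y + h), 255, 1)
cv2.imwrite("keys_detected.png", image)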

",2444,,2444,,4/22/2021 1:09,4/22/2021 1:09,,,,0,,,,CC BY-SA 4.0 26996,2,,26993,3/25/2021 10:26,,1,,"

There are two related problems for images

  1. Semantic segmentation, where you need to assign each pixel in the image to some class. E.g., you have a satellite image and want to segment roads/forests/fields and so on
  2. Object detection, where you need to detect different types of objects and draw a bounding box for each. E.g., there is a popular dataset, MS COCO, for this task, where you need to localize all bikes/people/cats/etc. in the image

U-net is good for the first task, but I would say you have the second one. You can use something like YOLOv3 (if you need fast inference) or Fast R-CNN (if you need precision). If you need really good performance, you can browse the top methods for each task on paperswithcode.com: semantic segmentation, object detection

",16940,,16940,,3/25/2021 10:41,3/25/2021 10:41,,,,0,,,,CC BY-SA 4.0 26997,1,,,3/25/2021 10:28,,2,49,"

I just started to read the PGM book written by Daphne Koller.

In the chapter on Bayesian Network Representation (Chapter 3), there are some descriptions of the standard parameterization of the joint distribution corresponding to $n$-trial coin tosses.

The book also says,

Here I'm very confused about the meaning of "$2^n$ parameters". In terms of a random variable or probability distribution, a parameter means a characteristic of the distribution. But "parameter" in this paragraph sounds like $O(2^n)$ space complexity, because the book also says that we can reduce the space of all joint distributions to $n$ dimensions by using the expression $\prod_{i} \theta_{x_{i}}$.

So, what's the meaning of parameter in this context? Does it mean space complexity for computation of the joint distribution?

",45696,,2444,,3/25/2021 10:42,10/4/2022 20:01,"In Probabilistic Graphical Model (written by Daphne Koller), what's the meaning of ""parameter"" in representation of the distribution?",,1,0,,,,CC BY-SA 4.0 26998,1,,,3/25/2021 10:44,,0,109,"

At work there is an idea of solving a problem with machine learning. I was assigned the task to have a look at this, since I'm quite good at both mathematics and programming. But I'm new to machine learning.

In the problem a box would be discretized into smaller boxes (e.g. $100 \times 100 \times 100$ or even more), which I will call 'cells'. Input data would then be a boolean for each cell, and output data would be a float for each cell. Thus both input and output have dimensions of order $10^6$ to $10^9$.

Do you have any recommendations about how to do this? I guess that it should be done with a ConvNet since the output depends on relations between close cells.

I have concerns about the huge dimensions, especially as our training data is not at all that large, but at most contains a few thousands of samples.


Motivation

It can be a bit sensitive to reveal information from a company, but since this is a common problem in computational fluid dynamics (CFD) and we already have a good solution, it might not be that sensitive.

The big boxes are virtual wind tunnels, the small boxes ('cells' or voxels) are a discretization of the tunnel. The input tells where a model is located and the output would give information about where the cells of a volume mesh need to be smaller.

",45694,,3171,,3/25/2021 14:41,3/27/2021 15:31,Huge dimensionality of input and output — any recommendations?,,0,11,,,,CC BY-SA 4.0 26999,2,,26989,3/25/2021 10:56,,1,,"

The belief state in a POMDP is a distribution over the hidden state given all past actions and observations, i.e., at time $k$, the belief state is $b_k(s_k) \triangleq P(s_k \mid a_{0:k-1}, z_{1:k})$, where $a$ and $z$ are the actions and observations, respectively.

What you are asking about and calling "backtracking" boils down to the question: "given that at time $k$, I know the history of actions $a_{0:k-1}$ and observations $z_{1:k}$, what is the distribution over states at some past time step $t<k$, that is, $P(s_t\mid a_{0:k-1},z_{1:k})$"?

This is commonly known as Bayesian smoothing. You might be familiar with Bayesian filtering, which aims to recover the belief state as defined in the first paragraph - an estimate of the current state given actions and observations up to the current time. Smoothing also uses information gained after time $t$ to estimate the state at time $t$, and hence it is only possible after those actions and observations become known. There is also prediction, where you estimate the distribution over the state in the future.

Smoothing is too broad of a topic to go into detail here, but below is a picture from the book "Bayesian Filtering and Smoothing" by Simo Särkkä that illustrates the concepts. You could start there for further information on the topic.

",45529,,,,,3/25/2021 10:56,,,,0,,,,CC BY-SA 4.0 27000,1,,,3/25/2021 11:49,,0,28,"

My task is to classify into two classes the time series like these shown in the figure.

The figure shows one class on the left sub-figure and second one on the right. The series are shown in pairs for more clarity, but each series on the left (and right) belongs to one of the respective classes. The scale of the right panel is reduced to show all series, but these series are of the same amplitude as the left ones.

Is it possible to apply RNN (or other methods) to classify the series of this kind into two classes?

I have never used neural networks, but I am just looking for an adequate method for this problem.

",41761,,2444,,3/26/2021 10:06,3/26/2021 10:06,Can RNNs be used to classify these time series into two classes?,,0,4,,,,CC BY-SA 4.0 27003,1,,,3/25/2021 13:18,,0,112,"

I am aware of similar questions that have been asked, and I have gone through many. I want to bring my case to SE to understand better what my results are.

I am working with a large dataset (around 75million records), but, for the purpose of testing techniques, I am actually using 2M records. I am working towards malicious traffic identification using NetFlow data. After employing some undersampling to have a balanced dataset according to my target variable (benign or attack) I have 1,240,950 of records in the training set and 310,238 in the validation set. Therefore I believe there is a good amount of data to train a Deep neural network properly.

After using the Yeo-Johnson transform and standardizing the data, I train the network with a very basic model:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation

def basem():
    # 38 input features -> 25 -> 50 -> 50 -> 25 -> 1 (sigmoid) binary classifier
    model = Sequential()

    model.add(Dense(25, input_dim=38))
    model.add(Activation("relu"))

    model.add(Dense(50))
    model.add(Activation("relu"))

    model.add(Dense(50))
    model.add(Activation("relu"))

    model.add(Dense(25))
    model.add(Activation("relu"))

    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer="adam", metrics=['accuracy'])
    return model


model_base = basem()
model_base._name = 'base'

history_base = model_base.fit(X_train, y_train, batch_size=2048,
                              epochs=15, validation_data=(X_val, y_val), shuffle=True)

This gives me the following plot

It may be because I am a newbie, but this plot looks too perfect. It is weird to see validation and training accuracy growing together, although I believe this is what we want, right? But now I have the feeling it is overfitting. Therefore, I use the model and 5-fold cross-validation to understand how well it generalizes. The results (the mean of each metric and its standard deviation in %) are:

test acc: 0.9816503485233088
test_prec: 0.9840033637114158
test_f1: 0.9816046990113001
test_recall: 0.9792384866432975
test_roc_auc: 0.9980004347946355

Dev acc: 0.052931962886091546
Dev prec: 0.2854656099314699
Dev f1: 0.057228805478181974
Dev recall: 0.3597811552056071
Dev roc auc: 0.0036456892671197097

If I understand correctly, accuracy is high which is generally good and the standard deviation is very low for each metric, the highest being 0.359% for recall. Does this mean my model generalizes well?

Edit

Adding dropout (0.3) to each layer yields the following:

Now, my validation accuracy is higher than my training. I can't make sense of any of this.

",45701,,2444,,3/26/2021 10:25,3/26/2021 10:25,Is it possible that the model is overfitting when the training and validation accuracy increase?,,1,0,,,,CC BY-SA 4.0 27004,1,27039,,3/25/2021 13:44,,2,182,"

I do PCA on the data points placed in the corners of a hexagon, and get the following principal components:

The PCA variance is $0.6$ and is the same for each component. Why is that? Shouldn't it be greater in the horizontal direction than in the vertical direction? The data is between $-1$ and $1$ in the $x$-direction but only between $-\sqrt{3}/2$ and $\sqrt{3}/2$ in the $y$-direction. Why does PCA result in components of equal length?

The length of each vector in the picture is twice the square root of the variance.

UPDATE: added more points, the variances changed to $0.477$ but still they are equal.

UPDATE 2: Added even more points, the variances changed to $0.44$ but still they are equal.
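For reference, the first observation can be reproduced with a few lines of numpy/scikit-learn (unit hexagon; scikit-learn's PCA reports the sample variance, i.e. it divides by $n-1$):

import numpy as np
from sklearn.decomposition import PCA

angles = np.deg2rad(np.arange(0, 360, 60))
points = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # the 6 hexagon corners

pca = PCA(n_components=2).fit(points)
print(pca.explained_variance_)   # approximately [0.6, 0.6]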

",15524,,15524,,3/31/2021 2:41,7/14/2022 15:38,Why does PCA of the vertices of a hexagon result in principal components of equal length?,,1,2,,,,CC BY-SA 4.0 27006,2,,26982,3/25/2021 14:54,,3,,"

I have never used k-WL in practice, but I did apply Weisfeiler-Lehman for my graph tasks. As you may know, WL produces a coloring via an iterative procedure that assigns each node a 'color' (basically some kind of label reflecting the node's neighborhood). Counting colors allows you to compare two graphs for isomorphism, but that's not that important here; the key is that you get a kind of feature for each node.

k-WL does a similar thing, but it operates not on the graph itself, but in the space of k-tuples of nodes. Basically, these are pairs or triplets of nodes. Then you define the neighborhood as all pairs/triplets that differ in one element from the target pair/triplet. So, now you have some kind of objects and a defined neighborhood for each of them; that's basically enough to apply the WL coloring.
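To make the 1-dimensional case concrete, here is a minimal sketch of WL color refinement on a small graph; k-WL runs the same loop over k-tuples of nodes, with the neighborhood defined as above:

from collections import Counter

# Small graph as an adjacency list.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

colors = {v: 0 for v in graph}                # start with a uniform color
for _ in range(len(graph)):                   # refine at most |V| times
    signatures = {
        v: (colors[v], tuple(sorted(colors[u] for u in graph[v])))
        for v in graph
    }
    # relabel each distinct (own color, multiset of neighbor colors) signature
    palette = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
    new_colors = {v: palette[signatures[v]] for v in graph}
    if new_colors == colors:                  # stable coloring reached
        break
    colors = new_colors

print(colors)                    # final color per node
print(Counter(colors.values()))  # color histogram used to compare graphs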

Ok, now to the k-GNN part. You run the k-WL procedure on pairs of nodes and get a color for each pair. You use the one-hot encoded colors as input features and perform graph convolution on pairs (with the neighborhood defined as above), thereby getting some kind of embedding for each pair. To get a node classification, you do average pooling over all pairs containing the node and then use a usual fully-connected neural net.

As for examples, the paper authors provide several in the article's repo; you can try them.

",16940,,,,,3/25/2021 14:54,,,,3,,,,CC BY-SA 4.0 27008,2,,27003,3/25/2021 17:05,,1,,"

I'll try to answer the more general questions.

  1. Is it ok that the model performs better on validation than on train?

It's certainly fine if you use techniques like dropout or data augmentation and the difference is not that big, because, in the case of dropout, during training you use only part of the network, while for validation you use the whole network.

  1. I'm suspicious my model is too good. What could I do?

That's a good point, because, in my experience, too-good results are sometimes caused by a flaw in the training. The most common one is data leakage, which means you give the model a way to cheat in an unexpected and unwanted way.

Let me give an example. Let's imagine you try to detect malicious traffic, you have data about sessions, and you split the records into train/val. Let's imagine one of the features is the IP address and you can have multiple sessions from the same IP. If the same IP appears in both train and val for the malicious users, then the model could just remember the 'bad' IPs and thus get a high score on both train and val.

So, to avoid it, you need to think about things like that. If records can be grouped in some meaningful way, make sure you put the whole group either in train or in val. Another thing: it's usually a good idea to split by time as well, e.g. use records from 2018 and 2019 for training and records from 2020 for validation. Thus you not only avoid data leakage, but you also make sure your model is robust for future predictions.
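For illustration, a minimal scikit-learn sketch of a group-aware split (the dataframe and the "ip" column are hypothetical stand-ins for your NetFlow data):

import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.DataFrame({
    "ip":    ["1.1.1.1", "1.1.1.1", "2.2.2.2", "3.3.3.3", "3.3.3.3", "4.4.4.4"],
    "bytes": [120, 80, 5000, 42, 37, 900],
    "label": [0, 0, 1, 0, 0, 1],
})

# All sessions from the same IP end up either in train or in val, never both.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, val_idx = next(splitter.split(df, groups=df["ip"]))
train, val = df.iloc[train_idx], df.iloc[val_idx]
assert set(train["ip"]) & set(val["ip"]) == set()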

More generally, try to understand why you are getting such a good result with such a simple model. Try to squeeze the data even smaller and drop some features, to understand when performance goes down.

Finally, I would recommend the post by Andrej Karpathy, where he gives lots of really good advice on the practical side of training NNs: http://karpathy.github.io/2019/04/25/recipe/

",16940,,,,,3/25/2021 17:05,,,,1,,,,CC BY-SA 4.0 27009,1,,,3/25/2021 18:37,,0,162,"

Regarding the use of pre-processing techniques before using Transformer models, I read this post, which apparently says that these measures are not really necessary and do not interfere much with the final result.

The arguments raised seemed quite convincing to me, but would someone know how to explain better, perhaps with a bibliographic reference, why it is not so necessary to use these techniques?

",38864,,,,,3/25/2021 18:37,Why (not) using pre-processing before using Transformer models?,,0,3,,,,CC BY-SA 4.0 27010,1,27253,,3/25/2021 18:55,,0,301,"

I'm trying to create a neural network to simulate an XOR gate.

Here's my dataset:

╔════════╦════════╗
║ x1, x2 ║ y1, y2 ║
╠════════╬════════╣
║  0, 0  ║  0, 1  ║
║  0, 1  ║  1, 0  ║
║  1, 0  ║  1, 0  ║
║  1, 1  ║  0, 1  ║
╚════════╩════════╝

And my neural network:

I use logistic loss to get the error between target $y_{k}$ and output $\hat{y}_{k}$:

$$ E(y_{k}, \hat{y}_{k}) = - y_{k} \cdot log(\hat{y}_{k}) + (1 - y_{k}) \cdot log(1 - \hat{y}_{k}) $$

And then use the chain rule to update the weights. For example weight $w_{3}$'s contribution to the error is:

$$ \sum_{k=1}^{2} \frac{\partial E(y_{k}, \hat{y}_{k})}{\partial w_{3}} = \sum_{k=1}^{2} \left(\frac{\partial E(y_{k}, \hat{y}_{k})}{\partial \hat{y}_{k}} \cdot \frac{\partial s_{k}}{\partial c_{1}}\right) \cdot \frac{\partial c_{1}}{\partial w_{3}} $$

Which in developed form is:

$$ \sum_{k=1}^{2} \frac{\partial E(y_{k}, \hat{y}_{k})}{\partial w_{3}} = \left( \left( - \frac{y_{1}}{\hat{y}_{1}} + \frac{1 - y_{1}}{1 - \hat{y}_{1}} \right) \cdot \hat{y}_{1} \cdot (1 - \hat{y}_{1}) + \left( - \frac{y_{2}}{\hat{y}_{2}} + \frac{1 - y_{2}}{1 - \hat{y}_{2}} \right) \cdot (- \hat{y}_{2}) \cdot\frac{c_{1}}{c_{1} + c_{2}} \right) \cdot s_{0} $$

My issue is that, after a couple epochs of training on the entire dataset, the network always outputs: $$ \hat{y}_{1} = \hat{y}_{2} = 0.5 $$

What am I doing wrong?

",45708,,2444,,12/12/2021 8:53,12/12/2021 8:54,Why does my neural network to solve the XOR problem always output 0.5?,,1,0,,,,CC BY-SA 4.0 27011,1,27012,,3/26/2021 4:12,,1,597,"

I will start working on a project where we want to optimize the production of a chemical unit through a reinforcement learning approach. From the SMEs, we have already obtained simulator code that can take some input and render the output. Part of the output is our objective function, which we want to maximize by tuning the input variables. From a reinforcement learning angle, the inputs will be the agent's actions, while the state and reward can be obtained from the output. We are currently in the process of building an RL environment, the major part of which is the simulator code described above.

We were talking to an RL expert and she mentioned that one of the things we have conceptually wrong here is that our environment will not have the Markov property, in the sense that it is really a 'one-step process': the process does not continue from the previous state and there is no continuity in state transitions. She is correct there. This made me think: how can we get around this, then? Can we perhaps append some part of the current state to the next state, etc.? More importantly, I have seen RL applied to optimal control in other examples as well that are non-Markovian, e.g. scheduling, TSP problems, process optimization, etc. What is the explanation in such cases? Does one simply assume the process to be Markovian with an unknown transition function?

",28080,,11539,,3/11/2022 7:37,3/12/2022 10:50,Reinforcement Learning for an environment that is non-markovian,,2,1,,3/15/2022 18:00,,CC BY-SA 4.0 27012,2,,27011,3/26/2021 5:23,,2,,"

RL is currently being applied to environments that are definitely not Markovian; maybe they are weakly Markovian with decreasing dependency.

You need to provide details of your problem; if it is a 1-step problem, then any optimization system can be used.

",32390,,,,,3/26/2021 5:23,,,,1,,,,CC BY-SA 4.0 27013,1,27018,,3/26/2021 8:15,,3,83,"

I am reading Sutton & Barto's book "Introduction to Reinforcement Learning". In this book, they define the optimal value function as:

$$v_*(s) = \max_{\pi} v_\pi(s),$$ for all $s \in \mathcal{S}$.

Do we take the max over all deterministic policies, or do we also look at stochastic policies (is there an example where a stochastic policy always performs better than a deterministic one?)

My intuition is that the value function of a stochastic policy is more or less a linear combination of the value functions of the deterministic policies it mixes (however, there are some self-references, so this is not mathematically exact).

If we do look over all stochastic policies, shouldn't we take the supremum? Or do we know, that the supremum is achieved, and therefore it is truly a maximum?

",45213,,2444,,3/26/2021 12:01,3/26/2021 12:01,How is $v_*(s) = \max_{\pi} v_\pi(s)$ also applicable in the case of stochastic policies?,,1,0,,,,CC BY-SA 4.0 27017,2,,26877,3/26/2021 11:33,,3,,"

One of the ways to ask whether these two problems are related is to ask: could we solve math/algebra equations with NLP approaches? The answer is yes, it's an absolutely valid idea and it has been approached by many researchers.

For example, in the "Deep Learning for Symbolic Mathematics" paper by Facebook researchers, an NLP-based approach was used to solve calculus-level math.

Or, in this paper, the authors propose a method to extract the semantics from math word problems and solve them.

In fact, even very simple approaches, like a small LSTM network, can work for simple, strictly stated problems.

",16940,,16940,,3/26/2021 17:00,3/26/2021 17:00,,,,1,,,,CC BY-SA 4.0 27018,2,,27013,3/26/2021 11:34,,2,,"

The value function is defined as $v_\pi(s) = \mathbb{E}_\pi[G_t | S_t = s]$ where $G_t$ are the (discounted) returns from time step $t$. The expectation is taken with respect to the policy $\pi$ and the transition dynamics of the MDP.

Now, as you pointed out, the optimal value function is defined as $v_*(s) = \max_\pi v_\pi(s)\; ; \;\forall s \in \mathcal{S}$. All we are doing here is choosing a policy $\pi$ that maximises the value function; this can be a deterministic or a stochastic policy, though intuitively it is likely to be deterministic, unless for some states there are two (or more) actions with the same expected value, in which case you can take any of said actions with equal probability, thus making the policy stochastic.

For a finite MDP (which is what I assumed above too), we know that an optimal value function exists (this is mentioned in the book) so taking the maximum is fine here.

",36821,,,,,3/26/2021 11:34,,,,4,,,,CC BY-SA 4.0 27019,2,,26966,3/26/2021 11:50,,1,,"

You could try to train a recurrent neural net at the character level. Basically, you take a GRU or LSTM and use a sequence of characters, not tags or words. In the blog post "The Unreasonable Effectiveness of Recurrent Neural Networks" there are examples for Shakespeare, Linux source code in C, and papers in LaTeX code, and the results are quite reasonable, produced from training sets of a size similar to yours. The good thing in the case of HTML pages is that modern browsers are very good at handling slightly broken HTML, so this approach could work for both of your tasks.

",16940,,16940,,3/26/2021 11:56,3/26/2021 11:56,,,,0,,,,CC BY-SA 4.0 27020,2,,26877,3/26/2021 12:42,,4,,"

There isn't, really. Natural language is way more complex and irregular than algebra, which is far more formalised and unambiguous.

So far, in NLP, most success/progress has been made in little toy domains, which exclude most of the complexities of real life, including many ambiguities.

When you say the rules of algebra are somewhat like grammar, then that is because it is essentially a formal language, for which we can specify a grammar. There is currently no complete grammar for any human language (and I doubt there ever will be), let alone a formal one that can be processed by computer.

This was one of the reasons why the first AI boom, where a lot of over-hyped promises were made about being able to translate Russian into English automatically, failed abysmally: natural languages are more than just formal grammars of lexical items.

Stochastic approaches have gone some way towards pragmatic solutions, but when it comes to understanding language they are basically a fudge. And don't get me started on deep learning approaches to NLP.

So the only relationship is that we use the term 'grammar' for the descriptive formalisms in both cases; a formal grammar of algebra would be very different from a grammar for a human language.

This doesn't mean, however, that approaches developed in the field of NLP cannot be applied to algebra: even those which failed in NLP because they were overly limiting. To find out more about this, look for Chomsky Hierarchy -- that describes the different expressive powers of formal languages.

But I would argue that human language is outside of that, because it is not a formal language.

",2193,,,,,3/26/2021 12:42,,,,3,,,,CC BY-SA 4.0 27021,1,28756,,3/26/2021 14:27,,1,85,"

I am looking to write my master's thesis next year about brain-inspired computing. Hence, I am looking to get a good overview of this domain.

Do you know of any comprehensive book that reviews topics in the area of brain-inspired computing (such as spiking neural networks)?

In spirit and scope, it should be similar to Ian Goodfellow's book deep learning.

",45724,,2444,,3/26/2021 17:57,8/19/2021 18:05,Is there any comprehensive book that reviews topics in the area of brain-inspired computing?,,1,0,,,,CC BY-SA 4.0 27024,2,,26920,3/26/2021 18:25,,1,,"

Okay - the answer is here https://explained.ai/matrix-calculus/#sec6.2 and it is pretty involved. Basically, there is a difference between deriving the equation for one neuron and doing it practically for a set of neurons. The answer is matrix calculus. Here is what I could make out; feel free to correct me if I am wrong.

Gradient Vector/Matrix/2D tensor of Loss function w.r.t. Weight

$$ C = \frac{1}{2} \sum_j (y_j-a^L_j)^2 $$

Assuming a neural net with 2 layers, we have the final Loss as

$$ C = \frac{1}{2} \sum_j (y_j-a^2_j)^2 $$

Where

$$ a^2 = \sigma(w^2.a^1) $$

We can then write

$$ C = \frac{1}{2} \sum_j v^2 \quad \rightarrow (Eq \;A) $$

Where

$$ v= y-a^2 $$

Partial Derivative of Loss function w.r.t. Weight

For the last layer, let's use the chain rule to split as below

$$ \frac {\partial C}{\partial w^2} = \frac{1}{2}\,\frac{\partial v^2}{\partial v} * \frac{\partial v}{\partial w^2} \quad \rightarrow (Eq \;B) $$

$$ \frac{\partial v^2}{\partial v} =2v \quad \rightarrow (Eq \;B.1) $$

$$ \frac{\partial v}{\partial w^2}= \frac{\partial (y-a^2)}{\partial w^2} = 0-\frac{\partial a^2}{\partial w^2} \quad \rightarrow (Eq \;B.2) $$

$$ \frac {\partial C}{\partial w^2} = \frac{1}{2} *2v(0-\frac{\partial a^2}{\partial w^2}) \quad \rightarrow (Eq \;B) $$  

Now we need to find $\frac{\partial a^2}{\partial w^2}$

Let

$$ a^2= \sigma(sum(w^2 \otimes a^1 )) = \sigma(z^2) $$ $$ z^2 = sum(w^2 \otimes a^1 ) $$

$$ z^2 = sum(k^2) \; \text {where} \; k^2=w^2 \otimes a^1 $$

We now need to derive an intermediate term which we will use later

$$ \frac{\partial z^2}{\partial w^2} =\frac{\partial z^2}{\partial k^2}*\frac{\partial k^2}{\partial w^2} $$ $$ =\frac {\partial sum(k^2)}{\partial k^2}* \frac {\partial (w^2 \otimes a^1 )} {\partial w^2} $$ $$ \frac{\partial z^2}{\partial w^2} = (1^{\rightarrow})^T* diag(a^1) =(a^{1})^T \quad \rightarrow (Eq \;B.3) $$ For how the above follows, check https://explained.ai/matrix-calculus/#sec6.2

Basically, though these are written like scalars here, all of these are actually partial differentiations of a vector by a vector, or of a vector by a scalar; and a set of vectors can be represented as a matrix here.

Note that the vector dot product $w.a$ when applied on matrices becomes the elementwise multiplication $w^2 \otimes a^1$ (also called Hadamard product)

Going back to $Eq \;(B.2)$

$$ \frac {\partial a^2}{\partial w^2} = \frac{\partial a^2}{\partial z^2} * \frac{\partial z^2}{\partial w^2} $$

Using $Eq \;(B.3)$ for the term in left

$$ = \frac{\partial a^2}{\partial z^2} * (a^{1})^T $$

$$ = \frac{\partial \sigma(z^2)}{\partial z^2} * (a^{1})^T $$

$$ \frac {\partial a^2}{\partial w^2} = \sigma^{'}(z^2) * (a^{1})^T \quad \rightarrow (Eq \;B.4) $$

Now let's go back to the partial derivative of the loss function w.r.t. the weight

$$ \frac {\partial C}{\partial w^2} = \frac {1}{2}*2v(0-\frac{\partial a^2}{\partial w^2}) \quad \rightarrow (Eq \;B) $$ Using $Eq \;(B.4)$ to substitute in the last term

$$ = v(0- \sigma^{'}(z^2) * (a^{1})^T) $$

$$ = v*-1*\sigma^{'}(z^2) * (a^{1})^T $$

$$ = (y-a^2)*-1*\sigma^{'}(z^2) * (a^{1})^T $$

$$ \frac {\partial C}{\partial w^2}= (a^2-y)*\sigma^{'}(z^2) * (a^{1})^T $$
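As a sanity check, here is a small numpy sketch (toy numbers, a single output neuron so that $z^2$ is a scalar) comparing the final expression with a finite-difference gradient:

import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
a1 = rng.normal(size=4)        # activations of the previous layer
w2 = rng.normal(size=4)        # weights of the last layer
y = 1.0                        # target

def loss(w):
    a2 = sigmoid(np.sum(w * a1))       # a2 = sigma(z2), z2 = sum(w2 * a1)
    return 0.5 * (y - a2) ** 2

z2 = np.sum(w2 * a1)
a2 = sigmoid(z2)
# dC/dw2 = (a2 - y) * sigma'(z2) * a1, with sigma'(z) = sigma(z) * (1 - sigma(z))
analytic = (a2 - y) * a2 * (1 - a2) * a1

eps = 1e-6
numeric = np.array([
    (loss(w2 + eps * np.eye(4)[i]) - loss(w2 - eps * np.eye(4)[i])) / (2 * eps)
    for i in range(4)
])
print(np.allclose(analytic, numeric, atol=1e-8))   # True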

",33650,,33650,,3/27/2021 4:32,3/27/2021 4:32,,,,0,,,,CC BY-SA 4.0 27027,1,,,3/27/2021 6:17,,0,186,"

I have encountered this problem on how to predict the probability of a periodically happening event occurring at a given time.

For example, we have an event called being_an_undergrad. There are many data points: bob is an undergrad from (1999 - 2003), Bill is an undergrad from (1900 - 1903), Alice is an undergrad from (1900 - 1905), and there are many other data points such as (2010 - 2015), (2011 - 2013) ....

There are many events(data points) of being_an_undergrad. The lasting interval varies, it might be 1 year, 2 years, 3 years, .... or even 10 years. But the majority is around 4 years.

However, I am wondering given all the data points above. If I now know that Jason starts college in 2021, and how can I calculate/predict the probability that he will still be an undergrad in 2022? and 2023? and 2024 .... 2028, etc.

My current dataset consists of 10000 tuples representing events of different relations. The relations are all continuous relations similar to the example above. There are about 10 continuous relations in total in this dataset, such as isMarriedTo, beingUndergrad, livesIn, etc. For each relation, there are about 1000 data points(1000 durations) about this relation, for example,

<Leo, isUndergrad, Harvard, 2010 - 2011>, <Leo, isUndergrad, Stanford, 2013 - 2016>.....

<Jason, livesIn, US, 1990 - 2021>, <Richard, livesIn, UK, 1899- 1995> ...

My problem now is that I want to get a confidence level(probability) when I want to predict one event happening at a specific time point. For example, I want to predict the probability that event <Jason, livesIn, US, 2068> happens, given:

1.the above datasets which includes info about the relation: livesIn

2.the starting time when Mike lives in US, say he started to live in US since 2030.

I have used normal distribution to simulate, but I am wondering if there are any other better AI / ML / Stats approaches. Thanks a lot!

",45735,,45735,,3/28/2021 14:57,3/28/2021 14:57,Predicting the probability of a periodically happening event occurring at a given time,,0,7,,,,CC BY-SA 4.0 27028,1,27030,,3/27/2021 9:11,,3,296,"

You can fool some of the people all of the time.

This can be represented in FOL as follows

$$\exists x \; \forall t \; (\text{person}(x) \land \text{time}(t)) \Rightarrow \text{can-fool}(x,t) \tag{1}\label{1}$$

Is $\exists x \; \forall t \; \text{can-fool}(\text{person}(x), \text{time}(t))$ equivalent to (\ref{1}) ?

",45740,,2444,,3/27/2021 20:47,3/27/2021 20:47,"Do these FOL formula both represent ""You can fool some of the people all of the time""?",,1,0,,,,CC BY-SA 4.0 27030,2,,27028,3/27/2021 11:13,,1,,"

(1) can be paraphrased as "There exists an x, and for any t if x is a person and t is a time, then x can be fooled at time t" (I would use fool-able instead of can-fool, as it is closer to the intended meaning).

(2) would be "There exists an x, and for any t, you can fool x is a person and t is a time."

They are not equivalent: person(x) and time(t) are boolean predicates, which return a truth value: they are true if x is a person, and t is a time, respectively. So in (1) they act as a constraint on the values that x and t can take. If x was a saucepan, then person(x) would be false, and thus you wouldn't be able to claim that you can fool a saucepan.

So can-fool takes two arguments: one for which person is true, and one for which time is true. But in (2), the arguments are actually the boolean truth values: if x was "Falstaff" and t was "yesterday", then in (1) the premise would be true, as person("Falstaff") and time("yesterday") are true, and so you conclude can-fool("Falstaff", "yesterday").

In (2) that becomes can-fool(person("Falstaff"), time("yesterday")), which evaluates to can-fool(true, true), and that won't work.

",2193,,,,,3/27/2021 11:13,,,,0,,,,CC BY-SA 4.0 27031,1,,,3/27/2021 13:41,,1,34,"

For a neural network model that classifies images, is it better to use normalization (dividing by 255.0) or standardization (subtract the mean and divide by the STD)?

When I started learning convolutional neural networks, I always used normalization because it's simple and effective, but then I started to learn PyTorch and in one of the tutorials https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html they preprocess images like this:

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)

The transform object is created, which includes the Normalize transform, which in turn takes the mean and STD values for each channel.

At first, I didn't understand how this works, but then I learned how standardization works from Andrew Ng's video; however, I didn't find an answer to why it is better to use standardization over normalization or vice versa. I understand that normalization scales inputs to [0, 1], and standardization first subtracts the mean, so that the dataset is centered around 0, and then divides everything by the STD, so that the variance is normalized.
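For example, this is roughly how I would compute the per-channel mean and STD for my own dataset (a sketch with a stand-in dataset of tensors already scaled to [0, 1]):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: in practice this would be a torchvision dataset with
# transforms.ToTensor(), which already scales images to [0, 1].
images = torch.rand(1000, 3, 32, 32)
labels = torch.zeros(1000, dtype=torch.long)
loader = DataLoader(TensorDataset(images, labels), batch_size=64)

n_pixels = 0
channel_sum = torch.zeros(3)
channel_sq_sum = torch.zeros(3)
for batch, _ in loader:
    # batch has shape (batch_size, 3, H, W)
    n_pixels += batch.numel() / batch.shape[1]
    channel_sum += batch.sum(dim=[0, 2, 3])
    channel_sq_sum += (batch ** 2).sum(dim=[0, 2, 3])

mean = channel_sum / n_pixels
std = (channel_sq_sum / n_pixels - mean ** 2).sqrt()
print(mean, std)  # the values you would pass to transforms.Normalize(mean, std)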

Though I know how each of these techniques works (I think I do), I still don't understand why anybody would use one over the other to preprocess images.

Could anybody explain where and why you would use normalization or standardization (if possible, could you give an example)? And as a side question: is it better to use a combined version, where you first normalize the image and then standardize it?

",40591,,2444,,3/29/2021 10:31,3/29/2021 10:31,"For image preprocessing, is it better to use normalization or standartization?",,0,0,,,,CC BY-SA 4.0 27035,1,,,3/27/2021 17:54,,2,386,"

I am working on a Baby Crying Detection model using logistic regression.

Out of $581$ audios, $222$ are of a baby crying. Each audio is of $5$ seconds.

What I have done is convert each audio file into numbers and put those numbers into a .csv file. First I took $100$ samples from each audio, then $1000$ samples, and then all $110250$ samples into a .csv file; at the end of each row was a label, 1 (crying) or 0 (not crying). Then I trained the model using logistic regression on that .csv file.

The problem I am facing is that with $100$ samples I get 64% accuracy, while with $1000$ samples and with all $110250$ samples (the full dataset) it only reaches 66% accuracy. How can I improve the accuracy of my model up to 80% using logistic regression?

I can only use simple logistic regression because I have to deploy the model on Arduino.

",45062,,36737,,3/28/2021 4:28,9/26/2021 19:10,How to get more accuracy of the logistic regression model?,,1,3,,,,CC BY-SA 4.0 27038,1,,,3/27/2021 19:55,,8,3044,"

After looking into transformers, BERT, and GPT-2, from what I understand, GPT-2 essentially uses only the decoder part of the original transformer architecture and uses masked self-attention that can only look at prior tokens.

Why does GPT-2 not require the encoder part of the original transformer architecture?

GPT-2 architecture with only decoder layers

",37519,,37519,,3/27/2021 20:52,1/19/2023 18:50,Why does GPT-2 Exclude the Transformer Encoder?,,3,0,,,,CC BY-SA 4.0 27039,2,,27004,3/27/2021 20:50,,3,,"

Assuming that the $6$ vertices of the hexagon are on the unit circle,

>>> from sympy import *
>>> A = Matrix([[ 1, Rational(1,2),-Rational(1,2), -1, -Rational(1,2), Rational(1,2)], 
                [ 0,     sqrt(3)/2,     sqrt(3)/2,  0,     -sqrt(3)/2,    -sqrt(3)/2]])
>>> A * A.T
Matrix([[3, 0],
        [0, 3]])

Since ${\bf A} {\bf A}^\top - 3 \, {\bf I}_2 = {\bf O}_2$, any two orthogonal directions could be the principal components.

",3171,,3171,,7/14/2022 15:38,7/14/2022 15:38,,,,2,,,,CC BY-SA 4.0 27040,2,,27038,3/27/2021 20:55,,7,,"

GPT-2 is a close copy of the basic transformer architecture.

GPT-2 does not require the encoder part of the original transformer architecture because it is decoder-only: there are no encoder attention blocks, so its decoder is equivalent to the encoder except for the masking in the multi-head attention block, which means the decoder is only allowed to glean information from the prior words in the sentence. It works just like a traditional language model: it takes word vectors as input and produces estimates for the probability of the next word as output, but it is auto-regressive, as each token in the sentence has the context of the previous words. Thus GPT-2 works on one token at a time.

BERT, by contrast, is not auto-regressive: it uses the entire surrounding context all at once. In GPT-2, the context vector is zero-initialized for the first word embedding.
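To illustrate the masking point, here is a small sketch (not GPT-2's actual code) of the causal mask used in decoder-style masked self-attention:

import torch

seq_len = 5
scores = torch.randn(seq_len, seq_len)  # raw attention scores (query x key)

# Causal mask: position i may only attend to positions <= i (the prior tokens).
mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
scores = scores.masked_fill(~mask, float('-inf'))

weights = torch.softmax(scores, dim=-1)
print(weights)  # the upper triangle is 0: no attention to future tokens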

",36737,,,,,3/27/2021 20:55,,,,1,,,,CC BY-SA 4.0 27042,2,,27035,3/27/2021 21:34,,2,,"

Try Rectification

Improve the features available to your model and remove some of the noise present in the data.

  • In audio data, a common way to do this is to smooth the data and then rectify it so that the total amount of sound energy over time is more distinguishable.
  • You can also calculate the absolute value of each time point. This is called rectification, because it ensures that all time points are positive.
  • Smooth your data by taking the rolling mean in a window of, say, 50 samples.
  • Calculating the envelope of each sound and smoothing it will eliminate much of the noise and leave you with a cleaner signal.

# Rectify the audio signal
audio_rectified = audio.apply(np.abs)

# Smooth it with a 50-sample rolling mean
audio_rectified_smooth = audio_rectified.rolling(50).mean()

Calculate Spectrogram

  • Calculate a spectrogram of the sound (i.e., a combination of windowed Fourier transforms). This describes what spectral content (e.g., low and high pitches) is present in the sound over time. There is a lot more information in a spectrogram compared to a raw audio file, so by computing spectral features you get a much better idea of what's going on.

    • This is similar to how we calculate a rolling mean:

      • We calculate multiple Fourier transforms in a sliding window to see how it changes over time. For each time point, we take a window of time around it, calculate a Fourier transform for the window, then slide to the next window. The result is a description of the Fourier transform as it changes throughout the time-series called a short-time Fourier transform or STFT.

        • Choose a window size and shape
        • At a timepoint, calculate the FFT for that window
        • Slide the window over by one
        • Aggregate the results
    • Calculating the STFT

      • We can calculate the STFT with librosa $\rightarrow$ import librosa as lr
      • There are several parameters we can tweak (such as window size)
      • For our purposes, we'll convert into decibels which normalizes the average values of all frequencies.
      • We can then visualize it with the specshow() function

This is how you can calculate the STFT

# Calculating the STFT
# Import the functions we'll use for the STFT
from librosa.core import stft, amplitude_to_db
from librosa.display import specshow

# Calculate our STFT
HOP_LENGTH = 2**4
SIZE_WINDOW = 2**7
audio_spec = stft(audio, hop_length=HOP_LENGTH, n_fft=SIZE_WINDOW)

# Convert into decibels for visualization
spec_db = amplitude_to_db(audio_spec)

# Visualize
specshow(spec_db, sr=sfreq, x_axis='time', y_axis='hz', hop_length=HOP_LENGTH)

Try Spectral feature engineering

You can also perform spectral feature engineering on your baby-crying audio data, since each time series has a different spectral pattern.

  • We can calculate these spectral patterns by analyzing the spectrogram.
  • For example, spectral bandwidth and spectrum centroids describe where most of the energy is at each moment in time.
# Calculate the spectral centroid and bandwidth for the spectrogram
bandwidths = lr.feature.spectral_bandwidth(S=spec)[0]
centroids = lr.feature.spectral_centroid(S=spec)[0]
 
# Display these features on top of the spectrogram
ax = specshow(spec, x_axis='time', y_axis='hz', hop_length=HOP_LENGTH)
ax.plot(times_spec, centroids)
ax.fill_between(times_spec, centroids - bandwidths / 2,centroids + bandwidths / 2, alpha=0.5)

Now you can combine the spectral features (spec $\rightarrow$ the spectrogram dataframe) and temporal features in a classifier:

centroids_all = []
bandwidths_all = []
 
for spec in spectrograms:
    bandwidths = lr.feature.spectral_bandwidth(S=lr.db_to_amplitude(spec))
    centroids = lr.feature.spectral_centroid(S=lr.db_to_amplitude(spec))
    # Calculate the mean spectral bandwidth
    bandwidths_all.append(np.mean(bandwidths))
    # Calculate the mean spectral centroid
    centroids_all.append(np.mean(centroids))
# Create our X matrix
X = np.column_stack([means, stds, maxs, tempo_mean, tempo_max, tempo_std, bandwidths_all, centroids_all])

For your logistic regression model

  • One of the ways to improve accuracy is by optimizing the prediction probability cutoff scores generated by your logit model.
  • You can normalize all your features to the same scale before putting them into a machine learning model.
  • Look for class imbalance in your data.
  • You can also optimize for other metrics, such as log loss and F1-score.
  • Tune the hyperparameters of your model; in the case of LogisticRegression, the regularization parameter $C$ is a hyperparameter (see the sketch after this list).
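Here is a rough sketch of the last two points with scikit-learn (the feature matrix and labels are random stand-ins for your crying/not-crying data):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-ins for your feature matrix and crying/not-crying labels.
X = np.random.rand(581, 20)
y = np.random.randint(0, 2, size=581)

# Scale the features, then search over the regularization strength C,
# scoring with F1 instead of plain accuracy.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
grid = GridSearchCV(pipe,
                    param_grid={'logisticregression__C': [0.01, 0.1, 1, 10, 100]},
                    scoring='f1', cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)

# Tuning the decision threshold instead of using the default 0.5:
probs = grid.predict_proba(X)[:, 1]
preds = (probs > 0.4).astype(int)  # 0.4 is only an example cutoff to tune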
",36737,,36737,,9/26/2021 19:10,9/26/2021 19:10,,,,2,,,,CC BY-SA 4.0 27044,1,,,3/28/2021 3:17,,1,110,"

Models based on transformer architectures (GPT, BERT, etc.) work very well for NLP tasks, including taking an input generated from words and producing probability estimates of the next word as the output.

Can an existing transformer model, such as GPT-2, be modified to perform the same task on a sequence of numbers and estimate the next most probable number? If so, what modifications do we need to perform (do we still train a tokenizer to tokenize integers/floats into token IDs?)?

",45748,,2444,,3/29/2021 10:38,12/19/2022 21:06,Can an existing transformer model be modified to estimate the next most probable number in a sequence of numbers?,,1,1,,,,CC BY-SA 4.0 27045,1,27064,,3/28/2021 4:22,,0,46,"

We have our training set and our test set. When we scale our data, we "fit" the scaler transform to the training set and then scale both the training set and the test set using this scaler object. Using splitting and cross-validation techniques, one can use the training set for both training and validation. Finally, we report results on the test set.

Now, if I want to use a model in a real-life environment, it's common to use the entire dataset (training and test) to train our already optimized model and obtain a final, production-ready model.

My question is regarding scaling. Should we fit the scaler to the entire set and then scale? Or can we simply append the scaled training set and scaled test set (both have been scaled using the training set's scaling parameters)?

I am making use of sklearn.preprocessing.PowerTransformer, using the Yeo-Johnson power transform and also standardizing the data.

",45701,,2444,,3/29/2021 16:35,3/29/2021 16:35,Is the final model scaling done on the full training set?,,1,0,,,,CC BY-SA 4.0 27046,1,,,3/28/2021 5:38,,1,40,"

In the original paper, https://arxiv.org/pdf/1805.08318.pdf, the following scheme of the self-attention module appears:

In a later overview: https://arxiv.org/pdf/1906.01529.pdf

this scheme appears (referring to the original paper):

My understanding aligns more with the scheme in the second paper, where there are two dot-product operations and three learned parameter matrices, $$W_k, W_v, W_q,$$ which correspond to $W_f, W_g, W_h$, whereas $W_v$ is absent from the original paper's explanation, which is as follows:

Is this a mistake in the original paper?

",41535,,,,,3/28/2021 5:38,SAGAN - is there a mistake in the original paper?,,0,2,,,,CC BY-SA 4.0 27047,1,27059,,3/28/2021 7:58,,0,829,"

In this https://pytorch.org/vision/stable/models.html tutorial it clearly states:

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].

Does that mean that, for example, if I want my model to have input size 128x128, or if I calculate a mean and std unique to my dataset, it is going to perform worse or won't work at all? I know that with TensorFlow, if you are loading pretrained models, there is a specific argument input_shape which you can set according to your needs, just like here:

tf.keras.applications.ResNet101(
include_top=True, weights='imagenet', input_tensor=None,
input_shape=None, pooling=None, classes=1000, **kwargs)

I know that I can pass any shape to those (PyTorch) pretrained models and it works. What I want to understand is: can I change the input shape of those models without decreasing my model's training performance?

",40591,,,,,3/29/2021 4:56,Does anybody know what would happen if I changed input shape of pytorch models?,,2,0,,,,CC BY-SA 4.0 27048,1,,,3/28/2021 12:32,,-1,33,"

I am trying to read a PDF file into a Python string and fetch information based on keywords. The text here is completely irregular.

Example of text

Ram has taken an insurance of his premises with total sum insured of INR 256,200,000,000. XYZ company provides an insured with limit of liability of INR 100,250,000 and 90 days indemnity period. Insured with deductible of INR 200,000.

Here I want to find 3 things from this text

  • limit of liability amount
  • Deductible amount
  • Sum insured amount

For example

Limit of liability = 100,250,000

",45762,,2444,,3/29/2021 10:27,3/29/2021 14:53,Extracting values from text based on keywords,,1,1,,,,CC BY-SA 4.0 27051,2,,26958,3/28/2021 14:44,,3,,"

Yes, if the activation function of the network is not zero centered, $y = f(x^{T}w)$ is always positive or always negative. Thus, the output of a layer is always being moved to either the positive values or the negative values. As a result, the weight vector needs more updates to be trained properly, and the number of epochs needed for the network to get trained also increases. This is why the zero centered property is important, though it is NOT necessary.

Zero-centered activation functions ensure that the mean activation value is around zero. This property is important in deep learning because it has been empirically shown that models operating on normalized data, whether inputs or latent activations, enjoy faster convergence.

Unfortunately, zero-centered activation functions like tanh saturate at their asymptotes: the gradients within this region become vanishingly small over time, leading to a weak training signal.

ReLU avoids this problem, but it is not zero-centered. Therefore, all-positive or all-negative activation functions, whether sigmoid or ReLU, can be difficult for gradient-based optimization. To solve this problem, deep learning practitioners have invented a myriad of normalization layers (batch norm, layer norm, weight norm, etc.), and we can also normalize the data in advance to be zero-centered, as in batch/layer normalization.

Reference:

A Survey on Activation Functions and their relation with Xavier and He Normal Initialization

",36737,,18758,,5/31/2022 0:29,5/31/2022 0:29,,,,7,,,,CC BY-SA 4.0 27053,2,,23666,3/28/2021 18:32,,0,,"

I found the reason it wasn't learning. The issue was this line of code:

q_target[torch.arange(states.size()[0]), actions] = rewards + (self.gamma * next_q_vals.max(dim=1)[0]) * (~dones).float()

I had been using the tilde operator to invert uint8 tensors, but I recently updated to the latest version of PyTorch, which seems to have changed how the operator works: it was changing the done values to 255.

Changing to this line fixed it:

q_target[torch.arange(states.size()[0]), actions] = rewards + (self.gamma * next_q_vals.max(dim=1)[0]) * (1 - dones)
",41097,,,,,3/28/2021 18:32,,,,0,,,,CC BY-SA 4.0 27055,1,27060,,3/28/2021 23:02,,0,83,"

I have two models and a file that contains captions for images. The output of model 1 is .pkl files that contain the features of the images. Model 2 is the language model that will be trained with the captions. How can I link the two models to predict a caption for any image? The output of model 1 should be the input of model 2, but the features alone are not enough, so the input of model 2 will be the .pkl files + the caption file. Right?

If someone can help me in getting the link between the two models, I will appreciate it.

",32563,,32410,,5/3/2021 18:56,5/3/2021 18:56,How can Image Caption work?,,1,0,,,,CC BY-SA 4.0 27057,2,,27047,3/29/2021 4:15,,0,,"

In all pre-trained models, the input image has to be the same shape; the transform object resizes the image when you add it as a parameter to your dataset object

transform = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
dataset = datasets.ImageNet(".", split="train", transform=transform)

T.Resize(256) resizes the image so that its shorter side is 256 pixels, then T.CenterCrop(224) takes a 224x224 crop from the centre of the image (a random crop, T.RandomCrop, is sometimes used on training data instead to help with overfitting).

The mean and std are applied to each image before it is used as an input to the model. They were determined when the model was trained: they are the per-channel mean and std of the ImageNet training images.

",21565,,,,,3/29/2021 4:15,,,,0,,,,CC BY-SA 4.0 27059,2,,27047,3/29/2021 4:56,,0,,"

Each machine learning model should be trained with a constant input image shape: the bigger the shape, the more information the model can extract, but it also requires a heavier model.

A model's parameters adapt to the dataset it learns on, which means it will perform best with the input shape it was trained on. Therefore, to answer your question "can I change the input shape of those models without decreasing my model's training performance?", the answer is no: changing the input shape will decrease the performance.

"I know that I can pass any shape to those (pytorch) pretrained models and it works."

$\Rightarrow$ this works because the PyTorch team replaced the final pooling layer with AdaptiveAvgPool2d, so you can pass an image of any shape into the model without errors.
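A small sketch to illustrate the effect (not the actual torchvision code):

import torch
import torch.nn as nn

pool = nn.AdaptiveAvgPool2d((7, 7))

# Feature maps of different spatial sizes all come out as 7x7, so the
# fully connected layer that follows always sees the same input size.
for h, w in [(7, 7), (10, 13), (24, 24)]:
    x = torch.randn(1, 512, h, w)
    print(pool(x).shape)  # always torch.Size([1, 512, 7, 7])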

",41287,,,,,3/29/2021 4:56,,,,3,,,,CC BY-SA 4.0 27060,2,,27055,3/29/2021 5:19,,1,,"

The standard image-captioning pipeline is to train the model one batch (or mini-batch) at a time, i.e., get the features from the CNN image encoder and then feed them, together with the real captions, into an RNN decoder to produce output captions for the image.

The training loop in PyTorch would look something like this:

# zero the parameter gradients
decoder.zero_grad()
encoder.zero_grad()
        
# Forward pass
features = encoder(image)
outputs = decoder(features, captions)
        
# Compute the Loss
loss = criterion(outputs.view(-1, vocab_size), 
                         captions.view(-1))
        
# Backward pass.
loss.backward()
        
# Update the parameters in the optimizer.
optimizer.step()

I'd suggest you go through the paper Show and Tell: A Neural Image Caption Generator.

I also made this Kaggle Kernel implementing the paper from scratch. Should help clear up any other doubts.

",40434,,,,,3/29/2021 5:19,,,,3,,,,CC BY-SA 4.0 27062,1,27066,,3/29/2021 12:05,,0,646,"

Are hill climbing variations (like steepest ascent hill climbing, stochastic hill climbing, random restart hill climbing, local beam search) always optimal and complete?

",,user45792,2444,,3/29/2021 15:16,3/29/2021 15:16,Are hill climbing variations always optimal and complete?,,1,0,,,,CC BY-SA 4.0 27064,2,,27045,3/29/2021 12:42,,0,,"

The short answer is yes.

When you merge the test set into the train set, you try to squeeze the available data to the last drop. The pros and cons of this approach have been considered in other questions on the network 1, 2. But if you decide to go for it, there is no reason not to use the whole dataset for the scaling transformation, as the trend "more data leads (generally) to better models" applies to the scaling to the same extent it applies to the model itself.
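As a minimal sketch with PowerTransformer (the arrays here are just stand-ins for your real training and test features):

import numpy as np
from sklearn.preprocessing import PowerTransformer

# Stand-ins for your already-split feature matrices.
X_train = np.random.rand(100, 5)
X_test = np.random.rand(25, 5)

# Final production model: re-fit the scaler on all available data ...
X_full = np.vstack([X_train, X_test])
scaler = PowerTransformer(method='yeo-johnson', standardize=True)
X_full_scaled = scaler.fit_transform(X_full)

# ... then retrain the already-tuned model on X_full_scaled and keep this
# scaler object to transform incoming production data the same way.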

",16940,,16940,,3/29/2021 13:20,3/29/2021 13:20,,,,0,,,,CC BY-SA 4.0 27065,2,,14082,3/29/2021 13:21,,1,,"

For standard NNs, the extrapolation behavior, an important aspect for financial applications, cannot be controlled due to the complex functional forms typically involved.

Neural Networks with Asymptotics Control discusses how the authors overcome this significant limitation and develop a new type of neural network that incorporates large-value asymptotics, when known, allowing explicit control over extrapolation. This new type of asymptotics-controlled neural network is based on two novel technical constructs:

  1. A Multi-Dimensional Spline Interpolator with Prescribed Asymptotic Behavior
  2. A custom NN layer that guarantees zero asymptotics in chosen directions.

Let $f(x)$ be the function that we want to approximate while preserving its asymptotics. (It goes without saying that the asymptotics need to be known.) As the first step, we find a control variate function $S(x)$ that has the same asymptotics as $f(x)$.

  • Multi-Dimensional Inputs - the most interesting case - these asymptotics could either be known in all directions or only in some; naturally, as much information about the asymptotics as is available should be incorporated into the control variate function $S(x)$.
    • For step one, they show how to construct a universal control variate, a multi-dimensional spline $S(x)$ that has the same asymptotics as $f(x)$; and that can be used in all situations. In specific applications, of course, if more is known about the function $f$; a better choice of the control variate could be available.

    • For step two, they design a custom NN layer that guarantees zero asymptotics in all directions, with fine control over the regions where the NN interpolation is used and where the asymptotics kick in. They approximate the residual function $R(x) = S(x) - f(x)$ with a special NN that has vanishing (zero) asymptotics in all, or some, directions.

One-Dimensional Spline with Asymptotics Control:

The paper discussed that Instead of using the natural boundary conditions, i.e., setting second derivatives to zero at the boundaries, they have fixed the first derivatives at boundary points to arbitrary values. $\color{blue}{\text{Clearly, this paves the way to control asymptotics}}$.

Suppose that we know the behavior of the original, calculation-heavy function in its tails, i.e. $f(x) \simeq f_{-}(x)$ for $x<h_0$ and $f(x) \simeq f_{+}(x)$ for $x>h_{N+1}$, for large negative $h_0$ and large positive $h_{N+1}$. First, we expand the set of spline nodes to include the points $h_0$ and $h_{N+1}$ and make sure the spline passes through them, $S(h_0) = f_{-}(h_0)$ and $S(h_{N+1}) = f_{+}(h_{N+1})$. To finish, they specify first-order derivatives at these new boundary points.

$$S^{\prime}(h_0) = f^{\prime}_{-}(h_0), \qquad S^{\prime}(h_{N+1}) = f^{\prime}_{+}(h_{N+1})$$

This fully specifies the approximating function $\tilde{S}(x)$, $$\tilde{S}(x)=\left\{\begin{array}{ll}f_{-}(x), & x \leq h_{0} \\ S(x), & h_{0}<x<h_{N+1} \\ f_{+}(x), & h_{N+1} \leq x\end{array}\right.$$

One-Dimensional Spline with Asymptotics Control: Continuous Second Derivatives

where they discuss how to construct a control variate function based on a spline like the one above, but with a continuous second derivative at the endpoints. In order to do that, they pick a point between $h_0$ and $h_1$, say $h_{\frac{1}{2}}$, and find, analytically, the value $S(h_{\frac{1}{2}})$ such that the second derivative at $h_{0}$, $S^{\prime \prime}\left(h_{0}\right)= f_{-}^{\prime \prime}\left(h_{0}\right)$, is matched.

Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems. This paper studies the asymptotic properties of multilayer neural network models used for the adaptive identification of a wide class of nonlinearly parameterized systems in a stochastic environment.

A study of the asymptotic behavior of neural networks, where they study neural networks modeled as a set of nonlinear differential equations of the form $T\dot{X}+X=WF(X)+b$, where $X$ is the neural membrane potential vector, $W$ is the network connectivity matrix, and $F(X)$ is the nonlinearity (an essentially sigmoid function). Topologies of neural networks that exhibit asymptotic behavior are established, as the behavior depends solely on the topology of the network. Moreover, the connectivity $W$ need not be symmetric. The simulated behavior of typical neural networks is presented.

References

Neural Networks with Asymptotics Control

Asymptotic Behaviour of Gradient Learning Algorithms in Neural Network Models for the Identification of Nonlinear Systems

A study of the asymptotic behavior of neural networks

",36737,,1671,,9/29/2021 7:09,9/29/2021 7:09,,,,0,,,,CC BY-SA 4.0 27066,2,,27062,3/29/2021 14:47,,1,,"

No, they are prone to get stuck in local maxima, unless the whole search space is investigated.

A simple algorithm will only ever move upwards; if you imagine you're in a mountain range, this will not get you very far, as you will need to go down before going up higher. You can see that going down a bit will have a net benefit, but the search algorithm will not be able to see that.

Random restart (and similar variations) allow you to do that, up to a point. Imagine you have ten people that you parachute over your mountain range, but they can only go upwards. Now you've got a better chance of finding a higher peak, but there's still no guarantee that any of them will reach the highest one.
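To make this concrete, here is a small sketch of random-restart hill climbing on a made-up one-dimensional landscape:

import random
from math import sin

def f(x):
    # A bumpy landscape with several local maxima.
    return sin(x) + 0.3 * sin(3 * x)

def hill_climb(x, step=0.05, iters=1000):
    # Plain hill climbing: only ever accept moves that go upwards.
    for _ in range(iters):
        candidate = x + random.choice([-step, step])
        if f(candidate) > f(x):
            x = candidate
    return x

# Random restarts: "parachute" ten climbers at random places and keep the best peak found.
best = max((hill_climb(random.uniform(0, 10)) for _ in range(10)), key=f)
print(best, f(best))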

",2193,,,,,3/29/2021 14:47,,,,0,,,,CC BY-SA 4.0 27067,2,,27048,3/29/2021 14:53,,0,,"

If the text is really that regular, you can do simple pattern matching: search for the keywords "limit of liability", "deductible", and "sum insured", then take the next numerical value (possibly preceded by "INR") and assign it to the corresponding field.
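A rough sketch of that idea in Python (the regular expression and keyword list are only a starting point and will need tuning for your documents):

import re

text = ("Ram has taken an insurance of his premises with total sum insured of "
        "INR 256,200,000,000. XYZ company provides an insured with limit of "
        "liability of INR 100,250,000 and 90 days indemnity period. "
        "Insured with deductible of INR 200,000.")

keywords = {
    "limit of liability": "Limit of liability",
    "deductible": "Deductible",
    "sum insured": "Sum insured",
}

for key, label in keywords.items():
    # Keyword, then any non-digit characters (lazily), then the first number.
    match = re.search(re.escape(key) + r"\D*?([\d,]+)", text, flags=re.IGNORECASE)
    if match:
        print(label, "=", match.group(1))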

However, this is very simplistic and brittle, and will fail if the texts are more varied. You might still get quite far with it, and you can always add more variations of the keywords to it.

Note that I would not advise that for general text analysis, as it's far too primitive.

",2193,,,,,3/29/2021 14:53,,,,0,,,,CC BY-SA 4.0 27068,2,,14082,3/29/2021 15:52,,1,,"

A simpler answer is that for a standard neural net, the asymptotic behaviour is the asymptotic behaviour of the output neurons. For example, if the output layer is ReLUs, then the asymptotic behaviour is necessarily linear.

In your case, since you want it to be asymptotically constant, you can use the slightly old-fashioned choice of sigmoid units in the output layer.

The training set is necessarily finite, so no machine learning method can learn asymptotic behaviour. It can only be supplied by prior knowledge, for example as I describe.

",12269,,,,,3/29/2021 15:52,,,,0,,,,CC BY-SA 4.0 27069,2,,27044,3/29/2021 15:59,,0,,"

To answer this, you need some constraints on the problem. Here are some sequences of numbers. No machine learning technique could be expected to learn all of them:

  • the odd numbers
  • the primes
  • numbers expressed in digits, but listed in alphabetical order of their name in German
  • numbers listed in the lexical order of the reverse of their representation in base 3
  • the phone numbers in the Manhattan phone directory, listed in alphabetical order of subscriber
",12269,,,,,3/29/2021 15:59,,,,5,,,,CC BY-SA 4.0 27071,2,,22166,3/29/2021 16:03,,0,,"

For a regressor, it can work fine to have an output layer that is linear.

The composition of two linear functions is also linear, so in a deep neural net, if all layers are linear it can only learn a linear function. As Daniel B explains, XOR is a good example of a function with no useful linear approximation.
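A quick numerical illustration of why stacking linear layers adds no expressive power (random matrices, no biases):

import numpy as np

W1 = np.random.rand(4, 3)   # first "linear layer"
W2 = np.random.rand(2, 4)   # second "linear layer"
x = np.random.rand(3)

# Applying the two layers in sequence is identical to one linear layer W2 @ W1.
print(np.allclose(W2 @ (W1 @ x), (W2 @ W1) @ x))   # True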

",12269,,,,,3/29/2021 16:03,,,,0,,,,CC BY-SA 4.0 27072,2,,24279,3/29/2021 16:08,,0,,"

Even with a binary classifier, one number does not fully represent the behaviour: the confusion matrix has three degrees of freedom. Even more so with a multi-class problem, it is best to print out the whole confusion matrix. Then you can pick up problems like "large class A is well classified, many Bs are wrongly classified as C, and the few Ds are wrongly assigned to A, B, or C".

Even better, printing the confusion matrix helps you think about what the real business goals are: which of these errors matters most in practice?

",12269,,,,,3/29/2021 16:08,,,,0,,,,CC BY-SA 4.0 27073,1,27075,,3/29/2021 20:13,,0,228,"

I am trying to make a big classification model using the coco2017 dataset. Here is my code:

import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
import IPython.display as display
from PIL import Image, ImageSequence
import os
import pathlib
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import cv2
import datetime

gpus = tf.config.list_physical_devices('GPU')
if gpus:
  try:
    # Currently, memory growth needs to be the same across GPUs
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Memory growth must be set before GPUs have been initialized
    print(e)

epochs = 100
steps_per_epoch = 10
batch_size = 70
IMG_HEIGHT = 200
IMG_WIDTH = 200

train_dir = "Train"
test_dir = "Val"

train_image_generator = ImageDataGenerator(rescale=1. / 255)

test_image_generator = ImageDataGenerator(rescale=1. / 255)

train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
                                                           directory=train_dir,
                                                           shuffle=True,
                                                           target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                           class_mode='sparse')

test_data_gen = test_image_generator.flow_from_directory(batch_size=batch_size,
                                                         directory=test_dir,
                                                         shuffle=True,
                                                         target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                         class_mode='sparse')

model = Sequential([
    Conv2D(265, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
    MaxPooling2D(),
    Conv2D(64, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Conv2D(32, 3, padding='same', activation='relu'),
    MaxPooling2D(),
    Flatten(),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(80, activation="softmax")
])

optimizer = tf.keras.optimizers.Adam(0.001)
optimizer.learning_rate.assign(0.0001)

model.compile(optimizer='adam',
              loss="sparse_categorical_crossentropy",
              metrics=['accuracy'])

model.summary()
tf.keras.utils.plot_model(model, to_file="model.png", show_shapes=True, show_layer_names=True, rankdir='TB')
checkpoint_path = "training/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)

cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)

os.system("rm -r logs")

log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=1)

model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
history = model.fit(train_data_gen,steps_per_epoch=steps_per_epoch,epochs=epochs,validation_data=test_data_gen,validation_steps=10,callbacks=[cp_callback, tensorboard_callback])
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
model.save('model.h5', include_optimizer=True)

test_loss, test_acc = model.evaluate(test_data_gen)
print("Tested Acc: ", test_acc)
print("Tested Acc: ", test_acc*100, "%")

I have tried different optimizers like SGD, RMSProp, and Adam. I also tried changing the configuration of the hidden layers, and I tried changing the metric from accuracy to sparse_categorical_accuracy, with no improvement. I cannot go beyond 30% accuracy. My guess is that the MaxPooling is doing something, because I just added it but don't know what it does. Can somebody explain what the MaxPooling layer does and what is stopping my neural network from gaining accuracy?

",44611,,,,,3/30/2021 2:25,Accuracy Not Going Above 30%,,2,0,,,,CC BY-SA 4.0 27075,2,,27073,3/29/2021 23:06,,1,,"

You have two questions in one.

  1. Is it maxpool that ruins the model?

I would say no. Max pooling is a standard operation for convolutional networks: it down-samples the intermediate representation to reduce the necessary computation, improve regularization, and add some degree of translation invariance. Originally, averaging was used to downsample over a few neighbouring pixels, for example averaging 2x2 pixels to one pixel. It was then discovered that max pooling, where you take the maximum value of those 2x2 pixels, often performs better in practice. The way you applied it is fine in general.

  2. Why is the accuracy not that great?

I see two issues here. The first is that the COCO dataset is not a classification dataset; it is an object detection dataset, and there are many objects in the same image. For instance, there is an image with a person on a bicycle and a car behind him: which class should the model assign, a person, a bicycle, or a car? The model can't know. To check whether this is the issue, try top-5 accuracy: it tells you whether the correct answer is among the top 5 guesses of the network. I would also recommend looking at the images and trying to manually guess the class for a few dozen of them; that helps build intuition.
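For example, a top-5 metric can be added to the compile call from your question (a sketch using Keras' built-in metric; model is the model you already defined):

import tensorflow as tf

top5 = tf.keras.metrics.SparseTopKCategoricalAccuracy(k=5, name='top5_acc')
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy', top5])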

The second issue is that your model is not that deep, and 30% accuracy is not bad: a random guess would be around 1%, so your model is doing about 30 times better. You could try models like ResNet; it is still quite fast, but should do noticeably better.

",16940,,16940,,3/29/2021 23:26,3/29/2021 23:26,,,,1,,,,CC BY-SA 4.0 27078,2,,27073,3/30/2021 2:25,,0,,"

Accuracy is a good measure if our classes are evenly split, but is very misleading if we have imbalanced classes. Always use caution with accuracy. You need to know the distribution of the classes to know how to interpret the value.

",,user36770,,,,3/30/2021 2:25,,,,0,,,,CC BY-SA 4.0 27081,1,,,3/30/2021 14:15,,3,251,"

The book from Sutton and Barto, Reinforcement Learning: An Introduction, define a model in Reinforcement Learning as

something that mimics the behavior of the environment, or more generally, that allows inferences to be made about how the environment will behave.

In this answer, the answerer makes a distinction:

There are broadly two types of model:

  • A distribution model which provides probabilities of all events. The most general function for this might be $p(r,s'|s,a)$ which is the probability of receiving reward $r$ and transitioning to state $s'$ given starting in state $s$ and taking action $a$.

  • A sampling model which generates reward $r$ and next state $s'$ when given a current state $s$ and action $a$. The samples might be from a simulation, or just taken from history of what the learning algorithm has experienced so far.

The main difference is that in sampling models I only have a black box, which, given a certain input $(s,a)$, generates an output, but I don't know anything about the probability distributions of the MDP. However, having a sampling model, I can reconstruct (approximately) the probability distributions by running thousands of experiments (e.g. Monte Carlo Tree Search).

On the other hand, if I have a distribution model, I can always sample from it.

I was wondering if

  1. what I wrote is correct;

  2. this distinction has been remarked in literature and where I can find a more in-depth discussion on the topic;

  3. someone has ever separated model-based algorithms which use a distribution model and model-based algorithms which use only a sampling model.

",45799,,2444,,12/19/2021 19:25,1/3/2022 13:31,What is the difference between a distribution model and a sampling model in Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 27083,1,,,3/30/2021 14:38,,1,103,"

I have been trying to use MicroMLP to teach a small neural network to converge to correct results. Ultimately, I want to have three outputs, one which is high priority (X must be as close to XTarget as possible; Y and Z must be within bounds, and should approach YTarget and ZTarget as best as possible). Right now, I'm just trying to get convergence of one variable to understand this library.

The below code works for xor, but I don't really understand why it works, or how to extend this to reward behavior:

from microMLP import MicroMLP
import utime
import random
import machine
import gc

DEPTH=3
mlp = MicroMLP.Create( neuronsByLayers           = [DEPTH, DEPTH, 1],
                       activationFuncName        = MicroMLP.ACTFUNC_GAUSSIAN,
                       layersAutoConnectFunction = MicroMLP.LayersFullConnect )

nnFalse  = MicroMLP.NNValue.FromBool(False)
nnTrue   = MicroMLP.NNValue.FromBool(True)

led = machine.Pin(25, machine.Pin.OUT)
tl = 0
xor = []
c=0
for i in range(5000):
    if not i % 100:
        led.toggle()
        gc.collect()
        print(" Iteration: %s \t Correct: %s of 10" % (i,c))
        c = 0
    xor.clear()
    xorOut = nnFalse
    for j in range(DEPTH):
        if random.random() > 0.5:
            xor.append(nnTrue)
            xorOut = nnFalse if xorOut == nnTrue else nnTrue
        else:
            xor.append(nnFalse)
    p = mlp.Predict(xor)
    mlp.QLearningLearnForChosenAction(None, xorOut, xor, 0)
    if p[0].AsBool == xorOut.AsBool:
        c += 1

led.off()

print( "LEARNED :" )

c = 0
tries = 0
for i in range(100):
    led.toggle()
    gc.collect()
    xor.clear()
    xorOut = nnFalse
    for j in range(DEPTH):
        if random.random() > 0.5:
            xor.append(nnTrue)
            xorOut = nnFalse if xorOut == nnTrue else nnTrue
        else:
            xor.append(nnFalse)
    tries += 1
    p = mlp.Predict(xor)
    c += 1 if mlp.Predict(xor)[0].AsBool == xorOut.AsBool else 0
 
print( "  %s of %s" % (c, tries) )

del mlp
print(gc.mem_alloc())
gc.collect()
print(gc.mem_alloc())

I'm trying to achieve two goals, first for me to understand, second for the machine to do useful work.

Goal #1: learn to adjust a value properly.

Inputs:

  • Target (0,1)
  • Value (0,1)

Outputs:

  • An adjustment toward Value (-1,1)
    • Possibly this has to be (0,1) so I've considered using the adjustment as adjustment - 0.5 to put it into the (-0.5,0.5) range

I want to reward the network based on the degree to which Value comes closer to Target. (As a special case, if it's impossible to adjust that far given the output, I want to maximize its reward for making the maximum adjustment.) I don't want to have to know the value the adjustment should target; I only want to know that whatever value it gave produced a state I like better, and what that value was. If I could know the correct output, I wouldn't need deep learning.

Goal #2, which I expect to be able to do myself once I get one variable working, is to have several inputs and three outputs. These inputs relate to the current targets and the deviation from those targets. One of the outputs has the highest priority to track toward a target value; the other two should track toward a target value, but are allowed to deviate by some amount with no harm done. If I can just figure out how to use the neural network, I should be able to assemble that.

Does this sound reasonable? Is this the correct tool, or is Q-Learning wrong for this?

Feel free to suggest a better package for regular Python as well, although MicroMLP is the only usable one of which I'm aware for the platform I'm targeting. I'll likely want a much more powerful one that I can use with extra available hardware if present.

If I get something I can work with, I'll write documentation and submit a PR to the MicroMLP repo so nobody has to ask this again.

",45840,,45840,,3/30/2021 19:18,3/30/2021 19:18,MicroPython MicroMLP: How do I reward the program based on state?,,0,0,,,,CC BY-SA 4.0 27085,1,,,3/30/2021 16:26,,0,139,"

In "Introduction to Reinforcement Learning" (Richard Sutton) section 13.3(Reinforce algorithm) they have the following equation:

\begin{align} \nabla_{\theta}J &\propto \sum_s \mu(s) \sum_a q_{\pi}(s,a)\nabla_{\theta}\pi(a|s,\theta) \\ &= E_{\pi}[\sum_a q_{\pi}(S_t,a) \nabla_{\theta}\pi(a|S_t,\theta)] \tag{1}\label{1} \end{align} But in my opinion equation 1 should be expectation over state distribution: $$E_{\mu}[\sum_a q_{\pi}(S_t,a) \nabla_{\theta}\pi(a|S_t,\theta)]$$ If I am right here then the rest of the lines follows like this: \begin{align} \nabla_{\theta}J &= E_{\mu}[\sum_a q_{\pi}(S_t,a) \nabla_{\theta}\pi(a|S_t,\theta)] \\ &= E_{\mu}[\sum_a \pi(a|S_t,\theta) q_{\pi}(S_t,a) \frac{\nabla_{\theta}\pi(a|S_t,\theta)}{\pi(a|S_t,\theta)}] \\ &= E_{\mu}[E_{\pi}[q_{\pi}(S_t,A_t)\frac{\nabla_{\theta}\pi(A_t|S_t,\theta)}{\pi(A_t|s,\theta)}]]\\ &= E_{\mu}[E_{\pi}[G_t\frac{\nabla_{\theta}\pi(a|S_t,\theta)}{\pi(a|S_t,\theta)}]] \end{align} Now the final update rule using stochastic gradient descent will be: $$\triangle \theta = \alpha E_{\pi}[G_t\frac{\nabla_{\theta}\pi(A_t|S_t,\theta)}{\pi(A_t|S_t,\theta)}] \tag{2}$$ I think I am doing something wrong here because this equation 2 does not match with the book also with other materials. Can anyone please show me where I am doing wrong?

",28048,,2444,,1/14/2022 0:28,1/14/2022 0:28,"How to simplify policy gradient theorem to $E_{\pi}[G_t \frac{\nabla_{\theta}\pi(a|S_t,\theta)}{\pi(a|S_t,\theta)}]$?",,1,2,,,,CC BY-SA 4.0 27086,1,,,3/30/2021 17:48,,3,219,"

I found many tutorials and posts on how to solve RL environments with discrete action spaces using the cross-entropy method (e.g., in this blog post for the OpenAI Gym frozen lake environment). However, now I have built my first custom environment, which simulates a car driving on a road with leading and following vehicles. I want to control the acceleration of my vehicle without crashing into anyone. The state consists of the velocity and distance to the leading and following vehicles. The observation and action spaces are continuous and not discrete, which is why I cannot implement my training loop like in the examples that use the cross-entropy method. That is because the method relies on modifying each training tuple <s, a, r> (state, action, reward) so that the probability distribution in a is equal to 1 in one dimension and equal to 0 in all others (meaning it is very confident in its action, i.e., [0, 1, 0, 0]).

How do I implement the cross entropy method for a continuous action space (in Python and Pytorch) or is that even possible? The answer to this question probably describes what I want to do in a very mathematical form, but I do not understand it.

",45843,,,,,3/30/2021 17:48,How do I implement the cross-entropy-method for a RL environment with a continuous action space?,,0,0,,,,CC BY-SA 4.0 27087,2,,27085,3/30/2021 18:10,,1,,"

When the authors go from $$\nabla_{\theta}J \propto \sum_s \mu(s) \sum_a q_{\pi}(s,a)\nabla_{\theta}\pi(a|s;\theta)\;$$ to $$\nabla_{\theta}J = E_{\pi}\left[\sum_a q_{\pi}(S_t,a) \nabla_{\theta}\pi(a|S_t;\theta)\right]\;$$ they are simply taking an expectation where the only random variable is the state $S_t$. This is because, as they say in the book, the Policy Gradient Theorem is a sum over a state distribution, therefore it can be written as an expectation. The expectation is over the states wrt the state distribution; the $\pi$ subscript here does not mean we are taking expectation with respect to the action policy $\pi$, they use the notation $\pi$ to emphasise that the state distribution is induced by the current policy $\pi$ -- this is what makes REINFORCE an on-policy algorithm because the state distribution with which we take the expectation must come from the current policy $\pi$, not some arbitrary distribution.

Now, as you don't seem to understand how they go from this to the Stochastic Gradient Ascent (SGA) (NOT descent as you keep referring to it as) update rule in the book, I will explain further. As you rightly say we can go from my previous equation to the following $$\nabla_{\theta}J = E_{\pi}\left[G_t \nabla_{\theta}\log\pi(A_t|S_t;\theta)\right]\;$$ by doing some re-arranging and noting that $G_t$ is an unbiased estimate of $q_\pi(s, a)$ when taking expectation over the actions and states. Now, this expectation we have is with respect to the actions and states; your first mistake is that you have made it an expectation over the actions without signifying that the action is now a random variable (maybe you could look up expectations and random variables to understand really what this means).

Your second mistake is not seeing how you can use this gradient to optimise the policy parameters. As we want to maximise this objective, we want to apply SGA to it; that is, we want to perform $$\theta = \theta + \alpha \nabla_\theta J\;;$$ but $\nabla_\theta J$ is an expectation. It would be costly to evaluate this at every single state-action pair that we see in an episode, thus we only perform SGA for one state-action tuple at a time. Now, you might think that this is not correct, but it is in fact an unbiased estimate as we will sample lots of state-action pairs; this kind of estimation is known as Monte Carlo sampling. Therefore, we can write our REINFORCE update rule as

$$\theta = \theta + \alpha G_t \nabla_\theta \log \pi(A_t | S_t; \theta)\;;$$

which is exactly the update rule in the book.

Note that at the start of the chapter in Sutton and Barto they also make it clear that we wish to perform $$\theta = \theta + \alpha \hat{\nabla J(\theta)} \;;$$ where $\hat{\nabla J(\theta)}$ is a stochastic estimate whose expectation approximates the true gradient; this is exactly what we are doing, $G_t \nabla_\theta \log \pi(A_t | S_t; \theta)$ is our stochastic estimate.
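As a sketch, one such stochastic update might look like this in PyTorch (the tiny policy network, the state, and the return value are all made up for illustration):

import torch
import torch.nn as nn

# Toy policy: 4-dimensional state, 2 discrete actions (all values are made up).
policy_net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(),
                           nn.Linear(16, 2), nn.Softmax(dim=-1))
optimizer = torch.optim.SGD(policy_net.parameters(), lr=1e-2)

state = torch.randn(4)                          # a sampled state S_t
probs = policy_net(state)                       # pi(.|S_t; theta)
dist = torch.distributions.Categorical(probs)
action = dist.sample()                          # A_t ~ pi(.|S_t; theta)

G = torch.tensor(1.0)                           # placeholder for the return G_t

# Minimizing -G_t * log pi(A_t|S_t) is one stochastic gradient-ascent step on J.
loss = -G * dist.log_prob(action)
optimizer.zero_grad()
loss.backward()
optimizer.step()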

",36821,,36821,,3/30/2021 23:40,3/30/2021 23:40,,,,1,,,,CC BY-SA 4.0 27090,2,,18439,3/31/2021 2:51,,1,,"

I'm not aware of a direct way for finding the best NN architecture for a given task, but the recommended way, as far as I know, is to devise a network that can overfit the training data, and then apply regularization on top of it.

That way, you can be almost sure you're not underfitting/underperforming due to network capacity.

",32621,,32621,,3/31/2021 13:41,3/31/2021 13:41,,,,0,,,,CC BY-SA 4.0 27091,1,,,3/31/2021 3:10,,2,25,"

Humans are good at guessing animals from zoomed-in images based on patterns of fur/skin. (For example, if we see black-and-white striped fur, it must be a zebra.)

I have some experience guessing a car model from an interior/exterior photo without a brand logo (based on the dashboard, gear lever, air vents, or something like that).

I think it would be helpful for my coworkers to have such a model. (I'm working at a car forum, and I have some limited experience working with TensorFlow).

Is this possible? Where should I start with?

",45855,,,,,3/31/2021 3:10,How can I train a model to recognize object with zoomed-in image?,,0,0,,,,CC BY-SA 4.0 27093,1,,,3/31/2021 8:30,,0,50,"

Humans learn facts about the world like "most A are B" by own experience and by being told so (by other people or texts). The systems and mechanisms of storage and usage of such facts (by an "experience system" and a "declarative system") are presumably quite different and may have to do with "episodic memory" and "semantic memory". Nevertheless at least in the human brain the common currency are synaptic weights, and it would be quite interesting to know how these two systems cooperate.

I assume that machine learning is mainly concerned with "learning by own experience" (= training data + annotations), be it supervised or unsupervised learning. I wonder which approaches there are that allow a neural network to "learn by being told". One brute force approach might be to translate a declarative statement like "most A are B" into a set of synthetic training data, but that's definitely not how it works for humans.

",25362,,25362,,11/12/2021 17:33,11/12/2021 17:33,How to transfer declarative knowledge into neural networks,,1,5,,,,CC BY-SA 4.0 27094,1,,,3/31/2021 8:51,,2,200,"

I wonder if the following equation (you can find it in almost every ML book) refers to a general assumption that we make when using machine learning:

$$y = f(x)+\epsilon,$$

where $y$ is our output, $f$ is e.g. a neural network and $\epsilon$ is an independent noise term.

Does this mean that we assume the $y$'s contained in our training data set come from a noised version of our network output?

",45600,,2444,,3/31/2021 9:57,3/31/2021 9:57,Is the target assumed to be a noisy version of the output of the model in machine learning?,,2,0,,,,CC BY-SA 4.0 27095,2,,27094,3/31/2021 8:56,,3,,"

Not necessarily. The neural network (or whatever else you use) is a model of what you are trying to do, and usually models are not able to perfectly model reality, as it is too complex. A noise term is generally used to represent that, ie the imperfection of the model's relationship with the actual world.

",2193,,,,,3/31/2021 8:56,,,,0,,,,CC BY-SA 4.0 27096,2,,27094,3/31/2021 9:51,,1,,"

That equation is just an assumption that we make about the relationship between a response variable (aka dependent variable) $y$ and a predictor (aka independent variable) $x$, i.e. the response variable (target) is an unknown function $f$ of the predictor $x$ plus some noise $\epsilon$ due to e.g. measurement errors (caused e.g. by damaged sensors). So, if you have a dataset $D = \{(y_i, x_i)\}_{i=1}^N$, you assume that $y_i = f(x_i) + \epsilon, \forall i$. The goal (in supervised learning) is then to estimate $f$ with e.g. a neural network $\hat{f}_\theta$, so the goal is to find a function $\hat{f}_\theta$ such that $\hat{f}_\theta(x_i) = y_i$, so, in practice, you often ignore $\epsilon$ because that is associated with irreducible errors.

You can find that equation on page 16 of the book An Introduction to Statistical Learning. There you will also find more info about the goal of (statistical) supervised learning and why $\epsilon$ is irreducible.

So, the answer to your question is no, given that $f$ there is not the neural network but an unknown function. If your neural network $\hat{f}$ was equal to $f$, then, yes, but, of course, in practice, this will almost never be the case.

",2444,,2444,,3/31/2021 9:57,3/31/2021 9:57,,,,2,,,,CC BY-SA 4.0 27098,2,,27093,3/31/2021 10:11,,1,,"

One way to look at intelligence is that it is a way to compress the universe: we keep short mental representations of meaningful concepts.

For example, if I say "there is a red swan in your building, it's dangerous and can kill you", you already have the concepts of "red", "swan", and "danger", and this easily allows you to add the bird to your classifier "should I beware of it or not". (And you have probably tried to imagine the red swan by now.)

There are similar properties in deep networks, where deeper layers respond to more abstract concepts. For example, if you have a convolutional network classifying faces, the first layers would detect simple shapes like lines and circles, then more complex shapes, and then parts of the face like eyes, nose, and ears (see an example here).

Now, let's imagine you have a classification task "is this thing dangerous or not". You have a trained convolutional model, and it has "red" and "swan" abstract features on some level. You could freeze all the earlier weights and perform a single backpropagation update with a high learning rate only for the subnetwork that is of interest to you.

That could work in theory, but the key challenge is that most neural network representations are not interpretable at all, and you would need to solve that problem first (there is a whole research direction, with workshops and conferences on the topic, with varying success).

",16940,,,,,3/31/2021 10:11,,,,2,,,,CC BY-SA 4.0 27101,1,,,3/31/2021 11:56,,1,62,"

I am troubled by natural gradient methods. If we have a function f(x) we wish to minimize, gradient descent minimizes f(x) of course, but what does the natural gradient do?

I found on https://towardsdatascience.com/natural-gradient-ce454b3dcdfa:

Instead of fixing the euclidean distance each parameter moves(distance in the parameter space), we can fix the distance in the distribution space of the target output.

Where do the distributions come from? If we wish to minimize f(x), the target output is just a minimizer x*, not a distribution, right? Or am I missing something?

",45863,,,,,3/31/2021 11:56,Do Gradient Descent and Natural Gradient solve the same problem?,,0,2,,,,CC BY-SA 4.0 27102,1,,,3/31/2021 13:39,,1,38,"

When we talk of optimization, it usually boils down to gradient descent and its variants in the context of deep learning. However, I wonder if there are some works that use discrete optimization in one way or another in deep learning.

In brief, what are some applications of discrete optimization to deep learning?

",32621,,32621,,4/1/2021 16:11,4/1/2021 16:11,What are some use cases of discrete optimization in Deep Learning?,,0,0,,,,CC BY-SA 4.0 27104,2,,26792,3/31/2021 15:54,,0,,"

OK, I solved this problem. The simple explanation was that the learning rate was too big. I changed the code to this

LR = batch_size/((z+1)*100000)
LR=LR/3

instead of

LR = batch_size/((z+1)*1000)
LR=LR/3

and it seems to work well

",44529,,,,,3/31/2021 15:54,,,,1,,,,CC BY-SA 4.0 27105,1,,,3/31/2021 19:51,,1,37,"

I stumbled upon this passage when reading this guide.

Universality theorems are a commonplace in computer science, so much so that we sometimes forget how astonishing they are. But it's worth reminding ourselves: the ability to compute an arbitrary function is truly remarkable. Almost any process you can imagine can be thought of as function computation.* Consider the problem of naming a piece of music based on a short sample of the piece. That can be thought of as computing a function. Or consider the problem of translating a Chinese text into English. Again, that can be thought of as computing a function. Or consider the problem of taking an mp4 movie file and generating a description of the plot of the movie, and a discussion of the quality of the acting. Again, that can be thought of as a kind of function computation.* Universality means that, in principle, neural networks can do all these things and many more.

How is this true? How can any process be thought of as function computation? How would one compute a function in order to translate Chinese text to English?

",45875,,2444,,4/1/2021 1:18,4/1/2021 14:59,"How can ""any process you can imagine"" be thought of as function computation?",,2,0,,,,CC BY-SA 4.0 27106,2,,27105,3/31/2021 20:06,,2,,"

A function is simply a procedure that maps a particular input to a particular output. You put in $X$, and the function computes $Y$. Those $X$ and $Y$ can take many different forms. It could be mapping one number to another number (convert miles to kilometres), mapping sound to text (name that tune), mapping text to text (translate languages), mapping a video to text (review this movie), or mapping text to an image (draw a picture of $X$). Anytime you have a procedure that produces a fixed output based on a fixed input, it's a function.

Universality theorems guarantee that a neural network can produce an arbitrarily good approximation of any possible function. That doesn't mean it's easy, though - finding the right function that maps $X$ to $Y$ is the hard part.

",2841,,36737,,4/1/2021 14:59,4/1/2021 14:59,,,,0,,,,CC BY-SA 4.0 27107,1,,,3/31/2021 20:15,,1,30,"

I'm coding some stuff for CNNs, just relying on numpy (and scipy just for the convolution operation for pure performance reasons).

I've coded a small network consisting of a convolutional layer with several feature maps, the max pooling layer and a dense layer for the output, so far so good, extrapolating the backpropagation from fully connected neural networks was quite intuitive.

But now I'm stuck when several convolutional layers are chained. Imagine the following architecture:

  • Output neurons: 10
  • Input matrix (I): 28x28
  • First convolutional layer (CN1): 3x5x5, stride 1 (output shape is 3x24x24)
  • First pooling layer (MP1): 2x2 (output shape is 3x12x12)
  • Second convolutional layer (CN2): 3x5x5, stride 1 (output shape is 3x8x8)
  • Second pooling layer (MP2): 2x2 (output shape is 3x4x4)
  • Dense layer (D): 10x48 (fully connected to flattened MP2)

Propagating the error back:

  • Error delta in output layer: 10x1 (cost delta)
  • Error delta in MP2: 3x4x4 (48x1 unflattened, calculating the error delta for the dense layer as usual)
  • Error delta in CN2: 3x8x8 (error delta of MP2 but just upsampled)

How do I continue from here? I don't know how to keep propagating the error to the previous layer: if the error delta in the current layer is 3x8x8 and the kernel is 3x5x5, then performing the convolution between the error delta and the filter to calculate the delta for the previous layer gives a 3x4x4 delta.

",35806,,,,,3/31/2021 20:15,"In a convolutional neural network, how is the error delta propagated between convolutional layers?",,0,0,,,,CC BY-SA 4.0 27108,2,,27105,3/31/2021 20:36,,0,,"

To speak to your question about how Chinese to English translation can be a computation, it first requires a way to turn the base units of translation (tokens) into something computable. One basic way is to define the set of your vocabulary terms and create a gigantic matrix (typically called an embedding) with each column representing a token, as well as one-hot encoded vectors to perform a selection from the matrix.

Say, if my vocabulary is ("apple", "kiwi") and I want 2-dimensional vectors for the tokens (trivially small but manageable example), you'd need a 2x2 (two tokens in two dimensions) random matrix and two one-hot vectors:

  • "apple" = [1 0]
  • "kiwi" = [0 1]

Multiplying [1 0] for instance, by your 2x2 matrix, will "select" the vector in the first column, which represents the token "apple".
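As a tiny sketch of that selection in NumPy (the embedding values below are random placeholders):

import numpy as np

embedding = np.random.rand(2, 2)      # a 2-dimensional vector per token, one column per token
apple_one_hot = np.array([1, 0])      # "apple"
kiwi_one_hot = np.array([0, 1])       # "kiwi"

apple_vector = embedding @ apple_one_hot   # picks out the first column
kiwi_vector = embedding @ kiwi_one_hot     # picks out the second column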

Once you have a randomly initialized embedding matrix, you have to train it to be useful. A relatively common method to do so is to make it the hidden layer in a neural network, mask tokens in the source data, and train the model with gradient descent to "guess" what token was removed. You can also just fully train the embedding as part of training a larger network, but that increases training time significantly.

If you have a lot of sentence pairs of English-Chinese translations, you could train an English/Chinese joint embedding and then further train a neural network that uses it to translate between sentence pairs (but this is not a state of the art method).

Every step of the process after text preparation here is a mathematical operation, so a forward pass of the trained model can be expressed as an equation (though one you could never fully write out by hand), and if we're careful to only choose differentiable operations, then the training of the model as well comes down to solving (many, massive) equations.

",29873,,36737,,3/31/2021 23:43,3/31/2021 23:43,,,,0,,,,CC BY-SA 4.0 27112,1,27113,,4/1/2021 9:56,,1,49,"

I want to predict using the same model as multivariate time series data in a time series prediction problem.

Example:

pa = model.predict(a)
pb = model.predict(b)
pc = model.predict(c)
...
model_ensemble([pa, pb, pc, ...]) -> predict(y)

Can I expect a better performance of our model by using a model ensemble with more kinds of time series data here?

",41045,,2444,,4/1/2021 10:56,4/1/2021 14:06,"In ensemble learning, does accuracy increase depending on the number of models you want to combine?",,1,2,,,,CC BY-SA 4.0 27113,2,,27112,4/1/2021 14:06,,0,,"

Yes, you can. Let's say you have 5 classes named a, b, c, d, e. You fit your data into an SVM classifier and a Random Forest classifier. Assume that the SVM classifies the "a" and "b" classes well and the RFC classifies "c", "d" and "e" well. Then ensembling these two models is going to increase accuracy dramatically. Ensemble learning is especially useful when a single model generates many false positives and false negatives. You can also use weighted ensemble learning methods: set greater weights for reliable models and lower weights for less trustworthy ones.
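As a rough sketch of the weighted-ensemble idea with scikit-learn's VotingClassifier (the dataset, models and weights below are just placeholders):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)

svm = SVC(probability=True, random_state=0)   # probability=True is needed for soft voting
rfc = RandomForestClassifier(random_state=0)

# Give a larger weight to the model you trust more, a smaller one to the weaker model.
ensemble = VotingClassifier(
    estimators=[("svm", svm), ("rfc", rfc)],
    voting="soft",
    weights=[1.0, 2.0],
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))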

",28129,,,,,4/1/2021 14:06,,,,0,,,,CC BY-SA 4.0 27117,1,,,4/1/2021 18:35,,1,41,"

When we perform gradient descent, especially in an online setting where the training data is presented in a non-random order, a particular 1-dimensional parameter (such as an edge weight) may first travel in one direction, then turn around and travel the other way for a while, then turn around and travel back, and so forth. This is wasteful, and the problem is that the learning rate for that parameter was too high, making it overshoot the optimal point. We don't want parameters to oscillate as they are trained; instead, ideally, they should settle directly to their final values, like a critically damped spring.

Is there an optimizer that sets learning rates based on this concept? Rprop seems related, in that it reduces the learning rate whenever the gradient changes direction. The problem with Rprop is that it only detects oscillations of period 2. What if the oscillation is longer, e.g. the parameter is moving in a sine wave with a period of dozens or hundreds of time steps? I am looking for an optimizer that can suppress oscillations of any period length.

Let's be specific. Say that $w$ is a parameter, receiving a sequence of gradient updates $g_0, g_1, g_2, ... $ . I am looking for an optimizer that would pass the following tests:

  • If $g_t = \sin(t) - w$, then $w$ should settle to the value 0.
  • If $g_t = \sin(t) + 100 - 100 \cos(0.00001t) - w$, then $w$ should settle to the value 100.
  • If $g_t = \sin(t) - w$ for $0 < t < 1000000$, and $g_t = \sin(t) + 100 - w$ for $1000000 \leq t$, then $w$ should at first settle to the value 0, and then not too long after time step $1000000$ it should settle to the value 100.
  • If $g_t = \sin(t) - w$ for $\lfloor t / 1000000 \rfloor$ even, and $g_t = \sin(t) + 100 - w$ for $\lfloor t / 1000000 \rfloor$ odd, then $w$ should at first settle to the value 0, then not too long after time step $1000000$ it should settle to the value 100, and then not too long after step $2000000$ it should settle back to 0, but eventually after enough iterations it should settle to the value 50 and stop changing forever after.
",45906,,45906,,4/6/2021 2:27,4/6/2021 2:27,Optimizer that prevents parameters from oscillating,,0,1,,,,CC BY-SA 4.0 27118,1,27138,,4/1/2021 20:40,,6,322,"

So far I've developed simple RL algorithms, like Deep Q-Learning and Double Deep Q-Learning. Also, I read a bit about A3C and policy gradient but superficially.

If I remember correctly, all these algorithms focus on the value of the action and try to get the maximum one. Is there an RL algorithm that also tries to predict what the next state will be, given a possible action that the agent would take?

Then, in parallel to the constant training for getting the best reward, there will also be constant training to predict the next state as accurately as possible? And then have that prediction of the next state always be passed as an input into the NN that decides on the action to take. Seems like a useful piece of information.

",25904,,2444,,4/3/2021 18:19,4/3/2021 18:19,Are there RL algorithms that also try to predict the next state?,,2,1,,,,CC BY-SA 4.0 27119,1,,,4/1/2021 21:44,,1,22,"

I'm taking my first steps in really learning machine learning. As an exercise in my online course, I was asked to code the cost function of a neural network that should solve the handwritten digit recognition problem with digits from 1 to 10.

As most of you know, the cost function of NN is given by:

$$ J(\theta)=\frac{1}{m} \sum_{i=1}^{m} \sum_{k=1}^{K}\left[-y_{k}^{(i)} \log \left(\left(h_{\theta}\left(x^{(i)}\right)\right)_{k}\right)-\left(1-y_{k}^{(i)}\right) \log \left(1-\left(h_{\theta}\left(x^{(i)}\right)\right)_{k}\right)\right] $$

so I tried to code it considering the following information:


where $\mathrm{h_{\theta}}(\mathrm{x^{(i)}})$ is computed as shown in Figure 2 and $K=10$ is the total number of possible labels. Note that $h_{\theta}\left(x^{(i)}\right)_{k}= a_{k}^{(3)}$ is the activation (output value) of the $k$-th output unit. Also, recall that whereas the original labels (in the variable $y$) were $1,2, \ldots, 10$, for the purpose of training a neural network, we need to recode the labels as vectors containing only values 0 or 1, so that

$$ y=\left[\begin{array}{c} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{array}\right],\left[\begin{array}{l} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{array}\right], \ldots . \text { or }\left[\begin{array}{c} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{array}\right] $$

For example, if $x^{(i)}$ is an image of the digit $5$, then the corresponding $y^{(i)}$ (that you should use with the cost function) should be a $10$-dimensional vector with $y_{5}=1$, and the other elements equal to $0$. You should implement the feedforward computation that computes $\mathrm{h_{\theta}}(\mathrm{x^{(i)}})$ for every example $i$ and sum the cost over all examples. Your code should also work for a dataset of any size, with any number of labels (you can assume that there are always at least $K \geq 3$ labels).


plotting the graph, I got:

here's my error cost function code:

function [J grad] = nnCostFunction(nn_params, ...
                                   input_layer_size, ...
                                   hidden_layer_size, ...
                                   num_labels, ...
                                   X, y, lambda)

% Setup some useful variables
 m = size(X, 1)
% bias  = ones(m,1)';
Theta1 = [ ones(401,1)  reshape(nn_params(1:hidden_layer_size * (input_layer_size + 1)), ...
                 hidden_layer_size, (input_layer_size + 1))']';

Theta2 = reshape(nn_params((1 + (hidden_layer_size * (input_layer_size + 1))):end), ...
                 num_labels, (hidden_layer_size + 1));

J =0;
error_history = zeros(1,m);
y_coded = zeros(1,num_labels);
for i = 1:m
    y_coded(y) = 1;
    X_in = [1 X(i,:)]';
    hypotesis_array = sigmoid(Theta2*sigmoid(Theta1*X_in));
    
    for k =1:num_labels
        J = J  -(y_coded(k)*log10(hypotesis_array(k)) -  (1- y_coded(k))*log10(hypotesis_array(k)));
    end
    J =J/m;
    error_history(i) = J;
    
    
end
plot(1:5000, error_history);
ylabel("Error_Value");
xlabel("training iteration");

I did that considering that the weights were previously given. Is it normal to get such a noisy error history, or did I do something wrong in my code?

",36735,,36737,,4/2/2021 19:54,4/2/2021 19:54,Is it normal getting noise values in the error history along training iteration?,,0,0,,,,CC BY-SA 4.0 27120,1,,,4/1/2021 21:44,,2,43,"

Suppose $G_{\phi}:\mathcal{Z}\rightarrow \mathcal{X}$ is a generator (neural network, non-invertible) that can sample from some distribution $\pi$ on $\mathcal{X}$. That is, $G_{\phi}(z)\sim \pi$ when $z\sim \mathcal{N}(0,I)$. Let $\phi+\delta_{\phi}$ represent a (small) perturbation of the parameters of $G_{\phi}$ and let $G_{\phi+\delta_{\phi}}(z)\sim \pi'$ when $z\sim \mathcal{N}(0,I)$.

Are there any results that quantify or bound $\mathcal{D}(\pi,\pi')$ in terms of $\delta_{\phi}$, where $\mathcal{D}$ is a distance measure for distributions (let's say KL-divergence, or the Wasserstein-1 distance)?

Basically, I want to know what kind of geometry is induced on the space of distributions by the Euclidian geometry on the parameter space of a generative adversarial network.

To explain further, let's consider a parametric family of distributions $p_{\phi}$, where $\phi\in\Phi$ (some parameter space). It is a fairly well-known result in statistics that $\text{KL}(p_{\phi}||p_{\phi+\delta_{\phi}})\approx \frac{1}{2}\delta_{\phi}^\top F_{\phi} \delta_{\phi}$, where $F_{\phi}$ is the Fisher information matrix. When the family $p_{\phi}$ is generated by a GAN with parameter $\phi$ (in which case we don't know $p_{\phi}$ in closed-form), can we have an analogous result?

",28286,,28286,,4/5/2021 14:07,4/5/2021 14:07,How does the output distribution of a GAN change if the parameters are slightly purturbed?,,0,4,,,,CC BY-SA 4.0 27121,1,,,4/1/2021 23:06,,1,34,"

I was reading the Proximal Policy Optimization paper. It states the following:

The advantage estimator used is:
$\hat{A}_t=-V(s_t)+r_t+\gamma r_{t+1}+...+\gamma^{T-t+1}r_{T-1}+\color{blue}{\gamma^{T-t}}V(s_T) \quad\quad\quad\quad\quad\quad\quad(10)$
where $t$ specifies the time index in $[0, T]$, within a given length-$T$ trajectory segment. Generalizing this choice, we can use a truncated version of generalized advantage estimation, which reduces to Equation (10) when $λ = 1$:
$\hat{A}_t=\delta_t+(\gamma\lambda)\delta_{t+1}+...+(\gamma\lambda)^{T-t+1}\delta_{T-1}\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad(11)$
where, $\delta_t=r_t+\gamma V(s_{t+1})-V(s_t)\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad(12)$

How does equation (11) reduce to equation (10)? Putting $\lambda=1$ in equation (11), we get:

$\hat{A}_t=\delta_t+\gamma\delta_{t+1}+...+\gamma^{T-t+1}\delta_{T-1}$
Putting equation (12) in equation (11), we get:
$\hat{A}_t$
$=r_t+\gamma V(s_{t+1})-V(s_t) $
$+\gamma[r_{t+1}+\gamma V(s_{t+2})-V(s_{t+1})]+...$
$+\gamma^{T-t+1}[r_{T-1}+\gamma V(s_{T})-V(s_{T-1})]$

$=-V(s_t)+r_t\color{red}{+\gamma V(s_{t+1})} $
$+\gamma r_{t+1}+\gamma^2 V(s_{t+2})\color{red}{-\gamma V(s_{t+1})}+...$
$+\gamma^{T-t+1}r_{T-1}+\color{blue}{\gamma^{T-t+2}} V(s_{T})-V(s_{T-1})$

I understand that the terms cancel out. I am not getting why the blue-colored power of $\gamma$ in the last term differs. I must have made some silly mistake.

",41169,,,,,4/1/2021 23:06,Understanding advantage estimator in proximal policy optimization,,0,0,,,,CC BY-SA 4.0 27123,1,27157,,4/2/2021 1:42,,1,226,"

I read the book "Foundation of Deep Reinforcement Learning, Laura Graesser and Wah Loon Keng", and when I go through the REINFORCE algorithm, they show the objective function:

$$ J\left(\pi_{\theta}\right)=\mathbb{E}_{\tau \sim \pi_{\theta}}[R(\tau)]=\mathbb{E}_{\tau \sim \pi_{\theta}}\left[\sum_{t=0}^{T} \gamma^{t} r_{t}\right] $$

and the gradient of the objective:

$$ \nabla_{\theta} J\left(\pi_{\theta}\right)=\mathbb{E}_{\tau \sim \pi_{\theta}}\left[\sum_{t=0}^{T} R_{t}(\tau) \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right)\right] $$

But when they implement it,

import numpy as np
import gym
import torch
import torch.nn as nn
import torch.optim as optim
from torch.distributions import Categorical

gamma = 0.99  # discount factor used in train(); value assumed, not shown in the snippet

class Pi(nn.Module):
    def __init__(self, in_dim, out_dim):
        super(Pi, self).__init__()
        layers = [
                nn.Linear(in_dim, 64),
                nn.ReLU(),
                nn.Linear(64, out_dim)
        ]
        self.model = nn.Sequential(*layers)
        self.onpolicy_reset()
        self.train()

    def onpolicy_reset(self):
        self.log_probs = []
        self.rewards = []

    def forward(self, x):
        pdparam = self.model(x)
        return pdparam

    def act(self, state):
        x = torch.from_numpy(state.astype(np.float32))
        pdparam = self.forward(x) # (1, num_action), each number represent the raw logits for that specific action
        # the model contains the parameters theta of the policy; pd is the
        # probability distribution parameterized by the model's theta
        pd = Categorical(logits = pdparam)
        action = pd.sample()
        log_prob = pd.log_prob(action)
        self.log_probs.append(log_prob)
        return action.item()

def train(pi, optimizer):
    T = len(pi.rewards)
    rets = np.empty(T, dtype = np.float32)
    future_ret = 0.0
    for t in reversed(range(T)):
        future_ret = pi.rewards[t] + gamma*future_ret
        rets[t] = future_ret

    rets = torch.tensor(rets)
    log_probs = torch.stack(pi.log_probs)
    loss = -log_probs*rets
    loss = torch.sum(loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss

def main():
    env = gym.make('CartPole-v0')
    # in_dim is the state dimension
    in_dim = env.observation_space.shape[0]
    # out_dim is the action dimension
    out_dim = env.action_space.n
    pi = Pi(in_dim, out_dim)
    optimizer = optim.Adam(pi.parameters(), lr = 0.005)
    for epi in range(300):
        state = env.reset()
        for t in range(200): # max timestep of CartPole is 200
            action = pi.act(state)
            state, reward, done, _ = env.step(action)
            pi.rewards.append(reward)
        # env.render(mode='rgb_array')
            if done:
                break
        loss = train(pi, optimizer)
        total_reward = sum(pi.rewards)
        solved = total_reward > 195.0
        pi.onpolicy_reset()
        print(f'Episode {epi}, loss: {loss}, total reward: {total_reward}, solve: {solved}')
    return pi

In train(), they minimize the term that appears inside the gradient, and I cannot understand why that is.

Can someone shed light on that?

I am new to this, so please forgive me if this question is stupid.

",45912,,45912,,4/5/2021 11:05,4/5/2021 11:05,Why does the implementation of REINFORCE algorithm minimize the gradient term but not the loss?,,1,2,,,,CC BY-SA 4.0 27124,2,,22676,4/2/2021 3:14,,4,,"

In QA, it's computed over the individual words in the prediction against those in the True Answer. The number of shared words between the prediction and the truth is the basis of the F1 score: precision is the ratio of the number of shared words to the total number of words in the prediction, and recall is the ratio of the number of shared words to the total number of words in the ground truth.

Source
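As a small sketch of that computation (the function name and example strings are only for illustration):

from collections import Counter

def token_f1(prediction: str, truth: str) -> float:
    pred_tokens = prediction.lower().split()
    truth_tokens = truth.lower().split()
    shared = sum((Counter(pred_tokens) & Counter(truth_tokens)).values())
    if shared == 0:
        return 0.0
    precision = shared / len(pred_tokens)   # shared words / words in the prediction
    recall = shared / len(truth_tokens)     # shared words / words in the ground truth
    return 2 * precision * recall / (precision + recall)

print(token_f1("in the garden", "the garden"))  # precision 2/3, recall 1.0 -> F1 = 0.8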

",45913,,32410,,12/22/2021 23:11,12/22/2021 23:11,,,,0,,,,CC BY-SA 4.0 27125,1,27126,,4/2/2021 8:36,,1,70,"

RNN and LSTM models have many architectures that can be modified. We can also compose their input and output data. However, in the examples that I found on the web, the inputs and outputs of RNNs/LSTMs are usually sequences.

Let's say we have a 3-column dataset:

data = np.array([[1.022, 0.94,  1.278],
                 [2.096, 1.404, 2.035],
                 [1.622, 2.348, 1.909],
                 [1.678, 1.638, 1.742],
                 [2.279, 1.878, 2.045]])

where the first two columns contain the inputs (features) and the third one contains the labels.

Usually, when modeling with feedforward neural networks (FFNNs), the input and output look like this:

Input:

x_input = np.vstack((data[:, 0], data[:, 1])).reshape(5, 2)

[[1.022 2.096]
 [1.622 1.678]
 [2.279 0.94 ]
 [1.404 2.348]
 [1.638 1.878]]

Output:

y_output = np.vstack((data[:, 2])).reshape(5, 1)

[[1.278]
 [2.035]
 [1.909]
 [1.742]
 [2.045]]

When modeling with RNN, the input and output are:

Input:

[[1.022 0.94  1.278]
 [2.096 1.404 2.035]
 [1.622 2.348 1.909]

Output (as a sequence):

 [1.678 1.638 1.742]
 [2.279 1.878 2.045]]

I would like to ask: Is it possible to structure the input and output in the same way as for an FFNN when modeling with an RNN? Would it be correct?

",45648,,45648,,12/27/2021 12:03,12/27/2021 12:03,Can RNNs get inputs and produce outputs similar to the inputs and outputs of FFNNs?,,1,0,,,,CC BY-SA 4.0 27126,2,,27125,4/2/2021 11:33,,1,,"

Yes, it is possible. What you have shown in the case of the ANN is what happens in a regression model using NNs. What you have shown in the case of the RNN is what happens when you are doing sequence-to-sequence translation (like French to English).

If you want to get single values, as in the case of the ANN, say you are doing regression, then, at the end, you flatten the features aggregated by the RNN (in TensorFlow, use a Flatten layer; in PyTorch, you can reshape directly). This should then be followed by a dense layer (TensorFlow) or a linear layer (PyTorch) sized to your number of outputs, which is one in your example.
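To make that concrete, here is a rough sketch in TensorFlow/Keras (the sequence length, feature count and layer sizes below are placeholders, not taken from your data):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3, 2)),                 # 3 time steps, 2 features per step
    tf.keras.layers.SimpleRNN(16, return_sequences=True),
    tf.keras.layers.Flatten(),                           # flatten the RNN features
    tf.keras.layers.Dense(1)                             # single regression output per sample
])
model.compile(optimizer="adam", loss="mse")
model.summary()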

Since you have shown values above 1, I presume you are doing some kind of regression. It would be a good idea to normalize your outputs in that case, as it makes the optimization easier.

If you want to do classification, then in the last layer use a Dense layer with a softmax activation (TensorFlow) or a linear layer followed by a softmax (PyTorch).

",37203,,,,,4/2/2021 11:33,,,,6,,,,CC BY-SA 4.0 27127,1,,,4/2/2021 12:04,,1,396,"

When applying the bellman expectation equation:

$$v(s)=\mathbb{E}\left[R_{t+1}+\gamma v\left(S_{t+1}\right) \mid S_{t}=s\right]$$

to the MRP below, states further away from the terminal state will have the same value $v(s)$ as states closer to the terminal state, even though it is clear that the expected total reward from states further away is lower. If the discount factor $\gamma$ were even lower, states further away would get a higher value. If we now make this an MDP where the agent can decide to go in either direction from all states (with the first state having an action leading to itself), the agent would then choose to go further away from the terminal state, getting less reward over the whole episode. So, this seems to be an example where policy/value iteration would not converge to an optimal policy. I know there is something wrong with the reasoning here; I just cannot seem to figure out what.

What am I missing here?

EDIT: So, the problem actually was that I didn't take into account that the terminal state has to get a value of 0. If you keep it at 0 at all times, this will converge as expected, because all the other states will get lower and lower values, while, assuming a greedy policy, the second-to-last state will retain a value of -1. After a bit over 10 iterations (if gamma is close to 1) it will converge, because the states further away will get a value less than -1.

",45918,,45918,,5/12/2021 13:00,5/12/2021 13:00,Bellman Expectation Equation leading to results where value iteration would not converge to the optimal policy,,1,0,,,,CC BY-SA 4.0 27130,1,,,4/3/2021 0:56,,1,67,"

This article attempts to provide a graphical justification of the universal approximation theorem.

It succeeds in showing that a linear combination of two sigmoids can essentially produce a bounded constant function or step function, and can therefore, to a reasonable degree of approximation, produce any function by splitting it up into a cluster (linear combination?) of these towers or steps.

However, he produced the steps and towers using specific weight parametrizations.

But since when are we allowed to specify weights and biases? Isn't this all out of our hands and in the hands of cost function minimization?

I don't understand why he was dealing with setting weights to this, biases to that, when in my experience that is all done by "the machine" to minimize the cost function. I doubt the weights that minimize the cost function are arranged in the ways specified in order to form the towers and steps that were formed in this tutorial, so I kind of don't understand what all the hubbub is about.

",45875,,2444,,4/3/2021 12:52,4/3/2021 12:52,Issue with graphical interpretation of the universal approximation theorem,,1,0,,,,CC BY-SA 4.0 27131,1,,,4/3/2021 5:24,,2,42,"

I've asked this question before (@ Reddit) and people suggested CNNs on a mel spectrogram more than anything else. This is great.

But I'm sort of stuck at: label some music data as "queen" and "not queen" and have this be the training set. Like, download 300 songs, 70 queen (that's all they have) and 230 not queen, create their mel spectrograms using some python package that can do that.

First of all, is 300 songs even enough?

I only have a basic understanding of what I'm doing. I need some help

",45939,,,,,5/4/2021 14:04,I want to determine how similar a given song is to Queen's songs. Am I headed in the right direction?,,1,2,,,,CC BY-SA 4.0 27133,2,,27130,4/3/2021 8:59,,1,,"

The classical version of the universal approximation theorem states that, roughly, given a continuous function $f \colon [0, 1]^n \to \mathbb{R}$, there exists a single-hidden-layer neural network and a set of weights and biases such that this network approximates the given function $f$ arbitrarily well.

It doesn't say anything about how you obtain such weights: the result is entirely independent of the way you train your network, and what it says is that the set of single-layer neural networks has enough capacity, in principle, to approximate any continuous function arbitrarily well.

This kind of result is fairly common in mathematics: it is shown that a suitable object must exist, but the proof is non-constructive, i.e. it doesn't actually show you how to get that object.

You're indeed correct that the backpropagation algorithm might not be able to find suitable weights. In fact, if you fix an architecture in advance, there might not even be any weights that would lead to a neural network that is a good approximation to $f$.

Why do we care?

Universality tells us, at least in principle, that we should be able to approximate the function we want, if we pick an appropriate architecture. The XOR Problem, for example, affects a type of model called a perceptron (essentially a single neuron neural network), and says that the XOR function cannot be approximated by perceptrons. It is always useful to know the kinds of functions that can and cannot be expressed by a certain type of model.

",44413,,,,,4/3/2021 8:59,,,,5,,,,CC BY-SA 4.0 27134,2,,27127,4/3/2021 9:10,,1,,"

What am I missing here?

You are not missing anything mathematically.

Potentially what you are missing is that the discount factor $\gamma$, is part of the problem definition. In reinforcement learning (RL), you do not always solve problems to obtain the highest total sum of rewards. Instead you solve problems to obtain the highest expected return on any action. If you use a discount factor, then it affects the return calculation, which in turn may affect which behaviours are optimal.

If you choose $\gamma$ such that an agent would move away from a terminal state to receive the highest expected return, then you have defined the problem such that this behaviour is optimal.

This can be an issue when solving RL problems and you have taken a free choice for reward functions and definition of return (this is quite common, often it is the researcher's problem to set these values up to define the agent's goals). Often you can use a discount factor to make a problem more numerically stable, because it will keep sums of rewards over many time steps within bounds. Also, it can be used as a way to search for faster solutions - i.e. in less time steps to reach episode end - because the agent will prioritise reaching higher values sooner if it can. Your example MDP deliberately puts those two effects in opposition to each other.

Assuming you want the agent to find the terminal state by getting over the larger negative reward just before it: Probably you should not use a discount factor for your MDP. Alternatively, if the reward signal is more of a free choice for that problem, you could offset it (by $+0.1$) and then most values of discount factor will still resolve to reaching the terminal state as optimal behaviour.

There are a few common ways to mitigate or avoid the issue of changing problem definition by selecting $\gamma$:

  • Use a value of $\gamma$ derived from the problem. In some cases, the value of $\gamma$ is a realistic physical part of a problem. For instance in financial problems, cash now is often preferable to cash later.

  • Set $\gamma$ to a relatively high value, such as $0.99$ or $0.999$, depending on episode length and relative sizes of rewards. This can work well enough for a wide range of problems. It is a common solution when using DQN. Effectively this approximates the next approach (average rewards) for a low cost.

  • Use average reward settings. There are separate Bellman equations, TD updates etc based on maximising expected average reward per time step as opposed to maximising expected discounted return.

",1847,,1847,,4/3/2021 9:24,4/3/2021 9:24,,,,3,,,,CC BY-SA 4.0 27135,1,,,4/3/2021 9:27,,1,38,"

I'm having trouble understanding an equality that comes up in the original LDA paper by Blei et al.:

Consider the classical LDA model, i.e. for every document $\textbf{w}=(w_1,\ldots,w_N)$ in a text corpus $\mathcal{D}=\{w_1,\ldots,w_M\}$ assume that the document is created as follows$^{\dagger}$:

  1. Choose $N\sim \text{Poisson}(\xi)$.

  2. Choose $\theta\sim\text{Dir}(\alpha)$.

  3. For each of the $N$ words $w_n$:

    (a) Choose a topic $z_n\sim \text{Multinomial}(\theta)$.
    (b) Choose a word $w_n$ from $p(w_n|z_n,\beta)$, a multinomial probability conditioned on the topic $z_n$.

In order to do inference for LDA, the authors use a variational approach with the following graphical model where $\gamma$ and $\phi$ are Dirichlet and multinomial parameters, respectively:

Let us write $p$ for the original LDA distribution and $q$ for the variational one. The equality I don't understand is given in the appendix$^{*}$ of the paper and states that:

$$E_q[\log p(\mathbf{w}|\mathbf{z},\beta)]=\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}\phi_{ni}w_n^j\log \beta_{ij}.$$

My work so far:

We can write the RHS as

$$\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}\phi_{ni}w_n^j\log \beta_{ij}=\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}q(z_n=i)\cdot w_n^j\cdot \log p(w_n^j=1|z_n=i)$$

and the LHS via exchangeability and de Finetti's theorem as

$$E_q[\log p(\mathbf{w}|\mathbf{z},\beta)]=\sum_{n=1}^{N}E_q[\log p(w_n|z_n,\beta)].$$

I now want to obtain the double sum from the right-hand side on the LHS as well. This looks like some sort of expected value with respect to $q$, of a discrete random variable that only depends on the values of $z_n$, conditional on $w_n$ (seemingly, as if $w_n$ was fixed) but the R.V. that we do have on the left-hand side is $\log p(w_n|z_n,\beta)$ which depends on both the values of $z_n$ and $w_n$, both not fixed. How do I continue?


$^{\dagger}$ Also assume that both $M$, the number of documents, and $k$, the dimensionality of the Dirichlet, are known and fixed. Furthermore, let $V$ denote the number of possible words (the "size of the vocabulary") and write every word as a $\{0,1\}^V$ vector with zeros everywhere except for the index of the word in the vocabulary list. Finally, let $\beta$ be a $k\times V$ matrix with $\beta_{ij}=p(w^j=1|z^i=1)$ and assume that both words and documents are exchangeable.

$^*$ Eq. 15, A.3

",45943,,2444,,7/24/2021 12:30,7/24/2021 12:30,"Why does $E_q[\log p(\mathbf{w}|\mathbf{z},\beta)]=\sum_{n=1}^{N}\sum_{i=1}^{k}\sum_{j=1}^{V}\phi_{ni}w_n^j\log \beta_{ij}$ hold in LDA?",,0,0,,,,CC BY-SA 4.0 27136,1,,,4/3/2021 11:46,,4,142,"

I am new to transfer learning and I start by reading A Survey on Transfer Learning, and it stated the following:

according to different situations of labeled and unlabeled data in the source domain, we can further categorize the inductive transfer learning setting into two cases:

case $(a)$ (It is irrelevant to my question).

case $(b): $ No labeled data in the source domain are available. In this case, the inductive transfer learning setting is similar to the self-taught learning setting, which is first proposed by Raina et al. [22]. In the self-taught learning setting, the label spaces between the source and target domains may be different, which implies the side information of the source domain cannot be used directly. Thus, it’s similar to the inductive transfer learning setting where the labeled data in the source domain are unavailable.

From that, I understand that self-taught learning is inductive transfer learning.

But I opened the paper on self-taught learning that was mentioned (i.e. the paper by Raina et al. [22]), and it states the following in the introduction:

Because self-taught learning places significantly fewer restrictions on the type of unlabeled data, in many practical applications (such as image, audio or text classification) it is much easier to apply than typical semi-supervised learning or transfer learning methods.

And here it looks like transfer learning is different from self-taught learning.

So what is the right relation between them?

",36578,,2444,,5/11/2021 0:09,5/11/2021 10:58,What is the relation between self-taught learning and transfer learning?,,1,2,,,,CC BY-SA 4.0 27137,2,,27118,4/3/2021 13:30,,2,,"

Check out the Imagination-Augmented Agents paper - it seems to do what you are talking about. The agent itself is the standard A3C that you are familiar with. The novelty is the "imagination" environment model, which is trained to predict the behavior of the environment.

",20538,,,,,4/3/2021 13:30,,,,0,,,,CC BY-SA 4.0 27138,2,,27118,4/3/2021 14:01,,4,,"

Yes, there are algorithms that try to predict the next state. Usually this will be a model-based algorithm -- this is where the agent tries to make use of a model of the environment to help it learn. I'm not sure of the best resource to learn about this, but my go-to recommendation is always the Sutton and Barto book.

This paper introduces PlanGAN; the idea of this model is to use a GAN to generate a trajectory. This will include not only predicting the next state but all future states in a trajectory.

This paper introduces a novelty function to incentivise the agent to visit unexplored states. The idea is that for unexplored states, a model that predicts the next state from the state-action tuple will have high error (measured by Euclidean distance from true next state) and they add this error to the original reward to make a modified reward.

This paper introduces Dreamer. This is where all learning is done in a latent space and so the transition dynamics of this latent space must be learned, another example of needing to learn the next state.

These are just some examples of papers that try to predict the next state, there are many more out there that I would recommend you look for.

",36821,,,,,4/3/2021 14:01,,,,0,,,,CC BY-SA 4.0 27140,2,,25458,4/3/2021 14:44,,8,,"

Here is how I understand this regularization.

$R_1$ is simply the norm of the gradients, which indicates how fast the weights will be updated. Gradient regularization penalizes large changes in the output of some neural network layer.

$$ R_{1}\left(\psi\right) = \frac{\gamma}{2}E_{p_{D}\left(x\right)}\left[||\nabla{D_{\psi}\left(x\right)}||^{2}\right]\text{,} $$

where $\psi$ is discriminator weights, $E_{p_{D}\left(x\right)}$ means that we sample data only form the real distribution (i.e. only real images) and $\gamma$ is a hyperparameter.

Since we don't know if $G$ can already generate data from real distribution, we apply this regularization to $D$ only on real data, because we don't want the discriminator to create a non-zero gradient without suffering a loss if we are already in a Nash Equilibrium. I guess this also prevents $G$ from updating if it generates data from the real distribution.
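For reference, this is roughly how the penalty is often computed in PyTorch (the discriminator and the batch of real images below are placeholders, and gamma is just the weight from the formula above):

import torch

def r1_penalty(discriminator, real_images, gamma=10.0):
    real_images = real_images.detach().requires_grad_(True)
    scores = discriminator(real_images)
    # Gradient of the discriminator output with respect to the real inputs.
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=real_images, create_graph=True)
    grad_norm2 = grads.pow(2).reshape(grads.shape[0], -1).sum(1)
    return (gamma / 2) * grad_norm2.mean()

# Dummy discriminator and real batch, just to show the call.
D = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 8 * 8, 1))
x_real = torch.randn(4, 3, 8, 8)
print(r1_penalty(D, x_real))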

The authors also investigate which value is best for $\gamma$ by analyzing the eigenvalues of the Jacobian of the associated gradient vector field, but in my opinion, this value is highly dependent on dataset and architecture.

"Gradient orthogonal to the data manifold" simply means zero gradients. From a GAN perspective, the data manifold is a lower-dimensional latent features manifold embedded in a higher-dimensional space and our goal is to approximate it. Since the gradient vector shows the direction in which we need to update our function, if it is orthogonal to this manifold, we do not need to update the function.

",12841,,12841,,4/3/2021 17:06,4/3/2021 17:06,,,,0,,,,CC BY-SA 4.0 27141,1,,,4/3/2021 16:47,,0,124,"

Assume you are given a sequence of states followed by the agent, generated by a random policy, $[s_0, s_1, s_2, \dots, s_n]$. Furthermore, assume the MDP is fully observable and time is discrete.

Is it possible to find the Q-value for a state-action pair $(s_j, a_j)$ which was not encountered along this sequence?

From my understanding of the MDP, yes, it would be possible. However, I'm unsure how to get this Q-value.

",45957,,2444,,4/4/2021 13:20,4/5/2021 13:35,"Given a sequence of states followed by the agent, is it possible to find the Q-value for a state-action pair not in this sequence?",,1,0,,,,CC BY-SA 4.0 27142,2,,27141,4/3/2021 23:46,,1,,"

My understanding from your question is that you have the following data generated from a random policy:

$$[s_0, s_1, s_2, \dots, s_n]$$

That is, the state observed at each time step.

You know nothing more about the MDP, such as the transition or reward functions. Although the MDP is discrete and fully observable (and thus usual RL theory is supported), you do not have any observations other than a single list of states.

You wish to obtain an estimate for $Q(s_j, a_j)$, given a specific state/action pair that has not been observed before. This is not possible. You do not have any data from your observations regarding rewards, nor the actions taken by the random agent. At minimum, you would need data about rewards and actions that were observed. Observed rewards are needed to calculate returns, and value functions are expected returns. Observed actions are needed to assign returns to the correct $(s,a)$ pair when calculating estimates.

In addition, if you want to make a meaningful estimate for an as-yet unseen state/action pair, then you would need to be using some form of function approximation for the action value function Q. Otherwise your estimate will be whatever default estimate you had assigned at the beginning of learning.

What inferences can you make given the data you have? Very roughly:

$$p(s_{k+1} | s_k, \pi ) \gt 0, \forall k \in [0,n-1]$$

i.e. the state sequence that was observed gives you proof of non-zero transition probabilities between those states under the given policy. This is not enough to learn a value function, but might have other uses.

With repeated observations - multiple lists of observed states - you could infer what those probabilities were, but would not be able to separate the effect of action choice from state transition rules (because the action was not observed). If you added reward observations then you could learn a state value function.

",1847,,2444,,4/5/2021 13:35,4/5/2021 13:35,,,,0,,,,CC BY-SA 4.0 27143,1,27146,,4/4/2021 0:31,,1,232,"

I have a classification problem, for which an inadequate amount of training data is available. Also, there is no known practical data augmentation approach for this problem (as no unlabelled data is available either), but I am working on it.

As we know, deep neural networks require a large amount of data for training, especially when a deep architecture with many layers is used. Using these complex architectures with less data can easily lead to over-fitting. Residual connections can shortcut some blocks or layers, which can result in simpler models, while we have the benefit of complex structures.

Can residual connections be beneficial when we have a small training dataset?

",43208,,2444,,4/5/2021 13:32,4/5/2021 13:32,Can residual connections be beneficial when we have a small training dataset?,,1,0,,,,CC BY-SA 4.0 27146,2,,27143,4/4/2021 10:03,,3,,"

Can residual connections be beneficial when we have a small training dataset?

The usual rule of data science investigations applies here: Try it, measure the results, then you will know.

It is very hard to tell, a priori, whether a specific architectural or hyperparameter choice will impact the performance of a neural network on a given problem.

In this case, you are wondering whether residual networks using skip connections might help when you have a relatively low amount of training data.

  • On the pro side, effects of skip connections that help correct vanishing gradients, and treat each block as learning the difference from an identity function, will still work for your problem. That means you will have some freedom to explore adding layers without worrying about the negative impacts of doing so.

  • On the con side, it is unlikely that you will benefit from very deep networks as there will not be enough examples to learn truly complex functions from.

You may find that adding depth, but reducing "width", i.e. the number of artificial neurons in each layer, will work.
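As a minimal sketch of the skip-connection idea from the pros above (PyTorch; sizes are placeholders):

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        # The block only has to learn the difference from the identity function.
        return torch.relu(self.body(x) + x)

block = ResidualBlock(16)
print(block(torch.randn(4, 16)).shape)  # torch.Size([4, 16])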

If you have a low amount of training data and a difficult problem to solve, then residual networks are not a magic fix. The best you could hope for is a relatively stable statistical model that works on the simpler differences between your training examples. However, it may be possible that tuning a neural network by searching through different architectures will be a worthwhile exercise.

I would also suggest that for one of the good networks that you vary the number of examples used to train the network in multiple training runs, and plot a learning graph showing number of examples versus accuracy (or other metric that you may be interested in). This graph will help you decide whether collecting more training data would be worthwhile because you will have a rough estimate of the gradient for how much new training examples could improve your results.

",1847,,2444,,4/5/2021 13:32,4/5/2021 13:32,,,,0,,,,CC BY-SA 4.0 27147,2,,27131,4/4/2021 12:42,,2,,"

You are heading in the right direction to make an audio-based classifier. This is not quite the same as providing a similarity metric between two pieces of audio, but it may do as a first attempt. You could use the "probability that this audio is a Queen song" as a proxy for similarity.

First of all, is 300 songs even enough?

Nowhere near enough to train a classifier from scratch. In that case you might be aiming for 10,000 samples or even up to a million, depending on how sophisticated you want your classifier to be.

However, you don't necessarily need to find that many training examples. Instead, if you can find a pre-trained audio classifier for music that is compatible with the libraries you are working with, you can use transfer learning and may get decent results. This works by replacing the last few layers of a trained neural network with your own, and then training the network on your data while only modifying those new layers.
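As a rough sketch of that idea in Keras (MobileNetV2 here is only a stand-in for whatever pre-trained spectrogram/audio model you actually find, and all shapes are placeholders):

import tensorflow as tf

base_model = tf.keras.applications.MobileNetV2(
    input_shape=(128, 128, 3), include_top=False, weights="imagenet")
base_model.trainable = False          # freeze the pre-trained layers

model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid")   # "Queen" vs "not Queen"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])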

From your comments:

similar in voice, like Gnarles Barkley is more similar to Alicia Keys than System of a Down

The tone of the vocals is one part of many things that can vary between music recordings. A classifier with only a few examples to work from will not be able to isolate just the elements you are interested in. Also, without examples that are explicitly labelled in a way that "more similar" is meaningful in the way that you want, you will not have control over whether the neural network identifies Brian May's guitar, or 1970s studio equipment as the most easy to identify element compared to Freddie Mercury's vocals.

These things may limit the usefulness or accuracy of your first attempt, but I would not suggest you consider them immediately. Your basic idea should produce something that has an interesting behaviour when given different inputs. Just be realistic that you will not get state-of-the-art results on your first attempts at the project.

",1847,,,,,4/4/2021 12:42,,,,0,,,,CC BY-SA 4.0 27150,1,,,4/4/2021 15:26,,1,161,"

I am studying for RL on my own and was trying to solve this question I came across.

  1. Write an operator function $T(w, \pi, \mu, l, g)$ that takes weights $w$, a target policy $\pi$, a behaviour policy $\mu$, a trace parameter $l$, and a discount $g$, and outputs an off-policy-corrected lambda-return. For this question, implement the standard importance-weighted per-decision lambda-return. There will only be two actions, with the same policy in each state, so we can define $\pi$ to be a number which is the target probability of selecting action a in any state (s.t. $1 - \pi$ is the probability of selecting $b$), and similarly for the behaviour $\mu$.

  2. Write an expected weight update, that uses the operator function $T$ and a value function $v$ to compute the expected weight update. The expectation should take into account the probabilities of actions in the future, as well as the steady-state (=long-term) probability of being in a state. The step size of the update should be $\alpha=0.1$.

Here is what my solution looks like (I am a total beginner in RL, and in addition to studying Rich's book, I was trying to solve the basic intro course assignments as well to help understand the topic in detail).

import numpy as np

x1 = np.array([1., 1.])
x2 = np.array([2., 1.])

def v(w, x):
    return x.T*w

def T(w, pi, mu, l, g):
    states = [0, 1]
    n_states = len(states)
    #initial_dist = np.array([[1.0, 0.0]])
    transition_matrix = np.array([[pi, 1-pi],
                                  [pi, 1-pi]])
    
    if pi <= mu: # thresholding to select the state
        val = v(w, x1)
    else:
        val = v(w, x2)
        pi = 1 - pi

    l_power = np.power(l, n_states - 1)
    lambda_corrected = l_power * val
    lambda_corrected *= 1 - l

    return lambda_corrected - val

def expected_update(w, pi, mu, l, g, lr):
    delta = T(w, pi, mu, l, g)

    w += lr * delta
    return w

The state diagram looks like this: there are two states, $s_0$ and $s_1$. All rewards are $0$, and the state features $x_0 = x(s_0)$ and $x_1 = x(s_1)$ for the two states are given as x1 and x2 in the code ([1., 1.], [2., 1.]). There are only two actions in each state, $a$ and $b$. Action $a$ always transitions to state $s_0$ (i.e. from $s_1$ or from $s_0$ itself) and action $b$ always transitions to state $s_1$ (i.e. from $s_0$ or $s_1$ itself):

This is how the caller portion of the code looks like.

def caller(w, pi, mu, l, g):
  ws = [w]
  for _ in range(100):
    w = w + expected_update(w, pi, mu, l, g, lr=0.1)
    ws.append(w)
  return np.array(ws)

mu = 0.2 # behaviour
g = 0.99  # discount

lambdas = np.array([0, 0.8, 0.9, 0.95, 1.])
pis = np.array([0., 0.1, 0.2, 0.5, 1.])

I would appreciate any help.


Edit:

I tried implementing the T() following the Bellman backup operator, but I am still not sure if I did this right or not.

return pi * g*v(w, x1) + (1-pi) * g*v(w, x2)
",37865,,2444,,12/21/2021 15:22,12/21/2021 15:22,Off-policy Bellman Operators: Writing Operator and Weight Update Function for a 2-State System,,0,0,,,,CC BY-SA 4.0 27151,1,,,4/4/2021 17:08,,0,140,"

Multi-Layer Perceptron (MLP), Deep AutoEncoder (DAE), and Deep Belief Network (DBN) are trained differently.

However, do they follow the same process during the inference phase, i.e., do they calculate a weighted sum, then apply a non-linear activation function, for each layer until the last layer, or is there any difference? Moreover, are they only composed of fully connected layers?

",43113,,2444,,4/9/2021 3:16,4/9/2021 3:16,"What is the difference between the forward pass of the Multi-Layer Perceptron, Deep AutoEncoder and Deep Belief Network?",,0,3,,,,CC BY-SA 4.0 27152,1,27158,,4/4/2021 21:48,,3,177,"

I have been trying to understand why MCTS is very important to the performance of RL agents, and the best description I found was from the paper Bootstrapping from Game Tree Search stating:

Deterministic, two-player games such as chess provide an ideal test-bed for search bootstrapping. The intricate tactics require a significant level of search to provide an accurate position evaluation; learning without search has produced little success in these domains.

I however don't understand why this is the case, and why value based methods are unable to achieve similar performance.

So my question would be:

  • What are the main advantages of incorporating search based algorithms with value based methods?
",44320,,,,,4/5/2021 11:19,What is the advantage of using MCTS with value based methods over value based methods only?,,1,4,,,,CC BY-SA 4.0 27154,1,27234,,4/5/2021 1:28,,5,227,"

I'm trying to implement MCTS with UCT for a board game and I'm kinda stuck. The state space is quite large (3e15), and I'd like to compute a good move in less than 2 seconds. I already have MCTS implemented in Java from here, and I noticed that it takes a long time to actually reach a terminal node in the simulation phase.

So, would it be possible to simulate games up until a specific depth?

Instead of returning the winner of the game after running until the max depth, I could return an evaluation of the board (the board game is simple enough to write an evaluation function), which then back propagates.

The issue I'm having is in handling the backpropagation. I'm not quite sure what to do here. Any help/resources/guidance is appreciated!

",45983,,2444,,4/9/2021 3:12,4/9/2021 8:21,"In MCTS, what to do if I do not want to simulate till the end of the game?",,1,0,,,,CC BY-SA 4.0 27157,2,,27123,4/5/2021 9:33,,1,,"

They are not minimizing the gradient itself; the gradient is of the form \begin{equation} \nabla_{\theta} J \approx \sum_{t=0}^T G_t \nabla_{\theta} \log(\pi_{\theta}(a_t|s_t)) \end{equation} This means that, when implementing it in software, you can form your objective as \begin{equation} J = \sum_{t=0}^T G_t \log(\pi_{\theta}(a_t|s_t)) \end{equation} and then the gradient of that objective is equal to the policy gradient. In the code, the negative of this objective is minimized, which is the same as maximizing $J$, i.e. performing gradient ascent on the expected return.
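As a tiny sketch of this point in PyTorch (the numbers are placeholders): differentiating the negated surrogate reproduces the policy-gradient terms.

import torch

log_probs = torch.tensor([-0.7, -1.2, -0.3], requires_grad=True)  # log pi(a_t|s_t)
returns = torch.tensor([5.0, 3.0, 1.0])                            # G_t

surrogate = (returns * log_probs).sum()   # the objective J above
(-surrogate).backward()                   # minimizing -J is the same as ascending J
print(log_probs.grad)                     # tensor([-5., -3., -1.]), i.e. -G_t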

",20339,,,,,4/5/2021 9:33,,,,3,,,,CC BY-SA 4.0 27158,2,,27152,4/5/2021 10:22,,2,,"

Assuming a continuous/uncountable state space, we can only estimate our value function using function approximation, so our estimates will never be true for all states simultaneously (because, loosely speaking, we have far more states than weights). If we can look at the (approximate) values of the states we reach in, say, 5 actions' time, it is better to make a decision based on these estimates, taking into account the true rewards observed along those 5 actions.

Further, MCTS also allows more implicit exploration: when choosing the actions to expand the tree, we are potentially choosing lots of non-greedy actions that lead to better future returns.

",36821,,36821,,4/5/2021 11:19,4/5/2021 11:19,,,,4,,,,CC BY-SA 4.0 27161,1,27167,,4/5/2021 14:44,,2,61,"

I built a model using the tutorial on the TensorFlow site. It was a simple image classification neural network. I trained it and saved the model and weights together on a .h5 file.

Recently, I have been reading about backpropagation. From what I understand, it's basically a way to tell the neural network whether it has identified the correct output, and it is applied only during training.

So, I was wondering if there is a way for the model to 'improve' over time as it makes more and more predictions. Or is that not how it would work with Neural Networks?

",44731,,2444,,4/9/2021 3:21,4/9/2021 3:21,How to improve a trained model over time (i.e. with more predictions)?,,1,0,,,,CC BY-SA 4.0 27163,1,,,4/5/2021 15:29,,1,118,"

The testing problem in traditional software has been fully explored over the last decades, but it seems that testing in artificial intelligence/machine learning has not (see this question and this one).

What are the differences between the two?

",5351,,2444,,4/5/2021 15:50,4/5/2021 15:50,What are the differences in testing between traditional software and artificial intelligence?,,1,0,,,,CC BY-SA 4.0 27164,2,,27163,4/5/2021 15:48,,1,,"

Testing machine learning programs is quite different than testing traditional software.

The main reason why this is the case is quite simple, if you're familiar with machine learning.

ML programs are not just if statements and loops, but they are composed of models, which can even be black-box models, such as neural networks (i.e. it's difficult to interpret the function that they compute). These models are trained to approximate some unknown function, which is only partially described by some given data, which can also contain spurious/noisy information.

For this reason, traditional testing techniques, such as statement coverage, are insufficient to fully "test" ML programs: e.g. even if all statements of your ML program are covered, your ML program can still fail, e.g. it may predict the wrong class for some previously unseen input. So, testing ML programs requires not only traditional testing techniques, but also other approaches that attempt to address the (un-)desirable behaviour (such as the generalization) of the models.

Currently, I am also doing research on this specific topic, so I can say that there are already several people working on this topic, but the field is still quite immature and there are still many unsolved problems. It's also important to note that the general "testing problem", even for traditional software, is not yet solved.

",2444,,,,,4/5/2021 15:48,,,,0,,,,CC BY-SA 4.0 27165,1,27188,,4/5/2021 17:47,,0,40,"

I'm currently studying the Junction Tree Algorithm: I'm referring to the process of transforming a Bayesian Network into a Junction Tree in order to apply inference. I understand how you build the Junction Tree, but I'm stuck on the idea of message passing.

What exactly are these messages? Are they numbers, or vectors?

If any of you could direct me to a numerical example that would be very appreciated.

",30287,,2444,,4/7/2021 15:18,4/7/2021 15:18,What Constitutes Messages in Junction Tree Algorithm?,,1,0,,,,CC BY-SA 4.0 27166,1,,,4/5/2021 23:24,,0,55,"

I've been trying to understand the Distilling the Knowledge in a Neural Network paper by Hinton et al. But I cannot fully understand this:

When the soft targets have high entropy, they provide much more information per training case than hard targets and much less variance in the gradient between training cases [...]

The information part is very clear, but how does high entropy correlate to less variance between training cases?

",46002,,2444,,4/9/2021 3:22,12/30/2022 10:07,How does high entropy targets relate to less variance of the gradient between training cases?,,1,0,,,,CC BY-SA 4.0 27167,2,,27161,4/6/2021 2:14,,1,,"

That is exactly how a neural network works.

Suppose you have 1000 examples. Here is how you train a network: first, you divide these 1000 examples into, say, 100 batches (10 each). Then you feed a batch to the network, get its output, compare it with the ground truth, and backpropagate the error. You repeat this for the next batch, and then the next. Once all the batches are done, you say an epoch is over. So the number of epochs is effectively the number of times the network has seen the whole dataset.

This is how a neural network gets better.
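A minimal sketch of that batch/epoch loop in PyTorch (the model, data and sizes are placeholders):

import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

X, y = torch.randn(1000, 4), torch.randn(1000, 1)                       # 1000 examples
loader = DataLoader(TensorDataset(X, y), batch_size=10, shuffle=True)   # 100 batches of 10

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(5):               # each epoch is one full pass over the data
    for xb, yb in loader:            # one batch at a time
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()              # backpropagate the error for this batch
        optimizer.step()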

",37203,,,,,4/6/2021 2:14,,,,5,,,,CC BY-SA 4.0 27168,2,,27166,4/6/2021 2:18,,0,,"

Since it is a trained network already, when you run an example through it, the gradient will not have a very high variance.

The gradient varies a lot when you are training a network from scratch, but it stops varying much once the network has learned the pattern.

",37203,,2444,,4/9/2021 3:23,4/9/2021 3:23,,,,0,,,,CC BY-SA 4.0 27170,1,,,4/6/2021 11:52,,1,26,"

I am working on a problem and want to explore if it can be solved with PPO (or other policy gradient methods). The problem is that the action space is a bit special, compared to classic RL environments.

At each time $t$, we choose between 4 actions: $a_1\in \{0, 1, 2, 3\}$, but given $a_1 = 0 \text{ or } 2$, we need to choose three more actions: $a_2, a_3, a_4$ (which can all three be chosen from categorical distributions).

I know I can design this kind of policy myself and re-work the entropy terms, and so on for PPO.

My question is: is there any research into this kind of RL?

I am having a hard time finding someone working with problems in which the actions chosen are dependent on other chosen at the same time. I have looked into Hierarchical RL, but the papers I have found have not worked with this particular kind of problem.

If these action spaces were small ($a_2, a_3$ are chosen from categorical distributions with $\sim$800 different options), one solution would be to roll it out into one big policy where each possible combination of actions is represented by one choice in the policy. But my concern with doing this for a bigger action space is that the choices $a_1 = 1, 3$, where we don't choose the other separate actions, will get lost in the policy.

",44980,,2444,,4/9/2021 3:37,4/9/2021 3:37,Is there any research on the application of policy gradients to problems where the selection of an action requires the selection of another one?,,0,2,,,,CC BY-SA 4.0 27171,1,27175,,4/6/2021 14:03,,1,49,"

There are many articles comparing RNNs/LSTMs and the attention mechanism. One of the disadvantages of RNNs that is often mentioned is that, while attention can be computed in parallel, RNNs are highly sequential. That is, the computation of the next tokens depends on the result of previous tokens; thus, RNNs lose to attention in terms of speed.

Even though I fully agree that RNNs are sequential as stated above, I think they are still parallelizable by splitting the mini-batch into sub-batches, with each sub-batch processed independently by a dedicated thread. For example, a training batch of size 32 can be split into 4 sub-batches of size 8, and 4 threads process the 4 sub-batches independently. That way, RNNs/LSTMs are parallelizable, and this is not a disadvantage compared to Attention.

Is my thought correct?

",46019,,,,,4/6/2021 17:27,Do RNNs/LSTMs really need to be sequential?,,1,0,,,,CC BY-SA 4.0 27172,1,,,4/6/2021 14:22,,3,364,"

In Reinforcement Learning, it is common to use a discount factor $\gamma$ to give less importance to future rewards when calculating the returns.

I have also seen mention of discounted state distributions. It is mentioned on page 199 of the Sutton and Barto textbook that if there is discounting then (for the state distribution) it should be treated as a form of termination, and it is implied that this can be achieved by adding a factor of $\gamma$ to the state transition dynamics of the MDP, so that now we have

$$\mu(s) = \frac{\eta(s)}{\sum_{s'} \eta(s')}\;;$$ where $\eta(s) = h(s) + \sum_{\bar{s}} \eta(\bar{s})\sum_a \pi(a|\bar{s}) \gamma p(s|\bar{s}, a)$ and $h(s)$ is the probability of the episode beginning in state $s$.

In my opinion, the book kind of skips over this and it is not immediately clear to me why we need to discount our state distribution if we have discounting in the episode.

My intuition would suggest that it is because we usually take an expectation of the returns over the state distribution (and action/transition dynamics), but, if we are discounting the (future) rewards, then we should also discount the future states to give them less importance. In Sergey Levine's lectures he provides a brief aside that I think agrees with my intuition but in a rather unsatisfactory way -- he introduces the idea of a 'death state' that we transition into at each step with probability $1-\gamma$ but he does not really provide a rigorous enough justification for thinking of it this way (unless it is just a useful mental model and not supposed to be rigorous).

I am wondering whether someone can provide a more detailed explanation as to why we discount the state distribution.

",36821,,2444,,12/19/2021 20:20,12/19/2021 20:20,Why do we discount the state distribution?,,0,0,,,,CC BY-SA 4.0 27173,1,27174,,4/6/2021 15:12,,3,89,"

It's my understanding that selecting for small models, i.e. having a multi-objective function where you're optimizing for both model accuracy and simplicity, automatically takes care of the danger of overfitting the data.

Do I have this right?

It would be very convenient for my use case to be able to skip lengthy cross-validation procedures.

",46021,,2444,,4/9/2021 3:40,4/9/2021 3:40,Does adding a model complexity penalty to the loss function allow you to skip cross-validation?,,1,0,,,,CC BY-SA 4.0 27174,2,,27173,4/6/2021 16:20,,1,,"

It's my understanding that selecting for small models, i.e. having a multi-objective function where you're optimizing for both model accuracy and simplicity, automatically takes care of the danger of overfitting the data.

Sort of. A secondary objective function often works as a form of regularisation, and can work to reduce overfit.

However, this regularisation is not a magic bullet. The degree of regularisation that you achieve will vary, it may depend on hyper-parameters in the regularisation technique. In your case the relative weightings of objective functions for the accuracy and simplicity of the model will have a sweet spot that minimises overfit without compromising too much and under-fitting instead.
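As a rough sketch (PyTorch-style, with names of my own choosing), such a weighted combination of objectives could look like the following, where lam is exactly the kind of relative-weighting hyperparameter whose sweet spot you still have to find:

import torch

def combined_loss(y_pred, y_true, model, lam=1e-3):
    # primary objective: how well the model fits the data (here: mean squared error)
    fit_term = torch.mean((y_pred - y_true) ** 2)
    # secondary objective: simplicity, here a simple L2 penalty on the weights
    complexity_term = sum(p.pow(2).sum() for p in model.parameters())
    # lam trades off accuracy against simplicity; too small and you overfit,
    # too large and you underfit
    return fit_term + lam * complexity_term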

It would be very convenient for my use case to be able to skip lengthy cross-validation procedures.

Typically using regularisation requires cross-validation in order to find good values for your regularisation hyperparameters. If the regularisation technique you have chosen is a good fit for your problem, then you may not have to search too much - there may be a broad set of values that work well for you.

In turn, a good choice of regularisation may mean that your model's accuracy is less sensitive to other hyperparameters of your model, so searching for an accurate model becomes a little easier.

However:

Does adding a model complexity penalty to the loss function allow you to skip cross-validation?

No. Assuming you want to find the best performing model, you still have to perform cross-validation. At best you may have to perform a little less than without the added objective because it has stabilised your model against other factors that can affect generalisation. However, you might have to perform more cross-validation, at least initially, in order to establish useful values of the new relative weighting hyperparameters you added with the secondary objectives. In addition the new simplicity objective function will likely change the best choices for other hyperparameters, such as number of free parameters, learning rate and length of training time.

If you were previously performing cross-validation after every epoch of training and picking the best model after the epoch that gave the best accuracy on the cv set, then that is often considered a different form of regularisation called early stopping. You may find you could relax this and test less often during training, because with regularisation based on complexity objectives, the training will tend towards a more stable end point and be less likely to overfit through additional training epochs. Although in my experience cross-validation during training is usually left on by default, in order to plot learning curves and ensure this stability really holds.

",1847,,1847,,4/6/2021 16:41,4/6/2021 16:41,,,,0,,,,CC BY-SA 4.0 27175,2,,27171,4/6/2021 17:27,,2,,"

You are describing data parallelism (splitting the mini-batch across workers). But that's not the reason RNNs/LSTMs have fallen out of vogue.

Imagine being able to read the first line of a page, keep reading, and still make connections back to that first line all the way to the end of the page.

Can RNNs/LSTMs do that? No. Can Attention (i.e. Transformers) do it? Yes.

The reason is simple: Attention effectively computes an affinity matrix between each and every input that goes into the network, so it can make those long-range connections. This comes with a large memory overhead, but we accept it for the performance.
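A rough sketch of what that affinity matrix is (scaled dot-product attention scores; the sizes are purely illustrative):

import numpy as np

n, d = 5, 8                       # 5 tokens in the sequence, embedding size 8
Q = np.random.randn(n, d)         # queries
K = np.random.randn(n, d)         # keys

scores = Q @ K.T / np.sqrt(d)     # n x n affinity between every pair of inputs
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)   # row-wise softmax
# weights[i, j] is how much token i attends to token j; storing this n x n
# matrix is the memory overhead mentioned above.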

In the case of RNNs/LSTMs, the cells have to do this heavy lifting, and there is only a limited amount of information they can hold. That's why you have the forget gate, to control how much information is retained.

Nevertheless, your thought is correct, but it is not the reason Attention is in vogue, and it has practical drawbacks when you look at how it would be implemented. Also, the computation would still be sequential, since you can't process input $n + 1$ without input $n$: local parallelization is possible, but not global.

",37203,,,,,4/6/2021 17:27,,,,6,,,,CC BY-SA 4.0 27177,1,,,4/6/2021 17:38,,-3,403,"
    def forward(self, image, proj, proj_inv):
        return self.predict_2d_joint_locations(image, proj, proj_inv)

    def criterion(self, predicted, gt):
        return self.mse(predicted, gt)

    def training_step(self, batch, batch_idx):
        player_images, j2d, j3d, proj, proj_inv, is_synth = batch
        predicted_2d_joint_locations = self.predict_2d_joint_locations(player_images, proj, proj_inv)
        train_loss = self.criterion(predicted_2d_joint_locations, j2d)
        self.log('train_loss', train_loss)
        return train_loss

    def validation_step(self, batch, batch_idx):
        player_images, j2d, j3d, proj, proj_inv, is_synth = batch
        predicted_2d_joint_locations = self.predict_2d_joint_locations(player_images, proj, proj_inv)
        val_loss = self.criterion(predicted_2d_joint_locations, j2d)
        self.log('val_loss', val_loss)
        return val_loss

I have this simple code for training_step() and forward() in PyTorch. Both functions essentially do the same thing.

Owing to a relatively small dataset, my model grossly overfits the training data (as is evident from the orders-of-magnitude difference between the training and validation losses). But that's fine for now; I am perfectly aware of it and will add more data soon.

What surprises me is when I try to evaluate (infer). I don't have a separate test set (for now) and only have a training and a validation set. When I evaluate on the validation set, the mean squared error turns out to be in the same range as the validation loss my model is based on as expected. However, when I evaluate on the training set, the mean squared error I get is again in the same range as the validation loss (not the training loss).

    if args.val:
        check_dl = dataset.val_dataloader()
    else:
        check_dl = dataset.train_dataloader()

    for player_images, j2d, j3d, proj, proj_inv, is_synth in check_dl:
        if args.visualize:
            # visualize dataset
            player_images = player_images.cpu().numpy()
            j2d_predicted = model(torch.from_numpy(player_images), proj, proj_inv).cpu().detach().numpy()
            print(((j2d - j2d_predicted) ** 2).mean(), model.training_step((torch.from_numpy(player_images), j2d, j3d, proj, proj_inv, is_synth), 0))

When I print ((j2d - j2d_predicted) ** 2).mean() for images in the training set after fetching the model from the trained checkpoint, I get numbers in the range of the validation loss. I retried the same thing by printing the loss using the training_step() function, but I again get high losses (in the validation loss range).

Note: The inference mean squared errors I get on the training set are high, but they are not as high as when the training actually started. So the pre-trained model is fetched properly; with completely random weights, I would have seen errors that are orders of magnitude higher.

I have been scratching my head over this. Any help would be really appreciated.

",46030,,,,,4/8/2021 19:40,Pytorch - Evaluation loss on the training set higher than loss during training,,1,0,,,,CC BY-SA 4.0 27179,1,27182,,4/6/2021 22:10,,3,204,"

Training on a quadratic function

import numpy as np

x = np.linspace(-10, 10, num=1000)
np.random.shuffle(x)
y = x**2

Will predict an expected quadratic curve between -10 < x < 10.

Unfortunately my model's predictions become linear outside of the trained dataset.

See -100 < x < 100 below:

Here is how I define my model:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
      layers.Dense(64, activation='relu'),
      layers.Dense(64, activation='relu'),
      layers.Dense(1)
  ])

model.compile(loss='mean_absolute_error', optimizer=tf.keras.optimizers.Adam(0.1))

history = model.fit(
    x, y,
    validation_split=0.2,
    verbose=0, epochs=100)

Here's a link to a google colab for more context.

",25340,,44413,,4/7/2021 15:17,4/9/2021 5:43,Can predictions of a neural network using ReLU activation be non-linear (i.e. follow the pattern) outside of the scope of trained data?,,2,0,,,,CC BY-SA 4.0 27180,1,27200,,4/6/2021 23:40,,1,505,"

I'm going through the David Silver RL course on YouTube. He talks about environment internal state $S^e_t$, and agent internal state $S^a_t$.

We know that state $s$ is Markov if

$$\mathbb{P}\{S_t=s|S_{t-1}=s_{t-1},...,S_1=s_1\}=\mathbb{P}\{S_t=s|S_{t-1}=s_{t-1}\}.$$

When we say that Decision Process is Markov Decision Process, does that mean:

  1. All environment states must be Markov states
  2. All agent states must be Markov states
  3. Both (All environment states and all agent states must be Markov states)

and according to this, if we specify corresponding MDP as $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma, T)$, is $\mathcal{S}$ the state space of environment states or agent states?

Why am I confused by this? He claims that environment states are Markov (I'm also confused about why, but I'll make another post for that), and then claims that if the agent can directly see the environment internal state $S^e_t$, then observations $O_t=S^e_t$, and the agent constructs its state trivially as $S^a_t=O_t=S^e_t$. Now, both environment and agent states are Markov (since they are the same), so this makes sense. If we specify the MDP as $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma, T)$, it's clear that the state space $\mathcal{S}$ is the state space of both agent internal states and environment internal states (again, since they are the same).

Now consider the case when the environment is not fully observable. Now $O_t\ne S^e_t$, and the agent must construct its state $S^a_{t}=f(S^a_{t-1}, H_t)$, where $H_t=(O_0, A_0, R_1, O_1,...,O_{t-1}, A_{t-1}, R_t, O_t)$ is the history until time step $t$, and $f$ is some function (such as a recurrent neural network, for example). In the case of $f$ being a recurrent neural network, we have that both environment internal states are Markov (by hypothesis) and agent internal states are Markov (approximately), so again the process is an MDP $(\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma, T)$, but the state space of agent states is different from that of environment states, so I'm confused about what $\mathcal{S}$ is here. Is it the environment state space or the agent state space?

Lastly, what if $S^a_t = f(S^a_{t-1}, H_t)=O_t$? That is, the agent's internal state is simply the last observation. Considering that environment states are always Markovian (again, I don't know why we can claim this), but agent states are not, this is the case of a POMDP. Even here I don't know what $\mathcal{S}$ stands for in the specification of the POMDP. Is it the environment state space or the agent state space?

",36335,,2444,,9/24/2021 12:41,9/24/2021 12:41,What is the difference between environment states and agent states in terms of Markov property?,,1,1,,,,CC BY-SA 4.0 27181,1,27184,,4/7/2021 7:43,,4,60,"

I want to build model-based RL. I am wondering about the process of building the model.

If I already have data, from real experience:

  • $S_1, a \rightarrow R,S_2$
  • $S_2, a \rightarrow R,S_3$

Can I use this information, to build model-based RL? Or it is necessary that the agent directly interact with the environment (I mean the same above-mentioned data should be provided by the agent)?

",46045,,1847,,4/7/2021 8:19,4/7/2021 9:37,How does a model based agent learn the model?,,1,0,,,,CC BY-SA 4.0 27182,2,,27179,4/7/2021 8:52,,3,,"

It isn't too surprising to see behaviour like this, since you're using $\mathrm{ReLU}$ activation.

Here is a simple result which explains the phenomenon for a single-layer neural network. I don't have much time so I haven't checked whether this would extend reasonably to multiple layers; I believe it probably will.

Proposition. In a single-layer neural network with $n$ hidden neurons using $\mathrm{ReLU}$ activation, with one input and output node, the output is linear outside of the region $[A, B]$ for some $A < B \in \mathbb{R}$. In other words, if $x > B$, $f(x) = \alpha x + \beta$ for some constants $\alpha$ and $\beta$, and if $x < A$, $f(x) = \gamma x + \delta$ for some constants $\gamma$ and $\delta$.

Proof. I can write the neural network as a function $f \colon \mathbb R \to \mathbb R$, defined by $$f(x) = \sum_{i = 1}^n \left[\sigma_i\max(0, w_i x + b_i)\right] + c.$$ Note that each neuron switches from being $0$ to a linear function, or vice versa, when $w_i x + b_i = 0$. Define $r_i = -\frac{b_i}{w_i}$. Then, I can set $B = \max_i r_i$ and $A = \min_i r_i$. If $x > B$, each neuron will either be $0$ or linear, so $f$ is just a sum of linear functions, i.e. linear with constant gradient. The same applies if $x < A$.

Hence, $f$ is a linear function with constant gradient if $x < A$ or $x > B$. $\square$

If the result isn't clear, here's an illustration of the idea:

This is a $3$-neuron network, and I've marked the points I denote $r_i$ by the black arrows. Before the first arrow and after the last arrow, the function is just a line with constant gradient: that's what you're seeing, and what the proposition justifies.
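If you want a quick numerical check of the proposition, here is a sketch of the single-layer network above with random weights; the finite-difference slopes beyond $B$ all come out the same:

import numpy as np

rng = np.random.default_rng(0)
n = 3                                              # hidden neurons, as in the picture
w, b = rng.normal(size=n), rng.normal(size=n)      # input weights and biases
sigma, c = rng.normal(size=n), rng.normal()        # output weights and bias

def f(x):
    return (sigma * np.maximum(0.0, w * x + b)).sum() + c

r = -b / w                                         # the breakpoints r_i from the proof
A, B = r.min(), r.max()

for x in (B + 1, B + 10, B + 100):                 # well to the right of every breakpoint
    print((f(x + 1e-3) - f(x)) / 1e-3)             # the slope is the same each time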

",44413,,,,,4/7/2021 8:52,,,,2,,,,CC BY-SA 4.0 27183,2,,27179,4/7/2021 9:35,,1,,"

Short answer: Yes.

Consider a non-linear regression on that dataset. Using a model of degree two, it would fit a quadratic exactly to your perfect data here. But I suppose you're asking about neural networks. You can have neural networks set up that are exactly equivalent to this kind of regression, so even with neural networks, yes you can get this non-linear extrapolation. Of course as you probably realise, you would have to know in advance what kind of behaviour you expect in this extrapolation before really trusting any extrapolated predictions.

",34473,,11539,,4/9/2021 5:43,4/9/2021 5:43,,,,0,,,,CC BY-SA 4.0 27184,2,,27181,4/7/2021 9:37,,1,,"

If you already have some transition tuples, then you can train a model to predict the environment dynamics using these. However, you should be careful that your pre-gathered data is diverse enough to 'cover' enough of the state/action space so that your model remains accurate. For instance, when you start training your agent it will likely see more of the state space than it did at the start of training (imagine playing Atari: initially your agent will die quickly, but as it gets better, episodes get longer), so you would need to make sure you have data for these states that appear late in episodes. Otherwise, your model will just be overfitting to the start of the episode and will give poor predictions on these other states, thus slowing down or even preventing the learning of an optimal policy.

",36821,,,,,4/7/2021 9:37,,,,7,,,,CC BY-SA 4.0 27186,1,,,4/7/2021 10:55,,1,548,"

How to interpret the following learning curves?

Background: The accuracy starts at 50%, because the network has a binary output (0 or 1). I chose an exponentially decreasing learning rate for the optimizer - I believe that this is the reason why the network starts learning after 10 epochs or so.

lr_schedule = keras.optimizers.schedules.ExponentialDecay(**h_p["optimizer"]["adam"])
optimizer = keras.optimizers.Adam(learning_rate=lr_schedule)
h_p["Compile"]["optimizer"] = optimizer

",46057,,,,,4/7/2021 12:11,How to interpret this learning curve of my neural network?,,1,1,,,,CC BY-SA 4.0 27187,2,,27186,4/7/2021 12:11,,1,,"

It looks like your network is overfitting, because the training loss carries on decreasing to zero even though validation loss levels off, and then starts to increase again.

I would guess that your network is essentially "memorising" the training examples because you're getting a near zero loss in training.

You could try:

  • applying some form of regularisation
  • reducing the number of neurons or layers
  • collecting more training data to see if that helps.

Ideally you want to minimise validation loss, so epoch 20 was about the best point in your graph. Look into early stopping: this is an effective way to avoid overfitting too much.
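A minimal sketch of early stopping with Keras's built-in callback (the model and data names here are placeholders):

from tensorflow import keras

# Stop once validation loss has not improved for 5 epochs and restore the
# weights from the best epoch (roughly epoch 20 in your graph).
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[early_stop])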

When the learning rate is too high, you often see the loss 'jumping around' quite rapidly, and when the learning rate is too low, the graph barely changes at all. You could try fixing the learning rate closer to what it is around epoch 10 and see if this is "the sweet spot"; it's mostly a game of experimenting to see which works best.

",44413,,,,,4/7/2021 12:11,,,,0,,,,CC BY-SA 4.0 27188,2,,27165,4/7/2021 12:19,,0,,"

I think an example might make this clearer. Suppose you want to calculate $P(X|S)$. You first need to put evidence on $S$ (so you have to change all tables where $S$ appears, setting the probability to zero for entries inconsistent with the evidence on $S$). At this point, you can proceed with the collect and distribute method to propagate the evidence throughout the graph, in order to maintain the global consistency property. Now you can choose any cluster $V$ that contains $X$, marginalize with respect to $S$, and the job is done.

",46060,,36737,,4/7/2021 14:23,4/7/2021 14:23,,,,1,,,,CC BY-SA 4.0 27190,1,27295,,4/7/2021 14:00,,1,95,"

I have often encountered the term 'clock rate' when reading literature on recurrent neural networks (RNNs). For example, see this paper. However, I cannot find any explanations for what this means. What does 'clock rate' mean in this context?

",16521,,,,,4/13/2021 6:09,What does 'clock rate' mean in the context of recurrent neural networks (RNNs)?,,2,0,,,,CC BY-SA 4.0 27192,1,,,3/16/2021 16:20,,3,6572,"

In supervised learning, bias, variance are pretty easy to calculate with labeled data. I was wondering if there's something equivalent in unsupervised learning, or like a way to estimate such things?

If not, how do we calculate loss functions in unsupervised learning?

",,user98235,,,,4/7/2021 15:13,Is there a bias-variance equivalent in unsupervised learning?,,1,0,,,,CC BY-SA 4.0 27193,2,,27192,3/25/2021 3:06,,2,,"

Yes, the concept applies but it is not really formalized. Consider unsupervised learning as a form of density estimation or a type of statistical estimate of the density.

Variance: You will train on a finite sample of data selected from this probability distribution and get a model, but if you select a different random sample from this distribution you will get a slightly different unsupervised model. This variation caused by the selection process of a particular data sample is the variance.

Bias: This is a little fuzzier, and depends on the error metric used in supervised learning. An unsupervised learning algorithm has parameters that control the flexibility of the model to 'fit' the data; in k-means clustering, for example, you control the number of clusters. A simple example is k-means clustering with k=1. You could imagine a distribution where there are two 'clumps' of data far apart. The mean would land in the middle, where there is no data. This model is biased towards assuming a certain distribution. For a higher value of k, you can imagine other distributions with k+1 clumps that cause the cluster centers to fall in low-density areas. For a low number of parameters, you would also expect to get the same model even for very different density distributions. This is also a form of bias. Such an unsupervised model is biased to better 'fit' certain distributions, and also cannot distinguish between certain distributions.
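A tiny sketch of the k=1 example with scikit-learn (the two 'clumps' are made up):

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
clump_a = rng.normal(loc=-5.0, scale=0.3, size=(100, 2))   # one clump of data
clump_b = rng.normal(loc=+5.0, scale=0.3, size=(100, 2))   # another, far away
data = np.vstack([clump_a, clump_b])

km = KMeans(n_clusters=1, n_init=10).fit(data)
print(km.cluster_centers_)   # roughly [0, 0]: a point where there is no data at all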

You can see that because unsupervised models usually don't have a goal directly specified by an error metric, the concept is not as formalized and more conceptual.

",,Filip Mulier,,,,3/25/2021 3:06,,,,0,,,,CC BY-SA 4.0 27195,1,,,4/7/2021 15:53,,1,91,"

I am working with four grayscale images of float32 data type to perform regression using Keras. Three images are stacked using np.dstack to form an RGB data-set. The last grayscale image is used as the label. The grayscale images have different value ranges, including [0, 790.65], [150.87, 260.45], [-2.74174, 2.4126], [-32.927, 69.333].

If I convert the images to uint8, the maximum values for the first and second image will become 255 and the decimal values for all images will be lost. I am struggling to find a way to use the images in their original data type (float32) with ImageDataGenerator and flow_from_directory. Can anyone suggest a way to do that?

",46059,,,,,4/7/2021 15:53,Is it possible to use RGB image with decimal values when feeding training data to CNN?,,0,2,,,,CC BY-SA 4.0 27196,1,27252,,4/7/2021 16:05,,8,1768,"

I have read about the concept of ergodicity on the safe RL paper by Moldovan (section 3.2) and the RL book by Sutton (chapter 10.3, 2nd paragraph).

The first one says that "a belief over MDPs is ergodic if and only if any state is reachable from any other state via some policy or, equivalently, if and only if":

$$\forall s, s', \exists \pi_r \text{ such that } E_\beta E_{s, \pi_r}^P [B_{s'}] = 1$$

where:

  • $B_{s'}$ is an indicator random variable of the event that the system reaches state $s'$ at least once, i.e., $B_{s'} = 1 \{ \exists t < \infty \text{ such that } s_t = s'\}$
  • $E_\beta E_{s, \pi_r}^P[B_{s'}]$ is the expected value for $B_{s'}$, under the belief over the MDP dynamics $\beta$, policy $\pi$ and transition measure $P$.

The second one says "$\mu_\pi$ is the steady-state distribution, which is assumed to exist for any $\pi$ and to be independent of $s_0$. This assumption about the MDP is known as ergodicity.". They define $\mu_\pi$ as:

$$\mu_\pi(s) \doteq \lim_{t \to \infty} \Pr\{s_t=s \vert a_{0:t-1} \sim \pi\}$$

  • i.e., there is a chance of landing on state $s$ by executing actions according to policy $\pi$.

I noticed that the first definition requires that at least one policy exists for each $(s, s')$ pair for the MDP to be ergodic. The second definition, however, requires that all policies eventually visit all the states in an MDP, which seems to be a stricter definition.

Then, I came accross the ergodicity definition for Markov chains:

A state $i$ is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state $i$ is ergodic if it is recurrent, has a period of $1$, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic.

This leads me to believe that the second definition (the stricter one) is the most appropriate one, considering the ergodicity definition in an MDP derives from the definition in a Markov chain. As an MDP is basically a Markov chain with choice (actions), ergodicity should mean that independently of the action taken, all states are visited, i.e., all policies ensure ergodicity.

Am I correct in assuming these are different definitions? Can both still be called "ergodicity"? If not, which one is the most correct?

",22742,,2444,,9/12/2021 1:48,9/12/2021 1:48,What is ergodicity in a Markov Decision Process (MDP)?,,1,0,,,,CC BY-SA 4.0 27197,1,27215,,4/7/2021 17:37,,0,160,"

I want to use RL instead of genetic or any other evolutionary algorithm in order to find the best parameter for a function.

Here is the problem:

Given a function $$f(x,y,z, \text{data}),$$ where $x$, $y$ and $z$ are some integers from 1 to 50.

So I can say I have a 3-dimensional array which is a way to save fitness values:

$$\text{parameters} = [[1..50], [1..50], [1..50]]$$

The $\text{data}$ is another input, which $f$ needs in order to do some calculation.

Currently, I am optimizing it using a genetic algorithm with $$\text{cost}(\text{fitness}) = f(x,y,z,data)$$ which is a customized cost function.

Any value for $x$, $y$, and $z$ will result in a cost for example:

$$f(1, 5, 8, X) = 15$$

$$\text{parameters}: [1, 5, 8] = 15$$

or

$$ \text{parameters}: [2, 9, 11] = 30$$

In the provided example 2, 9, and 11 is a better set of parameters.

So I run a genetic algorithm and make some children, each with a sequence of $x$, $y$, and $z$. Then I calculate the cost (fitness), then select them, and so on.

I want to know is there any alternative or method in reinforcement learning which I can use instead of a genetic algorithm? If yes, please provide the name or any helpful link.

Note that $f$ is completely defined by the user and may be changed in other contexts.

",46067,,2444,,10/21/2021 23:37,10/21/2021 23:37,Is it possible to optimize a multi-variable function with a reinforcement learning method?,,1,3,,,,CC BY-SA 4.0 27198,1,,,4/7/2021 17:59,,0,67,"

In the paper Wrist-worn blood pressure tracking in healthy free-living individuals using neural networks, the authors talk about a combination of feed-forward and recurrent layers, as if FC layers cannot be part of the RNN.

So, must all Convolutional Neural Networks and Recurrent Neural Networks not have a fully connected layer in order to be considered CNNs and RNNs, respectively? If yes, should we consider CNNs and RNNs with an FC layer "hybrid models"?

",43113,,2444,,4/9/2021 3:31,4/9/2021 3:31,Must all CNNs and RNNs not have a fully connected layer in order to be considered as such?,,1,0,,,,CC BY-SA 4.0 27200,2,,27180,4/7/2021 20:21,,0,,"

$\mathcal S$ is just the set of all possible states. It doesn't matter if it's the agent's perceived state or the true environment state; they are within the same set of states. The agent cannot perceive itself to be in some "middle" state that's not in $\mathcal S$. It might think that it's in a state that's not the actual environment state, but that state is also in the set of all states.

To give an example, if the car can be blue or red, then the agent might think that the state is a blue car or a red car, but it cannot think that the car is purple, because that's not one of the possible states. It might wrongly think that the car is blue when the actual car is red, but that's OK, because blue is one of the possible car states. Of course, it might also correctly think that the car's state is red.

",20339,,,,,4/7/2021 20:21,,,,8,,,,CC BY-SA 4.0 27202,1,,,4/8/2021 2:13,,-2,492,"

I am currently studying reinforcement learning, especially DQN. In DQN, learning proceeds in such a way as to minimize the norm (least-squares, Huber, etc.) of the optimal Bellman equation and the approximate Q-function as follows (roughly): $$ \min\|B^*Q^*-\hat{Q}\|. $$ Here $\hat{Q}$ is an estimator of Q function, $Q^*$ is the optimal Q function, and $B^*$ is the optimal Bellman operator. $$ B^*Q^*(s,a)=\sum_{s'}p_T(s'|s,a)[r(s,a,s')+\gamma \max_{a'}Q^*(s',a')], $$ where $p_T$ is a transition probability, $r$ is an immediate reward, and $\gamma$ is a discount factor. As I understand it, in the DQN algorithm, the optimal Bellman equation is approximated by a single point, and the optimal Q function $Q^*$ is further approximated by an estimator different from $\hat{Q}$, say $\tilde{Q}$. \begin{equation}\label{question} B^*Q^*(s,a)\approx r(s,a,s')+\gamma\max_{a'}Q^*(s',a')\approx r(s,a,s')+\gamma\max_{a'}\tilde{Q}(s',a'),\tag{*} \end{equation} therefore the problem becomes as follows: $$ \min\|r(s,a,s')+\gamma\max_{a'}\tilde{Q}(s',a')-\hat{Q}(s,a)\|. $$

What I want to ask: I would like to know the mathematical or theoretical background of the approximation in \eqref{question}, especially why the first approximation is possible. It looks like a very rough approximation. Can the right-hand side be defined as an "approximate Bellman equation"? I have looked at various literature and online resources, but none of them mention an exact derivation, so I would be very grateful if you could also point me to references.

",46075,,,,,4/9/2021 14:23,Why the optimal Bellman operator of a Q-function can be approximated by a single point,,1,1,,,,CC BY-SA 4.0 27206,1,,,4/8/2021 8:04,,1,98,"

I don't understand the policy gradient as explained in Chapter-9 (Deep Reinforcement Learning) of the book Fundamentals of deep learning.

Here is the whole paragraph:

Policy Learning via Policy Gradients

In typical supervised learning, we can use stochastic gradient descent to update our parameters to minimize the loss computed from our network's output and the true label. We are optimizing the expression: $$ \arg \min _{\theta} \Sigma_{i} \log p\left(y_{i} \mid x_{i} ; \theta\right) $$ In reinforcement learning, we don't have a true label, only reward signals. However, we can still use SGD to optimize our weights using something called policy gradients. We can use the actions the agent takes, and the returns associated with those actions, to encourage our model weights to take good actions that lead to high reward, and to avoid bad ones that lead to low reward. The expression we optimize for is: $$ \arg \min _{\theta}-\sum_{i} R_{i} \log p\left(y_{i} \mid x_{i} ; \theta\right) $$ where $y_{i}$ is the action taken by the agent at time step $t$ and where $R_{i}$ is our discounted future return. In this way, we scale our loss by the value of our return, so if the model chose an action that led to negative return, this would lead to greater loss. Furthermore, if the model is very confident in that bad decision, it would get penalized even more, since we are taking into account the log probability of the model choosing that action. With our loss function defined, we can apply SGD to minimize our loss and learn a good policy.

The first expression about the loss computed in a network already seems false since the log of a probability is always negative, and taking the $θ$ (weights) for which the expression is minimal doesn't seem right because it would favor very unsure answers.

The same goes with the next expression on policy gradient. A very negative $R_i$ and very unsure $p(y_i)$ would both be big negatives and multiplied together give a big positive value. Since there is a - sign in front of the expression, this would be the best configuration for the argmin. Meaning we are looking for weights in the policy that give highly negative rewards and for highly unsure actions. This just doesn't make sense to me.

Is it just a sign error (or we could just change to argmax)? Or is there more to it?

",46084,,46084,,4/10/2021 18:43,4/10/2021 18:43,Is the policy gradient expression in Fundamentals of Deep Learning wrong?,,1,0,,,,CC BY-SA 4.0 27207,2,,27206,4/8/2021 9:23,,1,,"

There is no sign error and we should not change to $\arg\max$. With Policy Gradients I find that it is not useful to think about things such as a 'loss'.

In short, we want to first find the derivative of the RL objective $J(\theta) = v_\pi(s_0)$, where $\pi$ is our policy that depends on some parameters $\theta$. The policy gradient theorem tells us that $$\nabla_\theta J(\theta) = \mathbb{E}_{a\sim\pi, s\sim\mu}\left[ G_t \nabla_\theta \log\pi(A_t|S_t)\right]\;;$$ where $\mu$ is our state distribution induced by the policy $\pi$ and $G_t$ are the discounted returns.

Now, as we want to maximise our objective $J(\theta)$, we want to perform gradient ascent. That is, our parameters should be updated according to $$\theta_{t+1} = \theta_t + \alpha \nabla_\theta J(\theta)\;.$$

The reason that you will often see the update written as a minimisation problem is because most software is built to minimise functions, rather than maximise. This is not a problem though as finding the maximum of $f$ is just the same as finding the minimum of $-f$. Using this, we can then say our parameters can be updated using gradient descent on the negative of our objective, i.e. we get our parameter update rule to be $$\theta_{t+1} = \theta_t - \alpha \nabla_\theta (-J(\theta))\;;$$ which is exactly what you have in the book.
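As a rough sketch of how that sign flip usually looks in code (assuming the log-probabilities of the actions actually taken and the returns have already been collected):

import torch

def policy_gradient_loss(log_probs, returns):
    # log_probs: log pi(A_t | S_t) for the actions that were actually taken
    # returns:   the discounted returns G_t for the same time steps
    # Minimising this with a standard optimiser is gradient *ascent* on J(theta).
    return -(returns * log_probs).mean()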

",36821,,,,,4/8/2021 9:23,,,,0,,,,CC BY-SA 4.0 27208,1,,,4/8/2021 10:51,,2,58,"

As far as I know, all clustering algorithms assume that every delivered data point has to be assigned to some cluster.

My question is: is there an algorithm that could focus on only n clusters (a number stated by the user) and dismiss the rest of the points that (according to the algorithm) do not belong to those n clusters, as in the picture shown below? There, we know that there are, for example, 2 classes that we need to cluster (red and green), and the rest of the points (blue) do not need to be in any cluster, so the algorithm does not try to assign them to one.

For example if we would have 1 000 pictures of animals, of which 200 are dogs, 200 are cats and the rest are all other animals known to men and we want to make 1 cluster for cats, 1 for dogs and maybe another for collectively all others that do not match dogs or cats.

",22659,,22659,,4/8/2021 19:02,5/8/2021 22:06,"Is there a clustering algorithm that can make n clusters and the n+1 ""others"" cluster?",,1,0,,,,CC BY-SA 4.0 27210,2,,17202,4/8/2021 11:18,,0,,"

If the auto-encoder is converging to the same encoding for different instances, there may be a problem in the loss function. Check the size and shape of the output of the loss function, as it may be getting confused and evaluating the wrong tensors (i.e. you may need to transpose something somewhere).

Basically, assuming you are using an auto-encoder to encode $M$ features of $N$ training instances, your loss function should return $N$ values, i.e. the size of your loss tensor should be the number of instances in your training set.
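As a quick sanity check of that shape (PyTorch-style, with illustrative sizes):

import torch

N, M = 32, 10                                     # N instances with M features each
x = torch.randn(N, M)                             # inputs
x_hat = torch.randn(N, M)                         # stand-in for the auto-encoder output

per_instance = ((x_hat - x) ** 2).mean(dim=1)     # reduce over features only
print(per_instance.shape)                         # torch.Size([32]): one value per instance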

",46088,,40434,,4/11/2021 18:28,4/11/2021 18:28,,,,0,,,,CC BY-SA 4.0 27214,1,,,4/8/2021 15:43,,2,94,"

I have a series of games with the following properties:

  1. 3 or more players, but purely non-cooperative (i.e., no coalition forming);
  2. sequential moves;
  3. perfect information;
  4. deterministic state transitions and rewards; and
  5. game size is large enough to make approximate methods required (e.g. $1000^{120}$ for certain problems).

For example, Chinese Checkers, or, for a more relevant example to my work, a multi-player knapsack problem (where each player, in round-robin fashion, can choose without replacement from a set of items with the goal of maximizing their own knapsack value).

Question: what policy improvement operators or algorithms a) converge to optimality or b) provide reasonably strong results on these games?

What have I researched

  • In a small enough game (e.g., 3-person Nim with (3,4,5) starting board), full tree search is possible.
  • In a one-person setting, certain exact Dynamic Programming formulations can reduce complexity. For example, in a one-person setting with a small enough knapsack, any standard array-based approach can solve the problem. I'm unsure if or how these "cost-to-achieve" shortest path formulations carry over to multi-player games.
  • In a one-person setting, policy improvement approaches like rollout algorithms and fortified rollout algorithms have the cost improvement property. I'm unsure if this property carries over to multi-player versions.
  • Work has been done (for example, this thesis) to demonstrate that Monte Carlo Tree search strategies can generate powerful policies on the types of games I'm interested in. I believe it's been proven that they converge to Nash Equilibrium in two-player perfect information games, but am not aware of any guarantees regarding multiplayer games.
  • For imperfect information games, Monte Carlo Counterfactual Regret Minimization (e.g. here) is required for any convergence guarantees. Given I am working in a perfect information environment, these seem like overkill.
",46095,,46095,,4/9/2021 18:03,4/9/2021 18:03,"What are some strong algorithms for Perfect Information, Deterministic Multiplayer Games?",,0,0,,,,CC BY-SA 4.0 27215,2,,27197,4/8/2021 16:38,,0,,"

In order to have anything resembling reinforcement learning you must at the very least have a set of states $S$ and a set of actions $A$.

In your formulation, I can vaguely identify the set of states $S$ as all possible $(x,y,z)$ triplets, but I don't see anything in your description that could be interpreted as a set of actions $A$. Either you oversimplified the description of your problem, or reinforcement learning is not applicable here for lack of its most basic ingredients.

",20538,,,,,4/8/2021 16:38,,,,0,,,,CC BY-SA 4.0 27217,1,27229,,4/8/2021 18:41,,1,52,"

I saw a couple of architectures, like CNN-LSTM, with and without an attention model, use of GloVe vectors, self-critical models, etc. I am overwhelmed looking at the different notebooks and architectures, and came here for guidance. I am looking to build a personal project on image annotation. Also, if I wanted to use this deep learning model together with a TFX pipeline, what type of architecture would be best to go with?

",46098,,2444,,4/9/2021 13:06,4/9/2021 13:06,What would be the state of the art image captioning deep learning model?,,1,0,,,,CC BY-SA 4.0 27220,2,,27177,4/8/2021 19:40,,0,,"

Fundamentally, you are seeing a difference in behavior during training v.s. during evaluation. The most typical reason for it is the difference in behavior of some nn layers that your library (pytorch) provides, depending on the mode that you are in.

Check out documentation for torch.nn.Module.train.

Most notably nn.Dropout and nn.BatchNorm layers are prone to that.
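A minimal illustration of the two modes with a dropout layer (toy model and input):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5), nn.Linear(10, 1))
x = torch.randn(4, 10)

model.train()                  # dropout active: repeated forward passes give different outputs
print(model(x).detach().squeeze())

model.eval()                   # dropout disabled: outputs are deterministic
with torch.no_grad():
    print(model(x).squeeze())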

",20538,,,,,4/8/2021 19:40,,,,2,,,,CC BY-SA 4.0 27222,1,,,4/8/2021 20:27,,1,33,"

In most implementations of neural networks the features are scaled to make the optimization of the loss function as stable as possible. Mostly a min-max scaler is used. Alternatively, there is also a standard scaler.

Why do you calculate the mean and standard deviation offline over the complete dataset before training? Couldn't this be calculated per batch or even per file? What is the disadvantage? Why doesn't anyone do this?

",46102,,2444,,4/9/2021 13:18,4/9/2021 13:18,Why do you calculate the mean and standard deviation over the complete dataset before training rather than for every batch?,,0,1,,,,CC BY-SA 4.0 27224,2,,27198,4/8/2021 20:44,,2,,"

Not quite sure about RNN & LSTM (and it always depends on the task), but for CNN the answer is clearly no; CNN routinely include FC layers. Quoting from the highly popular (and recommended) Stanford course CS231n: Convolutional Neural Networks for Visual Recognition:

ConvNet Architectures

We have seen that Convolutional Networks are commonly made up of only three layer types: CONV, POOL (we assume Max pool unless stated otherwise) and FC (short for fully-connected).

You can easily verify that this is the case for practically all popular CNN models for computer vision in any relevant exposition (e.g. Illustrated: 10 CNN Architectures).

In fact, the all-convolutional NN (i.e. without FC layers) is considered a special case: see Striving for Simplicity: The All Convolutional Net.

",11539,,,,,4/8/2021 20:44,,,,0,,,,CC BY-SA 4.0 27227,2,,27208,4/8/2021 22:04,,1,,"

So, I've prepared some data that resembles your sketch:

import numpy as np

n , u = np.random.normal , np.random.uniform
x = np.concatenate([ n(1.0,0.2,100), n(3.0,0.3,100), u(0,10.0,100)])
y = np.concatenate([ n(7.0,0.4,100), n(5.0,0.3,100), u(0,10.0,100)])
# lets shuffle it a bit
idx = np.arange(x.shape[0])
np.random.shuffle(idx)
data = np.array([x,y])[:,idx]

And then I just tried using sklearn.mixture.GaussianMixture with n+1 = 3 components and default parameters:

from sklearn.mixture import GaussianMixture

gmm = GaussianMixture(n_components=3).fit(data.T)
cls = gmm.predict_proba(data.T).argmax(axis=1)

# Plotting
from matplotlib.pyplot import scatter
color = [['r','k','g'][i] for i in cls]
scatter(data[0],data[1],c=color, marker='.')
scatter(gmm.means_[:,0],gmm.means_[:,1],c='b',marker='o')

That seems awfully lot like the thing you've sketched.

",20538,,,,,4/8/2021 22:04,,,,1,,,,CC BY-SA 4.0 27228,1,,,4/8/2021 23:36,,1,26,"

The MLP output of a neural network is a dot product between the weights and the input and therefore can be written as $\|x\|\|w_l\|\cos(\theta_l)$ (see this for more details), where $x$ is the input, $w_l$ is the weights of layer $l$ and $\theta_l$ is the angle between them.

I read this in a paper: Angular Visual Hardness. The paper stated that it's much easier to maximize the norms $\|x\|$ and $\|w_l\|$ than the cosine similarity. Why is this the case? Just because $\cos(\theta_l)$ gives less weight because it's bounded between $[-1,1]$? Or is it due to the gradient? So, why is the norm easier to maximize?

",30885,,44413,,4/11/2021 18:29,4/11/2021 18:29,The MLP output of a neural network can be written as $\|x\|\|w_l\|\cos(\theta_l)$: why is the norm easier to maximize?,,0,0,,,,CC BY-SA 4.0 27229,2,,27217,4/9/2021 4:13,,0,,"

Here are a couple of Kaggle Kernels, Notebooks and Tutorials for Image Captioning.

",40434,,,,,4/9/2021 4:13,,,,0,,,,CC BY-SA 4.0 27231,1,,,4/9/2021 6:44,,5,301,"

I've come across the concept of fitness landscape before and, in my understanding, a smooth fitness landscape is one where the algorithm can converge on the global optimum through incremental movements or iterations across the landscape.

My question is: Does deep learning assume that the fitness landscape on which the gradient descent occurs is a smooth one? If so, is it a valid assumption?

Most of the graphical representations I have seen of gradient descent show a smooth landscape.

This Wikipedia page describes the fitness landscape.

",11105,,11105,,4/9/2021 13:18,4/9/2021 18:14,Does gradient descent in deep learning assume a smooth fitness landscape?,,3,10,,,,CC BY-SA 4.0 27233,1,,,4/9/2021 7:53,,1,599,"

I am reading AI: A Modern Approach. In the 2nd chapter when introducing different agent types, i.e., reflex, utility-based, goal-based, and learning agents, I understood that all types of agents, except learning agents, receive feedback and choose actions using the performance measure.

But they do so in different ways. Model-based reflex agents possess an internal state (like a memory), while goal-based agents predict the outcome of actions and choose the one serving the goal. Lastly, utility-based agents measure the 'happiness' of each state using the utility function, which is again an internalization of the performance measure, hence all have a similar nature overall.

The learning agents, however, can be wrapped around the entire structure of the previous agents. The entire agent's architecture is now called a performance element, and the learning agent has an additional learning element, which modifies each component of the agent so as to bring the components into closer agreement with the available feedback information. But the feedback information in learning agents does not come from the performance measure embedded in the agent's structure, but from a fixed external performance standard, which is part of the critic element*.

For the purpose of illustration, the structure of a utility-based agent and that of a learning agent are presented in the figure:

What boggles my mind is figuring out the actual difference and interaction between performance standard and performance measure, which is perhaps related to those between learning agents and other ones. Here are my thoughts thus far:

  1. Other agents aim to maximize the performance measure, causing them to take perfect actions. On the other hand, learning agents have the freedom to take sub-optimal actions, which allows them to discover better actions in the long run using the performance standard.

  2. Through the performance standard's feedback (which comes from the critic as shown in the figure), the learning agent can also learn a utility function or reflex component.

To provide examples, the book states that giving a tip to an automated taxi is considered a performance standard. It also says that

hard-wired performance standards such as pain and hunger in animals can be understood in this way.

But I am still not sure about the discrepancy and interaction between the performance measure and the performance standard. For instance, in the automated taxi, when confronting a road junction, the utility-based agent chooses the path that maximizes its utility function. The learning agent, however, must try different roads and, after testing them, it receives feedback from outside, so that eventually it detects the user's preference.

But what if we wrap a learning agent around a utility-based agent in such a condition? Which has more effect, the utility function from inside, or the performance standard from outside (critic)? If they happen to contradict each other, which one would have the prevalent effect?

",46085,,2444,,12/12/2021 20:11,1/7/2023 0:02,What is the difference between a performance standard and performance measure?,,1,0,,,,CC BY-SA 4.0 27234,2,,27154,4/9/2021 8:21,,4,,"

A famous example is AlphaZero. It doesn't do rollouts, but consults the value network for leaf-node evaluation. The paper has the details on how the update is performed afterwards:

The leaf $s'$ position is expanded and evaluated only once by the network to generate both prior probabilities and evaluation, $(P(s', \cdot), V(s')) = f_\theta(s')$. Each edge $(s, a)$ traversed in the simulation is updated to increment its visit count $N(s, a)$, and to update its action value to the mean evaluation over these simulations, $Q(s,a) = \frac{1}{N(s,a)}\sum_{s,a\to s'}V(s')$, where $s, a\to s'$ indicates that a simulation eventually reached $s'$ after taking move $a$ from position $s$.

",20538,,,,,4/9/2021 8:21,,,,0,,,,CC BY-SA 4.0 27235,1,27240,,4/9/2021 8:45,,2,809,"

Suppose that I have a DQN agent, which has two neural networks: one is the primary Q network and the other is the target Q network. In every update, the target Q network is updated with a soft update strategy:

$$Q_{target} = (1-\tau) \times Q_{target} + \tau \times Q_{prime}$$

I saved the primary Q network's weights every $n$ episodes (say $n=10$), but, unfortunately, I did not save the target Q network's weights.

Say that my training process is aborted for some reason, and now I would like to continue the training using the latest saved weights. I can load the primary Q network's weights, but what about the target Q network's weights? Should I also use the latest primary Q network's weights for the target Q network's weights, or should I use the primary Q network's weights from several episodes ago, or how should it be?

",44920,,2444,,4/9/2021 13:29,4/9/2021 13:29,How to recover the target Q network's weights solely from the snapshots of the primary Q network's weights in DQN?,,1,0,,,,CC BY-SA 4.0 27236,2,,27231,4/9/2021 8:52,,1,,"

I'm going to take the fitness landscape to be the graph of the loss function, $\mathcal{G} = \{\left(\theta, L(\theta)\right) : \theta \in \mathbb{R}^n\}$, where $\theta$ parameterises the network (i.e. it is the weights and biases) and $L$ is a given loss function; in other words, the surface you would get by plotting the loss function against its parameters.

We always assume the loss function is differentiable in order to do backpropagation, which means at the very least the loss function is smooth enough to be continuous, but in principle it may not be infinitely differentiable1.

You talk about using gradient descent to find the global minimiser. In general this is not possible: many functions have local minimisers which are not global minimisers. For an example, you could plot $y = x^2 \sin(1/x^2)$: of course the situation is similar, if harder to visualise, in higher dimensions. A certain class of functions known as convex functions satisfy the property that any local minimiser is a global minimiser. Unfortunately, the loss function of a neural network is rarely convex.

For some interesting pictures, see Visualizing the Loss Landscape of Neural Nets by Li et al.


1 For a more detailed discussion on continuity and differentiability, any good text on mathematical analysis will do, for example Rudin's Principles of Mathematical Analysis. In general, any function $f$ that is differentiable on some interval is also continuous, but it need not be twice differentiable, i.e. $f''$ need not exist.

",44413,,44413,,4/9/2021 18:14,4/9/2021 18:14,,,,2,,,,CC BY-SA 4.0 27237,2,,27231,4/9/2021 9:19,,1,,"

Main answer

To answer your question as directly as possible: No, deep learning does not make that "assumption".

But you're close. Just swap the word "assumption" with "imposition".

Deep learning sets things up such that the landscape is (mostly) smooth and always continuous*, and therefore it is possible to do some sort of optimization via gradient descent.

* quick footnotes on that bit:

  • Smoothness is a stronger condition than continuity, that's why I mention them both.
  • My statement is not authoritative, so take it with a grain of salt, especially the "always" bit. Maybe someone will debunk this in the comments.
  • The reason that I say "(mostly) smooth" is because I can think of a counter example to smoothness, which is the ReLU activation function. ReLU is still continuous though.

Further elaboration

In deep learning we have linear layers which we know are differentiable. We also have non-linear activations, and a loss function which for the intents of this discussion can be bundled with non-linear activations. If you look at papers which focus specifically on crafting new types of non-linear activations and loss functions you will usually find a discussion section that goes something like "and we designed it this way such that it's differentiable. Here's how you differentiate it. Here are the properties of the derivative". For instance, just check out this paper on ELU, a refinement on ReLU.

We don't need to "assume" anything really, as we are the ones who designate the building blocks of the deep learning network. And the building blocks are not all that complicated in themselves, so we can know that they are differentiable (or piecewise differentiable like ReLU). And for rigor, I should also remind you that the composition of multiple differentiable functions is also differentiable.

So hopefully that helps you see what I mean when I say deep learning architects "impose" differentiability, rather than "assume" it. After all, we are the architects!

",16871,,16871,,4/9/2021 9:44,4/9/2021 9:44,,,,0,,,,CC BY-SA 4.0 27238,2,,27231,4/9/2021 9:55,,-2,,"

Does deep learning assume that the fitness landscape on which the gradient descent occurs is a smooth one?

One can interpret this question from a formal-mathematical standpoint and from a more "intuitively-practical" standpoint.

From the formal point of view, smoothness is the requirement that the function is continuous with continuous first derivatives. This assumption is quite often not true in practice, mostly because of the widespread use of the ReLU activation function, which is not differentiable at zero.

From the practical point of view, though, by "smoothness" we mean that the function's "landscape" does not have a lot of sharp jumps and edges like that:

Practically, there's not much difference between having a discontinuous derivative and having derivatives making very sharp jumps.

And again, the answer is no - the loss function landscape is extremely spiky with lots of sharp edges - the picture above is an example of an actual loss function landscape.

But... why does gradient descent work, then?

As far as I know, this is the subject of an ongoing discussion in the community. There are different takes and some conflicting viewpoints that are still being debated.

My opinion is that, fundamentally, the idea that we need it to converge to the global optimum is a flawed one. Neural networks have been shown to have enough capacity to completely memorize the training dataset. A neural network that has completely memorized the training data has reached the global minimum of the (training) loss. We are not interested in such overtrained models - we want models that generalize well.

As far as I know, there are no conclusive results on which properties of the minimum are linked to the ability to generalize. People argued that these should be "flat" minima, but this was later refuted. After that, the term "wide optimum" was introduced and gave rise to an interesting technique, Stochastic Weight Averaging.

",20538,,20538,,4/9/2021 17:58,4/9/2021 17:58,,,,13,,,,CC BY-SA 4.0 27239,1,,,4/9/2021 10:14,,1,17,"

I am working on a project related to automating the procedure of manually segmenting some bones in CT scans and hopefully if everything goes alright in this stage, move on to do something more with them - like bone reconstruction etc.

I have been doing extensive research regarding this - and CNNs were something in my target as a ML method that could be used here. Emphasis is more on using Deep learning for this project.

So, what I have - the data: CT scans of chest/shoulder and for each of the CT scan, I have 4-6 STL files of the individual bone fragments or segments located in the shoulder or near shoulder region. I am a tad uncertain as to how to use those individual STL files.

Target: To label/classify/identify these fragments in the CT scan - automate it.

My MOA (Method of Approach) or what I understand - I believe it is object (bone fragment being the object) detection and feature (of those bone pieces that I need to lock on in the CT-scan) extraction using CNNs. I am looking at Mask R-CNN etc, use a pre-trained CNN for this.

But I am not entirely sure if my understanding is correct. This is my first time with this stuff, but hoping to learn more. CT-scans are in nifti format.

I could provide more info if required, would gladly appreciate any insight or help or advice with what could be the way forward and if I am thinking along the correct lines.

Thank you.

",44829,,44829,,4/9/2021 14:14,4/9/2021 14:14,Advice required for identifying bone fragments in CT-scans using STL Files (3D image segmentation),,0,0,,,,CC BY-SA 4.0 27240,2,,27235,4/9/2021 10:29,,2,,"

Let's add a step index to your expression

$$Q_{target}^{n} = (1-\tau)Q^{n-1}_{target} + \tau\, Q^{n-1}_{primary}$$

We can expand it one step further

$$Q_{target}^{n} = (1-\tau)^2Q^{n-2}_{target} + (1-\tau)\tau\, Q^{n-2}_{primary} + \tau\, Q^{n-1}_{primary}$$

And further

$$Q_{target}^{n} = (1-\tau)^3Q^{n-3}_{target} + (1-\tau)^2\tau\, Q^{n-3}_{primary} + (1-\tau)\tau\, Q^{n-2}_{primary} + \tau\, Q^{n-1}_{primary}$$

So, I guess, we can write a general formula for $m$ steps behind like:

$$Q_{target}^{n} = (1-\tau)^{m}Q^{n-m}_{target} + \tau\,\sum_{i=0}^{m-1} (1-\tau)^i Q^{n-i-1}_{primary} $$

For $m$ large enough, $(1-\tau)^{m}$ should be close to 0, and you should be able to approximately reconstruct your $Q_{target}^n$ using only the history of $Q_{primary}$ values.
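As a quick sanity check of this expansion, here is a small numerical sketch (a scalar Q value and a made-up history of primary-network values, just for illustration):

import numpy as np

tau = 0.05
rng = np.random.default_rng(0)

# fake history of primary-network values and the usual soft target updates
q_primary = rng.normal(size=200)
q_target = 0.0
for q_p in q_primary:
    q_target = (1 - tau) * q_target + tau * q_p

# reconstruct the latest target value from the last m primary values only,
# dropping the (1 - tau)^m * Q_target term since it is close to 0
n, m = len(q_primary), 150
approx = tau * sum((1 - tau) ** i * q_primary[n - i - 1] for i in range(m))

print(q_target, approx)  # the two numbers should be close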

Edit: I've missed that you only have snapshots with some step between them. This is not ideal, but a possible way out would be to use, say, a linear interpolation between snapshot points.

",20538,,20538,,4/9/2021 12:46,4/9/2021 12:46,,,,6,,,,CC BY-SA 4.0 27242,1,,,4/9/2021 14:01,,2,82,"

I am trying to formulate an argument at work saying that the pace of disruption in AI/ML is very high and that it is hard to stay "state of the art". I would like to support that hypothesis with numbers.

Question:

How many papers were published in 2018-2020 related to AI (or if that is too generic: ML)?

",12013,,,,,4/9/2021 14:01,How many papers about AI / ML were published in the recent years?,,0,1,,,,CC BY-SA 4.0 27243,2,,27202,4/9/2021 14:23,,0,,"

Both your notation and terminology are quite confusing. For example, I'm not sure what an "optimal" Bellman operator is. Here's a good clarification on the definition of a Bellman operator. Likewise, your description of the DQN algorithm completely ignores the averaging over states/actions/rewards sampled from the replay memory.

Trying to salvage your notation, I'll introduce a Bellman operator $B$ that acts on any action-value function $Q$ as:

$$B[Q(s,a)] = \mathbb{E}_{s'}\left[r(s,a,s') + \gamma \max_{a'} Q(s',a') \right]$$

And optimal Q function $Q^*$ satisfies the Bellman equation:

$$B[Q^*(s,a)] = Q^*(s,a)$$

The value-iteration algorithm iteratively applies the Bellman operator: $$Q^{n+1}(s,a) = B[Q^n(s,a)]$$ And is proven to converge to the optimal Q function: $$Q^*(s,a) = \lim_{n\to\infty}Q^{n}(s,a)$$

Now, in the DQN algorithm we are approximating Q functions with DNNs with parameters $\theta$: $Q(s,a; \theta)$. Then we approximate the value-iteration algorithm by minimizing the norm

$$\mathbb{E}_{s,a} \left\|B[Q(s,a; \theta_{i-1})] - Q(s,a; \theta_i)\right\|$$

The minimization is performed over parameters $\theta_i$ with previous parameters $\theta_{i-1}$ held fixed.

The averages $\mathbb{E}_{s,a}$ and $\mathbb{E}_{s'}$ are approximated by sampling a minibatch $MB$ of $(s,a,r,s')$ tuples from the replay memory.

I suppose that is the closest I can get to the crux of your question. You are claiming that the average in the Bellman operator application is approximated by a single point: $$B[Q(s,a)] \simeq r(s,a,s') + \gamma \max_{a'} Q(s',a')$$ While, in fact, it is approximated by $$B[Q(s,a)] \simeq \mathbb{E}_{s'\in MB} \left[ r(s,a,s') + \gamma \max_{a'} Q(s',a')\right]$$
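To make that last point concrete, here is a sketch (not the original DQN code) of how the minibatch average typically enters the loss in a PyTorch-style implementation, with the norm realized as a mean squared error; q_net, target_net and the batch tensors are assumed to exist:

import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    s, a, r, s_next, done = batch            # tensors sampled from the replay memory
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        max_q_next = target_net(s_next).max(dim=1).values
        target = r + gamma * (1 - done) * max_q_next   # per-sample estimate of B[Q]
    # the mean over the minibatch approximates the expectations over s, a and s'
    return F.mse_loss(q_sa, target)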

",20538,,,,,4/9/2021 14:23,,,,0,,,,CC BY-SA 4.0 27244,1,,,4/9/2021 19:17,,2,74,"

I'm designing an NLP model to extract various kinds of "hidden" expenses from 10-K and 10-Q financial statements. I've come up with about 7 different expense categories (restructuring costs, mergers and acquisitions, etc.) and for each one I have a list of terms/synonyms that different companies call them. I'm new to NLP and would like some advice on the best approach for extracting them.

Values are usually hidden in two different areas of the document:

Type 1: Free-form text (footnotes)

Values are nested in sentences. Here are some examples, with the Expense Type and Monetary value indicated.

Exploratory dry-hole costs were \$12.7 million, \$1.3 million, and \$1.0 million for the years ended December 31, 2012, 2011, and 2010, respectively.

2012 includes the recognition of a $3,340 million impairment charge related to the carrying value of Citi's remaining 35% interest in the Morgan Stanley Smith Barney joint venture

During the year ended December 31, 2017, we decided to discontinue the internal development of AMG 899, resulting in an impairment charge of $400 million for the IPR&D asset

Type 2: Table data

SEC statements also contain "structured" data in HTML tables. Some line items, like the first row below, correspond to the expense type I'm looking for:

Item 2020 2019 2018
impairment related to real estate assets(2): 398.2 200 0
research and development 100 200 300
other expenses 20 30 40

Correct value = 398.2


I'm thinking about a two-model approach:

  1. Define a new NER model based off the terms I already know (e.g. "dry-hole costs", "impairment charges"). I would need to manually annotate extracts from historic statements that contain these terms for the training set.

    • For free-form text, it would match the sentence and pass it on for further processing (see 2).
    • For table data, I would loop over each row using beautifulsoup and pandas, check the first column for a match (e.g. using spaCy's comparison function), and then grab that year's value from the dataframe and finish.
  2. For free-form matches, I still need to grab the monetary value for the correct year (sometimes multiple values are given for various years, see the first example above).

One potential problem here is that sentences like this would cause problems:

We gained $100 million this year, despite facing restructuring charges.

If the NLP algo is split into the above two-model process, model 1 would pass (because it contains a known term like "restructuring charges"), and model 2 would extract $100 million, which is incorrect because it doesn't actually correspond to the expense itself.
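For context, the term-matching part of model 1 that I have in mind would look roughly like this minimal sketch (assuming spaCy's PhraseMatcher; the terms and the en_core_web_sm model are just placeholders):

import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab, attr="LOWER")

# a couple of placeholder expense terms per category
terms = {"IMPAIRMENT": ["impairment charge", "impairment charges"],
         "EXPLORATION": ["dry-hole costs"]}
for label, phrases in terms.items():
    matcher.add(label, [nlp.make_doc(p) for p in phrases])

doc = nlp("2012 includes the recognition of a $3,340 million impairment charge ...")
for match_id, start, end in matcher(doc):
    print(nlp.vocab.strings[match_id], doc[start:end].text)

# candidate monetary values from the built-in NER, to be resolved by model 2
print([ent.text for ent in doc.ents if ent.label_ == "MONEY"])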

Is there a better solution here? As I said, I'm new to NLP and data extraction so would really appreciate any advice or resources to learn more about solving these types of key/value problems.

",46125,,,,,4/13/2021 4:25,"Extracting ""hidden"" costs from financial statements using NLP",,0,3,,,,CC BY-SA 4.0 27246,1,,,4/9/2021 21:49,,1,73,"

I want to get some encodings for temporal data (with a highly varying number of timesteps).

The dataset is of the format: array<TemporalSample = list, SAMPLE_COUNT> (where array is fixed size and list is variable).

The TemporalSamples are simply lists of size TemporalSample::timesteps


Currently, I use a standard RNN network of the form:

from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()
model.add(layers.GRU(256, dropout=0.1, input_shape=[None, 1], return_sequences=False))
model.add(layers.Dense(len(output_names), activation="softmax"))
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()

The problem is my inputs have variable lengths; if I want an auto-encoder, my output needs to be variable-length (just like my input), and I need an inverse RNN layer (something like layers.InverseGRU(output_shape=[None, 1])), but from my reading this does not seem to be something that has been considered/done before.

Is this at all possible?

",37690,,37690,,4/11/2021 2:04,4/11/2021 2:04,Is a true RNN auto encoder possible with Keras/TF,,0,10,,,,CC BY-SA 4.0 27248,2,,27190,4/10/2021 6:18,,1,,"

It seems this paper defines a "clock period ${T}_n$" that it uses to express the topology of the network: "Each module is internally fully-interconnected, but the recurrent connections from module j to module i exists only if the period $T_i$ is smaller than period $T_j$.".

This definition is, however, only in this paper, as far as I know. It wouldn't make sense to have a "clock period" (or "clock rate") in other, more self-similar, RNNs. However, "time t" is something that is defined to express the number of times the RNN has recursed, "t times".

",14892,,,,,4/10/2021 6:18,,,,2,,,,CC BY-SA 4.0 27249,1,,,4/10/2021 9:26,,2,326,"

For a network of the form:

Input(10)
Dense(200)
Dense(100+10)
Dense(20)
Output()

Those +10 outputs are what I want to add to the standard 20 outputs, for my loss function.

Is this possible - in theory or even with some pre-existing library?

",37690,,2444,,4/14/2021 10:07,4/14/2021 10:07,Is it possible to use an internal layer's outputs in a loss function?,,1,0,,,,CC BY-SA 4.0 27250,1,27255,,4/10/2021 10:14,,2,71,"

If I have an image like this

1 2 3 4 5 6 7 8
a b c d e f g h
...

And I apply a Haar-like feature with a template

1 1 1 1 
-1 -1 -1 -1

Then in the first position we get X1 = 1+2+3+4+a+b+c+d. If we slide one step to the right, we again get X2 = 2+3+4+5+b+c+d+e.

This way we will get X1 and X2 and X3 and so on. Now, how are these values combined to get the feature? Because when we say a feature we are not just running that template in one place, rather we will run it over multiple places in the image. It gives lots of values like X1,X2 and X3 and so on. Now, how are those combined to get the final feature which will be passed to Adaboost?

",46139,,2444,,4/11/2021 1:43,4/11/2021 1:43,"Viola-Jones algorithm: Haar-like features, how are the features extracted?",,1,0,,,,CC BY-SA 4.0 27252,2,,27196,4/10/2021 11:26,,7,,"

In short, the relevant class of a MDPs that guarantees the existence of a unique stationary state distribution for every deterministic stationary policy are unichain MDPs (Puterman 1994, Sect. 8.3). However, the unichain assumption does not mean that every policy will eventually visit every state. I believe your confusion arises from the difference between unichain and more constrained ergodic MDPs.

Puterman defines that a MDP is (emphasis in the following mine):

  • recurrent or ergodic if the state transition matrix corresponding to every deterministic stationary policy consists of a single recurrent class, and
  • unichain, if the state transition matrix corresponding to every deterministic stationary policy is unichain, that is, it consists of a single recurrent class and a possibly empty set of transient states.

Before unpacking this further, let's first recap what recurrent and transient states in a Markov chain are. The following definitions can be found in Puterman, Appendix A.2. For a state $s$, associate two random variables $\nu_s$ and $\tau_s$ that represent the number of visits and the time of the first visit (or first return if the chain starts in $s$) to state $s$. A state $s$ is

  • recurrent (sometimes also called ergodic), if $P_s(\tau_s < \infty) = 1$, that is, the state is eventually visited (or returned to), and
  • transient, if $\mathbb{E}[\tau_s] = \infty$ and $P_s(\tau_s < \infty) < 1$.

It is also true that $s$ is recurrent if and only if $\mathbb{E}[\nu_s] = \infty$, i.e., it is visited infinitely often, and $s$ is transient if and only if $\mathbb{E}[\nu_s] < \infty$, i.e., it is visited only finitely often (and thereby never again after some finite time).

So let's now return to the two types of MDPs above. Consider an arbitrary deterministic stationary policy $\pi$ which maps any state $s$ to an action $a = \pi(s)$. If the MDP is ergodic, then the stationary distribution $\mu_\pi(s)$ exists and is unique, because the Markov chain over states induced by any policy has a single recurrent class (it does not matter in which state $s_0$ the chain starts, the same stationary distribution is reached). There is a single class of recurrent states, i.e., all states are recurrent, therefore any $s$ is visited infinitely often and $\mu_\pi(s) > 0$.

Now, if the MDP is unichain, then once again the stationary distribution $\mu_\pi(s)$ exists and is unique, because the Markov chain over states induced by any policy has a single recurrent class. But, importantly, there may exist a policy $\pi$ for which the set of transient states in the induced Markov chain is not empty. Because any transient state $s$ will only be visited a finite number of times, in the (infinite horizon) stationary distribution $\mu_\pi(s)=0$!

So indeed, if the MDP is of the stricter ergodic type, every policy will eventually visit every state. This is not true for unichain MDPs however.

A final remark: some authors define a policy as ergodic (e.g., Kearns & Singh, 2002), if the resulting Markov chain over states is ergodic (i.e., has a unique stationary distribution). The unichain MDP is a type of MDP where every policy is ergodic.

References:

Puterman, M. L. (1994). Markov Decision Processes: Discrete Stochastic Dynamic Programming.

Kearns & Singh. Near-Optimal Reinforcement Learning in Polynomial Time. Machine Learning, 49, 209–232, 2002

",45529,,45529,,4/10/2021 11:55,4/10/2021 11:55,,,,4,,,,CC BY-SA 4.0 27253,2,,27010,4/10/2021 12:29,,0,,"

One neuron on its own can only solve linearly separable problems. You need a combination of neurons to solve non-linearly separable problems.

For the XOR case, you need at least 2 neurons at the first layer, and 1 neuron at the output layer to properly classify it.

Keep in mind that, sometimes, the 3-neurons network might get stuck in a local minimum as well. You will need some luck in the random initialization of weights. Using the right seed during the random initialization of weights can help converge, and some other seed will only result in a stuck network.

",46142,,2444,,12/12/2021 8:54,12/12/2021 8:54,,,,0,,,,CC BY-SA 4.0 27254,1,,,4/10/2021 15:24,,2,231,"

Is there a general guideline on how the Transformer model parameters should be selected, or the range of these parameters that should be included in a hyperparameter sweep?

  • Number of heads
  • Number of encoder & decoder layers
  • Size of transformer model (d_model in Pytorch)
  • Size of hidden layers

Are there general guidelines like number of decoder layers should be equal to encoder layers? Thank you

",37519,,,,,4/10/2021 15:24,"How to Select Model Parameters for Transformer (Heads, number of layers, etc)",,0,0,,,,CC BY-SA 4.0 27255,2,,27250,4/10/2021 18:30,,2,,"

I would look at table 1 of the original paper. While you're reading the algorithm, try to really focus on Step 2 when you get to it.

In summary, each feature is used to train its own classifier. So in your example, the calculated features X1, X2, ... Xn you describe correspond to applying some set of feature transforms f_1, f_2, ... f_n to a single image. This is a bit backwards from what actually happens. What the method really does is train a classifier for each feature. So if you had n features, you would have n classifiers. Then, in AdaBoost fashion, you upweight the classifier that performed the best. I.e., you are upweighting the classifier based solely on the best performing feature. You then repeat and re-weight all the classifiers until you reach convergence.
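To illustrate the idea (a toy sketch only, not the actual Viola-Jones implementation): each scalar feature gets its own threshold ("decision stump") classifier, and the best one on the currently weighted samples is kept at each boosting round.

import numpy as np

def best_stump(features, labels, sample_weights):
    # features: (n_samples, n_features); labels in {-1, +1}
    best = None
    for j in range(features.shape[1]):            # one candidate classifier per feature
        for thr in np.unique(features[:, j]):
            for sign in (+1, -1):
                pred = sign * np.sign(features[:, j] - thr)
                err = np.sum(sample_weights[pred != labels])
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best   # (weighted error, feature index, threshold, polarity)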

",17408,,17408,,4/10/2021 18:37,4/10/2021 18:37,,,,5,,,,CC BY-SA 4.0 27256,1,,,4/10/2021 21:21,,1,34,"

My understanding is that when a model starts overfitting, it no longer learns useful features and starts remembering the training data set. Given enough epochs and sufficient parameters, a model can overfit any arbitrary dataset. My question is: what does the loss landscape look like for the training dataset vs the test dataset when a model is overfitting? Is there a weird dip around some point in the training dataset that is not there in the test dataset?

",38618,,2444,,4/11/2021 1:49,4/11/2021 1:49,How does the loss landscape look like or change when a model is overfitting?,,0,0,,,,CC BY-SA 4.0 27257,1,27258,,4/10/2021 22:13,,2,621,"

I'm working on a neural network that plays some board games like reversi or tic-tac-toe (zero-sum games, two players). I'm trying to have one network topology for all the games - I specifically don't want to set any limit for the number of available actions, thus I'm using only a state value network.

I use a convolutional network - some residual blocks inspired by Alpha Zero, then global pooling and a linear layer. The network outputs one value between 0 and 1 for a given game state - its value.

The agent, for each possible action, chooses the one that results in a state with the highest value, it uses the epsilon greedy policy.

After each game I record the states and the results and create a replay memory. Then, in order to train the network, I sample from the replay memory and update the network (if the player that made a move that resulted in the current state won the game, the state's target value is 1, otherwise it's 0).

The problem is that after some training, the model plays quite well as one of the players, but loses as the other one (it plays worse than the random agent). At first, I thought it was a bug in the training code, but after further investigation it seems very unlikely. It successfully trains to play vs a random agent as both players, the problem arises when I'm using only self play.

I think I've found some solution to that - initially I train the model against a random player (half of the games as the first player, half as the second one), then when the model has some idea what moves are better or worse, it starts training against itself. I achieved pretty good results with that approach - in tic-tac-toe, after 10k games, I have 98.5% win rate against the random player as the starting player (around 1% draws), 95% as the second one (again around 3% draws) - it finds a nearly optimal strategy. It seems to work also in reversi and breakthrough (80%+ wins against random player after the 10k games as both players). It's not perfect, but it's also not that bad, especially with only 10k games played.

I believe that, when training with self play from the beginning, one of the players gains a significant advantage and repeats the strategy in every game, while the other one struggles with finding a counter. In the end, the states corresponding to the losing player are usually set to 0, thus the model learns that whenever there is the losing player's turn it should return a 0. I'm not sure how to deal with that issue, are there any specific approaches? I also tried to set the epsilon (in eps-greedy) initially to some large value like 0.5 (50% chance for a random move) and gradually decrease it during the training, but it doesn't really help.

",46169,,40434,,4/11/2021 11:03,4/24/2021 3:08,How to fight with unstability in self play?,,2,0,,,,CC BY-SA 4.0 27258,2,,27257,4/10/2021 22:29,,3,,"

The AlphaZero paper mentions an "evaluation" step that seems to deal with a problem similar to yours:

... we evaluate each new neural network checkpoint against the current best network $f_{\theta_*}$ before using it for data generation ... Each evaluation consists of 400 games ... If the new player wins by a margin of > 55% (to avoid selecting on noise alone) then it becomes the best player $\alpha_{\theta_*}$ , and is subsequently used for self-play generation, and also becomes the baseline for subsequent comparisons

In AlphaStar, they used a whole league of agents that constantly played against each other.
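In code, the AlphaZero-style evaluation gate quoted above amounts to something like the following sketch (play_match is a hypothetical helper returning the new checkpoint's win rate against the current best network over n games):

def maybe_promote(new_net, best_net, n_games=400, threshold=0.55):
    win_rate = play_match(new_net, best_net, n_games)   # hypothetical helper
    if win_rate > threshold:      # only promote if clearly better, to avoid selecting on noise
        return new_net            # becomes the network used for self-play data generation
    return best_net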

",20538,,,,,4/10/2021 22:29,,,,0,,,,CC BY-SA 4.0 27259,1,,,4/10/2021 22:44,,0,69,"

Given the model:

Sequence([
GRU(200, input_shape=(None,100), return_sequences=False)
])

Which maps the space (None, 100) -> (200,)

Is there an InverseGRU such that it maps the space (200,) -> (None, 100)

or, at least, is it possible to simulate this behaviour?

",37690,,,,,4/10/2021 22:44,Are there any inverse RNN layers?,,0,2,,,,CC BY-SA 4.0 27260,1,,,4/10/2021 23:05,,20,992,"

Adding BatchNorm layers improves training time and makes the whole deep model more stable. That's an experimental fact that is widely used in machine learning practice.

My question is - why does it work?

The original (2015) paper motivated the introduction of the layers by stating that these layers help fix "internal covariate shift". The rough idea is that large shifts in the distributions of inputs of inner layers make training less stable, forcing a decrease in the learning rate and slowing down the training. Batch normalization mitigates this problem by standardizing the inputs of inner layers.
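For reference, the standardization that a BatchNorm layer applies at training time looks like this (a minimal numpy sketch; gamma and beta are the learned scale and shift):

import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                    # statistics computed over the batch dimension
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardize each feature
    return gamma * x_hat + beta            # then rescale and shift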

This explanation was harshly criticized by the next (2018) paper -- quoting the abstract:

... distributional stability of layer inputs has little to do with the success of BatchNorm

They demonstrate that BatchNorm only slightly affects the inner layer inputs distributions. More than that -- they tried to inject some non-zero mean/variance noise into the distributions. And they still got almost the same performance.

Their conclusion was that the real reason BatchNorm works was that...

Instead BatchNorm makes the optimization landscape significantly smoother.

Which, to my taste, is slightly tautological - not much more than saying that it improves stability.

I've found two more papers trying to tackle the question: In this paper the "key benefit" is claimed to be the fact that Batch Normalization biases residual blocks towards the identity function. And in this paper that it "avoids rank collapse".

So, is there any bottom line? Why does BatchNorm work?

",20538,,2444,,11/22/2021 13:54,4/3/2022 5:55,Why does Batch Normalization work?,,5,1,,,,CC BY-SA 4.0 27261,1,,,4/11/2021 0:39,,1,26,"

On page 34 of OpenAI's GPT-3, there is a sentence demonstrating the limitation of objective function:

Our current objective weights every token equally and lacks a notion of what is most important to predict and what is less important.

I am not sure if I understand this correctly. In my understanding, the objective function is to maximize the log-likelihood of the token to predict given the current context, i.e., $\max L \sim \sum_{i} \log P(x_{i} | x_{<i})$. Although we aim to predict every token that appears in the training sentence, the tokens have a certain distribution, and therefore we do not actually assign equal weight to every token in loss optimization.

And what would be an example of a model getting the notion of "what is important and what is not"? What does the importance refer to here? For example, does it mean that "the" is less important compared to a less common noun, or does it mean that "the current task we are interested in is more important than the scenario we are not interested in"?

Any idea how to understand the sentence by OpenAI?

",42445,,2444,,4/11/2021 2:01,4/11/2021 2:01,"What is the meaning of ""Our current objective weights every token equally and lacks a notion of what is most important to predict"" in the GPT-3 paper?",,0,0,,,,CC BY-SA 4.0 27262,1,,,4/11/2021 3:17,,2,46,"

Background

I'm building a binary classification model for a pair match problem using CNN, e.g. whether person A1 likes product B1 or not. Model input features are sequence features (text descriptions) of the person and the product. The model accuracy is around 78%. So for a new person, the model can predict the probability whether he likes each product in our dataset.

Problem

The model is good if we know nothing about the person. However, in the real scenario, we already know the new person likes one or two products. We want to predict whether he likes other products. Is there any way to incorporate this prior information to improve the model?

My thought

A simple method would be to just retrain the model, giving the new person's pairs a higher sample weight. But we can't do this for each new person.

Any suggestion would be appreciated. Thanks

",46171,,46171,,6/21/2021 2:42,6/21/2021 2:42,How to add prior information when predicting using deep learning models?,,0,0,,,,CC BY-SA 4.0 27264,1,,,4/11/2021 6:33,,0,237,"

I trained different classification models using Keras with different numbers of hidden layers and the same number of neurons in each layer. What I found was that the accuracy of the models decreased as the number of hidden layers increased. However, the decrease was more significant for larger numbers of hidden layers. The accuracies refer to the test data and were obtained using k-fold=5. Also, no regularization was used. The following graph shows the accuracies of different models where the number of hidden layers changed while the rest of the parameters stayed the same (each model has 64 neurons in each hidden layer):

My question is why is the drop in accuracy between 8 hidden layers and 16 hidden layers much greater than the drop between 1 hidden layer and 8 hidden layers, even though the difference in the number of hidden layers is the same (8).

",46145,,46145,,4/11/2021 13:43,4/11/2021 18:31,Do larger numbers of hidden layers have a bigger effect on a classification model's accuracy?,,2,0,,,,CC BY-SA 4.0 27265,2,,27264,4/11/2021 7:51,,1,,"

In your case, the most probable explanation would be overfitting. A model with too many hidden layers has lots of parameters. By means of all these parameters, the model is remembering stuff from the training data itself instead of generalizing by learning the useful patterns.

As a rule of thumb, if you increase the number of hidden layers more and more, at some point the model will perform poorly. (I am assuming there is non-linearity in between. In case there is no non-linearity, it doesn't matter how much you stack, it would give the same result because it just boils down to one single layer.)

As an experiment, you can try to add regularization and you will see the model won't perform that badly, because now the model is penalized for being too confident about the things it is remembering. As a result, it won't overfit to the training data.

",46176,,,,,4/11/2021 7:51,,,,0,,,,CC BY-SA 4.0 27266,1,27267,,4/11/2021 7:52,,1,283,"

I am currently writing my bachelor thesis, which is an implementation of proximal policy optimization. Sometimes, I hit a wall because of the gaps in my mathematical knowledge. However, implementing the algorithm helped me to understand the math behind the algorithm.

Unfortunately, I still have a question.

When the action space is continuous, I am using the normal distribution (same as in the PPO implementation by Spinning Up). In the mentioned implementation, the logarithm of the standard deviation is used initially to give the same probability to all of the possible actions, then the standard deviation is used when choosing an action. Why do we use the logarithm? Why not directly use the standard deviation?

I know that the logarithm is easier when it comes to the computations, but I can not see the benefits of the logarithm in the Spinning up implementation.

class MLPGaussianActor(Actor):

    def __init__(self, obs_dim, act_dim, hidden_sizes, activation):
        super().__init__()
        log_std = -0.5 * np.ones(act_dim, dtype=np.float32)
        self.log_std = torch.nn.Parameter(torch.as_tensor(log_std))
        self.mu_net = mlp([obs_dim] + list(hidden_sizes) + [act_dim], activation)

    def _distribution(self, obs):
        mu = self.mu_net(obs)
        std = torch.exp(self.log_std)
        return Normal(mu, std)

    def _log_prob_from_distribution(self, pi, act):
        return pi.log_prob(act).sum(axis=-1)    # Last axis sum needed for Torch Normal distribution
",46175,,2444,,4/11/2021 12:19,4/11/2021 12:19,Why is the logarithm of the standard deviation used in this implementation of proximal policy optimization?,,1,0,,,,CC BY-SA 4.0 27267,2,,27266,4/11/2021 8:09,,0,,"

As you mentioned, using the log is nicer because it turns multiplications into additions, etc. (it helps with numerical stability issues). But I think here the reason they do it like that is to enforce a simple constraint in a much simpler way. Notice that in __init__ the log_std is parameterized instead of the std itself. Would it be wrong to parameterize the std itself? No. But it would be a bit messy to have a constraint imposed on it: if we modeled the std directly, it could become 0 or negative, which is not a valid value for a std. Here that constraint is enforced automatically, and this kind of little thing makes learning easier for the model. Even if the log_std becomes negative or zero, it does not matter, because the exp will take care of it by mapping it to a positive number.
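A tiny illustration of that last point - whatever real value the parameter takes, the resulting standard deviation is strictly positive:

import torch

log_std = torch.tensor([-2.0, 0.0, 1.5])
print(torch.exp(log_std))   # tensor([0.1353, 1.0000, 4.4817]) -- always > 0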

",46176,,,,,4/11/2021 8:09,,,,1,,,,CC BY-SA 4.0 27268,1,,,4/11/2021 10:37,,0,107,"

I have implemented the simple Q-Learning based solution for AI-gym's Cartpole-v0.

However, despite changing hyper-parameters, and rechecking my code, I cannot get an average reward (N-running reward) of more than 30. My question is, is it not possible to get successful completion of Cartpole without using sophisticated algorithms such as Deep learning etc.?

I am glad to share my code, but I am sure no one would have time to check it.

PS. I know there are many implementations out there and I have learned from them, but I want to implement my own code for learning purposes and do not just want to copy-paste.

PSS (Edit): I have added the code in the answer to this question for reference.

",36710,,36710,,4/12/2021 2:35,5/12/2021 4:07,Is is not possible to achieve average reward of more than 20-40 with simple Q-Learning,,1,5,,,,CC BY-SA 4.0 27269,1,27275,,4/11/2021 12:25,,2,233,"

I do not understand the link of importance sampling to Monte Carlo off-policy learning.

We estimate a value using sampling on whole episodes, and we take these values to construct the target policy.

So, it is possible that in the target policy, we could have state values (or state action values) coming from different trajectories.

If the above is true, and if the values depend on the subsequent actions (the behavior policy), there is something wrong there, or else, better, something I do not understand.

Linking this question with importance sampling, do we use this ro value to correct this inconsistency?

Any clarification is welcome.

",33566,,2444,,4/11/2021 13:23,4/11/2021 22:17,With Monte Carlo off-policy learning what do we correct by using importance sampling?,,2,0,,,,CC BY-SA 4.0 27270,2,,27269,4/11/2021 13:14,,3,,"

We estimate a value using sampling on whole episodes, and we take this values to construct the target policy.

The crucial bit that you are missing is that there is no single value of $V(s)$ (or $Q(s,a)$) of a state (or a state action pair). These value functions are always defined with respect to some policy $\pi(a|s)$ and is given the notation of $V^{\pi}(s)$ (or $Q^{\pi}(s,a)$).

The off-policy learning problems are arising when you have two policies: the generation policy $\mu(a|s)$ and the target policy $\pi(a|s)$. Your MC sampling data came from an agent following $\mu$, while you want to improve your target policy $\pi$. It is pretty straightforward from here that you'd need to weight your calculations with factors like $\frac{\pi(a_i|s_i)}{\mu(a_i|s_i)}$ - that's what importance sampling is.

",20538,,36821,,4/11/2021 22:17,4/11/2021 22:17,,,,0,,,,CC BY-SA 4.0 27273,2,,27249,4/11/2021 17:46,,2,,"

Yes, you can do that, and it is a standard practice. One famous example is the "Inception" network architecture. To keep inner subnets from "dying out", several outputs from inner layers are extracted and passed into FC->Softmax. Then all the outputs are averaged in the loss function.

From a practical point of view, you won't be able to implement such things with the basic Sequential model construction. Most libraries allow you to move beyond it, though, and this distinction is usually well-documented. For example, the TensorFlow guide on the functional API states right in the introduction:

The functional API is a way to create models that are more flexible than the tf.keras.Sequential API. The functional API can handle models with non-linear topology, shared layers, and even multiple inputs or outputs.
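As a rough illustration (this is only a sketch, not the Inception architecture; my_inner_loss is a hypothetical auxiliary loss), the functional API lets you expose an inner layer as an additional output and attach a loss term to it:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def my_inner_loss(y_true, y_pred):          # hypothetical loss on the inner activations
    return tf.reduce_mean(tf.square(y_pred))

inputs = keras.Input(shape=(10,))
h1 = layers.Dense(200, activation="relu")(inputs)
h2 = layers.Dense(110, activation="relu")(h1)      # the inner layer we also want in the loss
out = layers.Dense(20, activation="softmax")(h2)

model = keras.Model(inputs=inputs, outputs=[out, h2])
model.compile(optimizer="adam",
              loss=["categorical_crossentropy", my_inner_loss],
              loss_weights=[1.0, 0.3])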

Hope that'll help you to get started.

",20538,,,,,4/11/2021 17:46,,,,0,,,,CC BY-SA 4.0 27274,2,,27264,4/11/2021 18:31,,0,,"

In general, yes.

Stacking more layers and adding non-linearities will form a better function approximation (neural nets are basically function approximators), and when trained with the current regularization for each layer (such as L2 or L1) will cause your model to learn a better mapping, and hence generalize better.

If you don't regularize, it will overfit.

They are overparameterized, but why they don't overfit more (or in other words, why they generalize so well to unseen data) with increasing number of parameters is an effect that is even yet to be understood by the ML theory community [1]

[1] - https://arxiv.org/abs/1806.11379

",36474,,,,,4/11/2021 18:31,,,,0,,,,CC BY-SA 4.0 27275,2,,27269,4/11/2021 20:19,,0,,"

Recall that the definition of a value function is $$v_\pi(s) = \mathbb{E}\left[G_t | S_t = s\right]\;.$$ That is, the expected future returns given from state $s$ at time $t$ when we follow our policy $\pi$ -- i.e. our trajectory is generated according to $\pi$.

Using Monte Carlo methods we typically will estimate our value function by looking at the empirical mean of rewards we see throughout many training episodes, i.e. we will generate many episodes, keep track of all the rewards we see from state $s$ onwards across all of our episodes (this may be the first visit method or the all visit methods) and use these to approximate the expectation that is our value function.

The key here is that to approximate the value function in this way, then the episodes must be generated according to our policy $\pi$. If we choose the actions in an episode according to some other policy $\beta$ then we cannot use these episodes to approximate the expectation directly. As an example, this would be like trying to approximate the mean of a Normal(0, 1) distribution with data drawn from a Normal(10, 1) distribution.

To account for the fact that the actions came from a different distribution, we have to reweight the returns according to an importance sampling ratio. To see why we need importance sampling, see this question/answer.

",36821,,,,,4/11/2021 20:19,,,,1,,,,CC BY-SA 4.0 27278,2,,27268,4/12/2021 2:34,,-1,,"

The code to my question is as below, for reference:

import gym
import numpy as np
import matplotlib.pyplot as plt

# Discretize the continuous space
DISCRETE_POINTS = 50
X_position = np.linspace(-2.4, 2.4, DISCRETE_POINTS)
Velocity = np.linspace(-5, 5, DISCRETE_POINTS)
Angle = np.linspace(-0.7295476, 0.7295476, DISCRETE_POINTS)
Angular_vel = np.linspace(-5,5,DISCRETE_POINTS)

# Fit the instantaneous state into any discrete box
def get_state(_state):
    
    X, X_bar, Angle_, Angle_bar = _state
    
    X = int(np.digitize(X, X_position))
    X_bar = int(np.digitize(X_bar, Velocity))
    Angle_ = int(np.digitize(Angle_, Angle))
    Angle_bar = int(np.digitize(Angle_bar, Angular_vel))
    
    return (X, X_bar, Angle_, Angle_bar)

def epsilon_greedy_action(s, epsilon):
    
    '''   Input argument: state 's' tuple in the form (4,0,1,0)    '''
    # if np.random.uniform() < epsilon:
    if np.random.random() > epsilon:
        a = env.action_space.sample()
    else: 
        _,a = find_maxQ_value(s)
    
    return a

def find_maxQ_value(state):
    '''
    Input argument: 
    state: should be a tuple of form (0,0,0,0) or (1,0,0,1) etc.
    
    Output argument:
    best_value: best q-value of the current state-action pair
    choosen_action: best action corresponding to current state. It depends on the best q-value
    '''
    
    for act_ in range(env.action_space.n):
        A = [Q[state,0], Q[state, 1]]
        best_value = np.max(A)
        choosen_action = np.argmax(A) 
    
    return best_value, choosen_action
 

def plotRunningAverage(totalrewards, N, n_avg):
  
    running_avg = np.empty(N)
    for t in range(N):
        running_avg[t] = np.mean(totalrewards[max(0, t-N):(t+1)])
    return running_avg 


if __name__ == '__main__':

    env = gym.make('CartPole-v0');
    
    EPISODES = 1000;
    no_actions = env.action_space.n
    
    # Hyper parameters
    alpha = 0.001 # Learning rate
    gamma = 0.99 #Discount Factor
    epsilon = 1 # For Epsilon-Greedy algorithm
    epsilon_decay_factor = 0.99;
    min_epsilon = 0.1;    

    states = []
    for i in range(len(X_position)):
        for j in range(len(Velocity)):
            for k in range(len(Angle)):
                for l in range(len(Angular_vel)):
                    states.append((i,j,k,l))
                    
    #Initialize Q-table : 
    # 1. We make the Q-table in form of a dictionary
    # 2. We initialize Q-table values as zero in this
    Q = {}
    for s in states:
        for n_a in range(no_actions):
            Q[s, n_a] = 0
    
    Running_reward = [];
    l_action_cnt = 0;
    r_action_cnt = 0;
    wrong_action =0;
    
    #Q-Learning agent episodes    
    for e in range(EPISODES):
            
        cn_state = env.reset()
        ds_state = get_state(cn_state)
        
        done = False
        ep_reward = 0
        ep_len = 0
         
        while not done:
            
            action = epsilon_greedy_action(ds_state, epsilon)   
            if action == 0:
                l_action_cnt+=1
            elif action == 1:
                r_action_cnt+=1;
            else:
                wrong_action+=1;
                
            cn_next_state, reward , done , ep_len = env.step(action)
            ep_reward += reward
            
            ds_next_state = get_state(cn_next_state)
            
            # Update the Q-table based on the action
            Val_Q_bar, _ = find_maxQ_value(ds_next_state);
             
            Q[ds_state, action] = (1-alpha)*Q[ds_state, action] + alpha*(reward + gamma*Val_Q_bar)
            
            ds_state = ds_next_state;
         
        Running_reward.append(ep_reward)
        
        if e%100 == 0:
            print('Episode : {}, Episode reward: {}, Epsilon: {}'.format(e, ep_reward, epsilon))
                
        if epsilon >= min_epsilon:
            epsilon*=epsilon_decay_factor;

plt.plot(Running_reward);
plt.xlabel('episodes')
plt.ylabel('episodic reward')
plt.grid('ON') 
running_avg = plotRunningAverage(Running_reward, EPISODES, 50)
plt.plot(running_avg);
plt.legend(['Episodic rewards', '50-Episode moving-average reward'])

print('The ratio of left to right action is : {}'.format(l_action_cnt/r_action_cnt))
",36710,,,,,4/12/2021 2:34,,,,0,,,,CC BY-SA 4.0 27282,1,27284,,4/12/2021 8:36,,0,752,"

I have a relatively small data set comprised of $3300$ data points, where each data point is a $13$-dimensional vector: the first $12$ dimensions depict a "category" by taking the form $[0,...,1,...,0]$, where the $1$ is in the $i$-th position for the $i$-th category, and the last dimension is an observation of a continuous variable, so typically one data point would be $[1,...,0,70.05]$. I'm not trying to have something extremely accurate, so I went with a fully connected network with two hidden layers, each comprising two neurons; the activation functions are ReLUs, with one neuron at the output layer because I'm trying to predict one value, and I didn't put any activation function on it. The optimizer is ADAM, the loss is the MSE, while the metric is the RMSE.

I get this learning curve below: Even though at the beginning the validation loss is lower than the training loss (which I don't understand), I think at the end it shows no sign of overfitting.

What I don't understand is why my neural network predicts the same value as long as the $13$-th dimension takes values greater than $5$, and that value is $0.9747201$. If the $13$-th dimension takes, for example, $4.9$, then the prediction would be $1.0005863$. I thought that it had something to do with the ReLU, but even when I switched to Sigmoid, I have this "saturation" effect. The value is different, but I still get the same value when I pass a certain threshold.

EDIT: I'd also like to add that I get this issue even when normalizing the 13th dimension (subtracting the mean and dividing by the standard deviation).

I'd like to add that all the values in my training and validation set are at least greater than $50$ if that may help.

",44965,,44965,,4/13/2021 3:59,4/13/2021 3:59,Why my Fully Connected Neural Network outputs the same prediction?,,1,0,,,,CC BY-SA 4.0 27284,2,,27282,4/12/2021 9:38,,1,,"

two hidden layers each comprising two neurons

From your description it looks like you only have 6 parameters for your inner layer (2x2 weight matrix + 2 biases). The whole network should be easy to interpret: you've got two 13-dimensional weight vectors $\vec{w}_1,\vec{w}_2$ that are dot-multiplied with the inputs, plus two biases $b$ and activation $\sigma$:

$$ l_1 = \sigma\left(\vec{w}_1\vec{x} + b_1\right)$$ $$ l_2 = \sigma\left(\vec{w}_2\vec{x} + b_2\right)$$

Then these two values are multiplied by 2x2 matrix + biases, then activation and linear combination.

I'd look at how $l_1$ and $l_2$ are distributed. The fact that the outputs don't change is most likely due to the first layer getting saturated somehow. Look at the 13th dimension of $\vec{w}_i$ - it is likely to be large compared to other dimensions.

The first thing I'd try is standardizing your input's 13th dimension, so it is distributed closer to the $[0,1]$ (or $[-1,1]$) range.

",20538,,,,,4/12/2021 9:38,,,,8,,,,CC BY-SA 4.0 27285,1,27291,,4/12/2021 9:47,,1,159,"

I'm currently training a deep q-learning network. Due to resource limitations, I am not able to train the model to the desired performance in one go. So what I'm doing now is training the model for a certain number of episodes and save the resultant model. Later, I will load up the previously saved model and resume training.

However, what I'm noticing is that when training resumes, the average reward goes back to being very low again, compared to what was achieved at the end of the previous training session. What I'm currently doing is to load up the previously saved model into the prediction and target models, and I keep all hyperparameters unchanged.

  • Is this behaviour expected?
  • If not, how do I properly resume training of a deep q-learning network?
  • Do I start off with the epsilon value from the end of the previous session? Currently I reinitialize that as well.
",46209,,40434,,4/13/2021 4:45,4/13/2021 4:45,How to properly resume training of deep Q-learning network?,,1,0,,,,CC BY-SA 4.0 27287,1,27288,,4/12/2021 11:50,,2,383,"

I am a bit confused about how the number of parameters is calculated in a Dense layer in Keras/TensorFlow.

For example, in the figure below I thought that both the statements were the same, but I found a different number of parameters for both. In particular, I am talking about model.add(Dense(...)) command.

",36710,,36710,,4/13/2021 17:36,4/13/2021 17:36,Number of parameters in Keras/Tensorflow Dense layers,,1,0,,12/22/2021 22:08,,CC BY-SA 4.0 27288,2,,27287,4/12/2021 12:28,,2,,"

Check the documentation for Dense layer:

Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 1 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).

That is what is happening in your first case - for input dimensions (4,1) you've got d0=4 and d1=1. So it creates a kernel of shape (1,32) that gets applied along the axis of dimension 4. That's why your output shape is (4,32) and you've got 32 weights + 32 biases = 64 parameters.

In the second case, you've got a "standard" 32 * 4 fully-connected weight matrix + 32 biases = 160.
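You can verify both counts directly (a small sketch, assuming TensorFlow 2.x):

from tensorflow import keras
from tensorflow.keras import layers

m1 = keras.Sequential([layers.Dense(32, input_shape=(4, 1))])
m1.summary()   # kernel (1, 32) + 32 biases = 64 params, output shape (None, 4, 32)

m2 = keras.Sequential([layers.Dense(32, input_shape=(4,))])
m2.summary()   # kernel (4, 32) + 32 biases = 160 params, output shape (None, 32)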

",20538,,,,,4/12/2021 12:28,,,,0,,,,CC BY-SA 4.0 27289,1,,,4/12/2021 12:35,,1,56,"

I'm trying to create an algorithm (neural network) that is able to predict a time series from a set of different parameters that are not given through time. Let's say I have a plane flying under the following conditions:

Parameters Value
Angle of attack 8 degrees
Lateral angle 12 degrees
Wind speed -20 m/s
Plane speed 200 m/s

From this point, I would like to predict the translational velocities in x-y-z axis for the next 2-3 seconds.

In order to train my model, I have a database with different initial situations (input) and different motions of the plane (desired output) linked to their initial situation. Therefore, I want to train my model to predict the motions mentioned before, based only on the initial situation described.

In other words, the basics of what I'm trying to do could be summed up as the following:

Parameters describing the initial situation -> Model -> Time series of translational velocities.

",46213,,32410,,5/10/2021 0:01,10/2/2022 3:11,Predict time series from initial non-time dependant parameters,,1,0,,,,CC BY-SA 4.0 27290,1,,,4/12/2021 12:49,,2,119,"

I have been reading about discounted MDPs and Stochastic Shortest Path (SSP). I recently came to know (from a friend) that every discounted MDP can be converted to an equivalent SSP but not the other way around. Questions:

  1. Is this claim true? Is the discount factor equal to 1 when the MDP is converted to an SSP?
  2. More generally, what is the relationship between these two problem categories?
",46214,,46214,,4/13/2021 3:27,1/10/2023 3:04,Relation between discounted MDP and stochastic shortest path problems in RL,,1,6,,,,CC BY-SA 4.0 27291,2,,27285,4/12/2021 15:28,,1,,"

Do I start off with the epsilon value at the end of the previous session, currently I reinitialize that as well?

You should probably re-start with $\epsilon$ at the value you left off at. Using high values of epsilon may cause the neural network to forget some of what it learned from close-to-optimal policies in favour of learning possibly useless values of states and actions that are not important to a more highly-trained agent.

Also, you should either save, or wait to refill experience replay memory before restarting training updates to the neural network. Working from a small memory may also cause the neural network to overfit to specific samples and generalise less well - at least temporarily until the memory fills up again.

From your description, I am assuming that you are monitoring the average reward per training episode. This is a metric that is easy to collect, but it has a problem in off-policy RL. The problem is that you are seeing the results from your behaviour policy, not your learned policy. When using $\epsilon$-greedy in Q learning with a high value of epsilon, the behaviour policy is likely to perform badly.

Is this behavior expected?

I would expect it if you re-set $\epsilon$ to a high value.

If not, how do I properly resume training of a deep q-learning network?

I recommend that you look at the following things:

  • Re-start with $\epsilon$ at or close to where you left off. Perhaps allow starting epsilon to be passed in to the script as an argument.

  • Measure performance of the greedy policy at checkpoints - e.g. run 100 test episodes at the end of every 500 training episodes - and use plots of that to decide how well your agent is performing. You can still plot training performance, but it should not be your guide to how well the agent is learning.

  • Save experience replay so far at checkpoints too, and reload it on re-start. Alternatively, allow for experience replay to fill significantly before allowing update steps in the training loop.

You could consider these in priority order. Note that just the first one may appear to "fix" your problem, but that is because you are not yet properly measuring your agent's performance.

",1847,,,,,4/12/2021 15:28,,,,2,,,,CC BY-SA 4.0 27295,2,,27190,4/13/2021 6:09,,1,,"

The purpose of a clockwork RNN is to help with long term dependencies. Let's say in this case, we have a sentence that starts with "John went to..." and at no point again is John's name mentioned throughout the few paragraphs we are passing to our model.

As mentioned in the paper, the most common method to combat this (at the time) was using an LSTM that stored long term data in its cell state, or as put in the paper:

[an LSTM] uses a specialized architecture that allows information to be stored in a linear unit called a constant error carousel (CEC) indefinitely

However, this requires a whole heap of extra parameters in order to work (input gate, output gate and forget gate which all require parameters). So proposed was the CW-RNN.

The fundamental idea behind a clockwork RNN is to have "modules" computed periodically. First to clear things up, a timestep is 1 input into the model. So in the case of our example, if we're inputting character by character, timestep 1 is the character "J", timestep 2 is "h" and so on; "o", "n", " ", "w"...

So what is the clock-rate? Well, it's simply how often each module of the CW-RNN is computed. Let's say the hidden layer of the clockwork RNN is split into 8 modules, which I will reference as $M_1, M_2, M_3 ... M_8$, and the associated clock-rates for each of these modules are the powers of 2, so: $1, 2, 4, 8, 16, 32, 64$ and $128$.

Before I continue I want to note a potential discrepancy: if you consider the first input to be timestep 0, then it changes how this is executed (all modules would be activated at the first timestep), however if you consider it to be 1 (like I do for this example), only module 1 would be executed (again, in this example).

So we're at timestep 1, ie "J", so we check the timestep against the clock-rates of each module to determine which ones will be executed in the computation of the hidden state and output for this timestep. To do this, we take the mod of the timestep against the time-rate, so: 1 mod 1, 1 mod 2, 1 mod 4 ... 1 mod 128 and if it equals 0 (basically, is the timestep a multiple of the time-rate) then that module will be executed at this timestep. So in this case $M_1$ will be executed. When we input "h" at timestep 2, $M_1$ and $M_2$ will both be executed, and will equally contribute to the hidden state (ie, they will be added together) and the output.
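A tiny sketch of this activation rule (with the example periods above) makes it easy to see which modules run at a given timestep:

periods = [1, 2, 4, 8, 16, 32, 64, 128]       # clock-rates of modules M1..M8

def active_modules(t):
    # a module runs at timestep t only if t is a multiple of its clock period
    return [i + 1 for i, T in enumerate(periods) if t % T == 0]

print(active_modules(1))    # [1]           -> only M1, as for "J" above
print(active_modules(2))    # [1, 2]        -> M1 and M2, as for "h" above
print(active_modules(128))  # [1, 2, 3, 4, 5, 6, 7, 8] -> every module runs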

By performing calculations this way, each module can all simultaneously be responsible for information over different time periods, for example $M_8$ which is only executed every 128 timesteps will be responsible for very long term dependencies, but $M_1$ will cover short term dependencies.

So in basic terms, clock-rate refers to how often (what timestep interval) a given module of a Clockwork RNN is computed

",26726,,,,,4/13/2021 6:09,,,,2,,,,CC BY-SA 4.0 27297,1,,,4/13/2021 10:00,,1,25,"

I was reading about Immune Clonal Strategy, specifically about Monoclonal operator from Immunity clonal strategies, and it goes as follows:

Here $a_i $ is a point and $a_i = \{ x_1, x_2, \cdots, x_m \}$.

I do not understand what $I_i$ really is. It seems like it just copies $a_i$ $q_i$ times, or something like that; can someone please explain to me what really happens here?

",36578,,36578,,4/15/2021 6:58,4/15/2021 6:58,Clonal operator in Immune Clonal Strategy,,0,0,,,,CC BY-SA 4.0 27299,1,27303,,4/13/2021 11:20,,0,1277,"

I'm trying to implement the vehicle re-identification model described in https://arxiv.org/pdf/2004.06271.pdf.

My question focuses on Section 3.2 of the paper, which uses a ResNet-50 for deep feature extraction in order to generate discriminative features which can be used to compare images of vehicles by Euclidean distance for re-identification. It takes a 256x256x3 image as input.

My understanding of ResNet-50 is that its output is of shape N, where N is the number of classes to which an input image could belong, and ground truth labels take the form of a one-hot encoding where the '1' value represents the node in the output layer which is associated with the given class.

I am therefore confused by the usage of ResNet-50 in a re-identification task in which the goal is to generate an array of discriminative features which can be compared by Euclidean distance. There is no discrete set of N classes, as the model should work on any of the infinite number of vehicles in the world.

What is the ground truth label in a ResNet-50 in the context of a re-identification task?

",44524,,,,,4/13/2021 14:15,How is a ResNet-50 used for deep feature extraction?,,1,0,,,,CC BY-SA 4.0 27301,1,,,4/13/2021 13:41,,1,34,"

I am new in evolutionary algorithms field. I have a chromosome of 6 variables (real variable), where the sum of these variables is equal to 1.

I am looking for mutation formulas that can generate a new chromosome respecting the equality constraint: in my case, the sum of new chromosome should always equal to 1.

",46199,,32410,,5/10/2021 0:00,5/10/2021 0:00,How to handle equality constraints in the mutation operation of evolutionary algorithms?,,1,0,,,,CC BY-SA 4.0 27303,2,,27299,4/13/2021 14:15,,1,,"

The authors use so-called embeddings - a way to represent the images in some meaningful vector form.

The procedure to get an embedding is as follows. First, keep in mind that most of the popular convolutional net architectures start with convolutional layers and then have a few fully connected layers. Then do the following.

  1. Train the full network with one-hot encoded labels as usual
  2. Take away the last fully connected layer and use the values on the previous layer as a representation.

In the case of resnet50, you will get a 2048-value float vector. A property of a neural network is that semantically close images usually have close representations in the last layers, and you could use the Euclidean distance to measure the similarity of images in some sense (not really, but that's another long discussion).

I don't know the paper, but I glanced through it. You can see in formula 4 that $x$ is the embedding representation. The loss in formula 5 is cross-entropy after applying the last linear layer ($Wx + b$) and then softmax.

As a side note, you could skip the first step completely and use pretrained weights as a starting point. I.e. in pytorch you could take pretrained on ImageNet classification weights as follows

import torchvision.models as models
resnet = models.resnet50(pretrained=True)
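Continuing that snippet, step 2 above then amounts to dropping the final classification layer so the network outputs the 2048-value embedding directly (a common trick, sketched here under the assumption of 256x256 inputs; it is not code from the paper):

import torch

resnet.fc = torch.nn.Identity()    # remove the last fully connected layer
resnet.eval()

with torch.no_grad():
    emb = resnet(torch.randn(1, 3, 256, 256))
print(emb.shape)                   # torch.Size([1, 2048])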
",16940,,,,,4/13/2021 14:15,,,,0,,,,CC BY-SA 4.0 27304,1,,,4/13/2021 14:17,,1,105,"

I would like to show the RGB features learned in the first layer of a convolutional neural network similarly to this visualization of the same layer's features from AlexNet:

My learned weights are in the range [-1.1,1.1]. When I use imshow in python or imagesc in Matlab, the weight values are clipped to [0,1], leaving only positive weights intact, everything else black (obviously).

Negative weight values could be informative, so I don't want to clip them. Rescaling the weights to [0,1] works fine for grayscale features, but not for RGB features as it is unclear how negative values of a channel should be visualized. In the above picture 0 furthermore seems to map to the middle of the range (gray).

How are such RGB features visualized so that they look similarly to above AlexNet visualization?

(Sorry for the beginner's question.)

",5548,,5548,,4/13/2021 15:53,1/13/2023 17:08,Showing first layer RGB weights similarly to AlexNet,,1,0,,,,CC BY-SA 4.0 27305,1,,,4/13/2021 14:52,,1,58,"

I want an LSTM to output one of two classes (Y, N), per frame, based on all the input so far.
My original inputs are very long (~100000 samples long, far more than a standard LSTM training can handle due to vanishing gradients).

  1. If the last seen instance out of the tokens (A, B) was A, output Y.
  2. If the last seen instance out of the tokens (A, B) was B, output N.
  3. The very long sequence is guaranteed to start with either A or B.

If the sequence was short, this would be quite easy.

For example, the following top lines and bottom lines correspond to inputs and required outputs:

ABCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
YNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNN

ACCCCCCCCCBCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCACCCCCCCCCCCCCCCCACCCCCCCC
YYYYYYYYYYNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNNYYYYYYYYYYYYYYYYYYYYYYYYYY

Looks easy enough, just push batches comprised of chunks of the long sequence to the LSTM and have a coffee, right?
However, for my case, the available inputs are (A, B, C), of which (A, B) are extremely rare, meaning I can have batches comprised of 100% C's. The LSTM then has no chance, unless it is fed some current state telling it about the last A or B seen.
Unfortunately, this "state" is really something learned, and I can't just feed it as input AFAIK.

CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
????????????????????????????????????????????????????????????????????

I am looking for a standard practice, or other references on how to train an LSTM or other RNN based model to be able to classify based on rare events far in history.

I hope this is clear, if not please ask and I will edit.


Please note that the data is labeled, and labeling can't be generated automatically for this task. The above is just an example for ease of understanding, the reality is more complicated.

",21645,,21645,,4/13/2021 15:04,4/13/2021 17:40,How to train an LSTM to classify based on rare historic event?,,1,5,,,,CC BY-SA 4.0 27307,2,,27305,4/13/2021 17:40,,1,,"

How about a Temporal Convolutional Network? It feels like, for such long sequences, a recurrent/memory-based approach is not too feasible. But, intuitively, the 1D convolutions should be able to pick out those rare features from your extremely long sequences.

There are also claims that TCNs are comparable to RNNs in performance on common tasks, so there's that.
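
As a rough illustration (not a full TCN, just a sketch of stacked dilated 1D convolutions in PyTorch; channel sizes are made up), something in this spirit covers a very long receptive field with few layers:

import torch.nn as nn

# Stacked dilated 1D convolutions: with kernel size 3 and dilations
# 1, 2, 4, ..., the receptive field grows exponentially with depth.
tcn = nn.Sequential(
    nn.Conv1d(3, 32, kernel_size=3, dilation=1, padding=1), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
    nn.Conv1d(32, 32, kernel_size=3, dilation=4, padding=4), nn.ReLU(),
    nn.Conv1d(32, 2, kernel_size=1),   # per-frame logits for the classes Y/N
)
# Input shape: (batch, channels=3 for one-hot A/B/C, sequence_length)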

",20538,,,,,4/13/2021 17:40,,,,0,,,,CC BY-SA 4.0 27308,1,27310,,4/13/2021 18:12,,1,32,"

I am trying adversarial attacks (AA) on a simple CNN. Instead of clean images, my simple CNN is trained with attacked images, as suggested by some papers. As the training goes on, I am not sure whether the training is going well or something is wrong.

Here is what I observed:

When the epsilon value is large, the classification performance of the adversarially trained model is low. I understand that if an attacked image is given to the model, then the performance is poor: even though the model comes from adversarial training, because the epsilon is large, the model performs poorly. However, when a clean image is given, the performance of the model is still low. Performance on clean images is higher than performance on attacked images, but not as high as the baseline model without adversarial training.

So, I wonder if adversarial training also degrades the performance of the model on clean images. When I read papers, I only see results on adversarial images, not clean images. If you have any experience, it would be very helpful for checking whether my training code is working well or not.

When the epsilon is very large, the accuracy of the model on clean images is around 15%. The model without adversarial training is around 81%.

Some details: I use a PGD attack with 5 iterations, and epsilon is one of eps = [0.0001, 0.0005, 0.001, 0.005, 0.01, 0.03, 0.05, 0.07]. The step size is eps/3. Only one epsilon is selected per training run, so there are 8 different models trained with different epsilons. I use a natural image dataset.
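
For reference, a minimal sketch of the kind of PGD attack I mean (PyTorch; model, images, labels and eps are placeholders for my actual objects):

import torch
import torch.nn.functional as F

def pgd_attack(model, images, labels, eps, iters=5):
    step = eps / 3
    x_adv = images.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the epsilon-ball around the clean images
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = images + torch.clamp(x_adv - images, -eps, eps)
        x_adv = torch.clamp(x_adv, 0, 1)
    return x_adv.detach()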

",46248,,,,,4/13/2021 18:40,Expected behavior of adversarial attacks on deep NN?,,1,0,,,,CC BY-SA 4.0 27309,1,,,4/13/2021 18:13,,10,4471,"

I'll start with my understanding of the literal difference between these two. First, let's say we have an input tensor to a layer, and that tensor has dimensionality $B \times D$, where $B$ is the size of the batch and $D$ is the dimensionality of the input corresponding to a single instance within the batch.

  • Batch norm does the normalization across the batch dimension $B$
  • Layer norm does the normalization across $D$
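
To make the distinction concrete, here is a small sketch of what I mean (NumPy; x is a made-up $B \times D$ array):

import numpy as np

x = np.random.randn(32, 64)   # (B, D)

# Batch norm: mean/variance per feature, computed across the batch dimension B
bn = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + 1e-5)

# Layer norm: mean/variance per sample, computed across the feature dimension D
ln = (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + 1e-5)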

What are the differences in terms of the consequences of this choice?

",16871,,2444,,10/10/2021 8:18,11/9/2021 9:02,What are the consequences of layer norm vs batch norm?,,1,0,,,,CC BY-SA 4.0 27310,2,,27308,4/13/2021 18:40,,1,,"

This seems to be a known problem, and intuitively seems reasonable. You might be interested in the paper Adversarial Training Can Hurt Generalization.

The authors suggest that this might be because training on the perturbed data requires the model to learn more robust features, which means more samples are required to obtain performance comparable to a model that is not adversarially trained.

You could try collecting or generating additional samples to see if this leads to an improvement. They also mention that in their experiments on the MNIST dataset, using Xavier initialisation led to a significant benefit, so you could experiment with that too.

",44413,,,,,4/13/2021 18:40,,,,0,,,,CC BY-SA 4.0 27312,1,,,4/13/2021 22:02,,0,101,"

When do you tend to use CNN rather than LSTM (or the other way round) in classification or generation tasks of sequential data like text or log-data? What are the reasons for the decision and what does it depend on? Are there any papers or statistics that confirm this?

I'm thinking of data like Linux log entries or short sentence of length of less than 20 words/tokens.

Personally, I would almost always use an LSTM, but I'm curious whether a CNN wouldn't be better in some cases, if it's possible to implement one in a meaningful way. On short sentences there isn't much room to use a CNN, if I'm not mistaken.

",46251,,,,,4/14/2021 15:20,Advantages of CNN vs. LSTM for sequence data like text or log-files,,1,0,,,,CC BY-SA 4.0 27313,2,,27289,4/14/2021 0:54,,0,,"

I don't think you need a recurrent neural network for this. Why not just train a feedforward model with angle of attack, etc., as input and translation velocities as output? The size of the output will depend on how frequently you want updates to the velocities. E.g., if the update rate is 0.5 seconds and the network predicts 2 seconds in advance, it could have 12 outputs: x, y, z values every 0.5 seconds.
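
A minimal sketch of what I mean in Keras (the input size of 5 is a placeholder for however many input parameters you have):

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(5,)),            # angle of attack and other inputs
    Dense(64, activation='relu'),
    Dense(64, activation='relu'),
    Dense(12),                    # x, y, z velocity at t+0.5s, t+1.0s, t+1.5s, t+2.0s
])
model.compile(loss='mse', optimizer='adam')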

You may need to massage the way you represent the outputs to find the best approach. For example, will the network be predicting absolute velocities or just the initial velocities plus changes to velocity over time? If the latter you can always get back to absolute velocities.

",7760,,,,,4/14/2021 0:54,,,,0,,,,CC BY-SA 4.0 27314,1,27330,,4/14/2021 2:20,,0,51,"

I'm trying to predict the continuous values of a variable $y$ using a Fully Connected Neural Network while providing it with data from a $(3300, 13)$ matrix $X$ where $X[i, :]=[0,...,1,...,0,x_{i}]$. So the first $12$ elements of a data vector are all zeros except for one element which is equal to $1$, denoting which category this data point belongs to. I'd like to add that my $X$ data is normalized with regard to the $13$-th column and that both $X$ and $y$ are shuffled in the same manner. Please find below my code for my model:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential()
model.add(Input(shape=(13,)))
model.add(Dense(6, activation = 'relu'))
model.add(Dense(2, activation = 'relu'))
model.add(Dense(1))

model.compile(loss = 'mean_squared_error',
              optimizer = 'adam',
              metrics = ['RootMeanSquaredError'])

history = model.fit(X, y, validation_split = 0.1, epochs=64)

When trying to plot the learning curve using:

import matplotlib.pyplot as plt

plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('rmse')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()

I get these curves:

There's already an "unusual" element to point out here: I've noticed that throughout the training the loss decreases, but sometimes in an oscillating manner, yet we don't see that on the training learning curve. For example, the last four values of the loss are $2.5176$, $3.4718$, $3.0704$, and it settles down at $3.8177$. I've also noticed that the losses provided by history.history are different from those shown during training; I suspect one is computed before the epoch and one after, but I'm not sure.

I've tried to predict on the first $275$ elements of the training data. Most of the predictions took the value $4.2138872e+00$, but there are other predictions that took smaller values. I've computed the maximum of the predictions on the whole training set, and it is $4.2138872e+00$.

I've also tried to train on the whole training set without a validation set to see what'll happen. I've made sure to rerun the cells of the model so that it doesn't take the weights it already found. I've noticed the same behaviour for the loss during training, but this time there is no constant predicted value that comes up as a maximum limit for the predictions.

I've already asked this question here, and a user suggested that I ask it separately while providing the whole code. I ran the same code I was running before, and it was giving me the same predictions no matter what my input vectors were.

I think, as the user @Kostya who answered my previous question pointed out, what's happening here is called "dying ReLUs". It's the same code that I'm running over and over, but it gives different predictions, and the only random parameters are the weights and the biases. I'm sure the biases are initialized to zero, but I don't know how the weights are handled. I suppose they're randomly generated from a standard normal distribution.

I have finally come to this question: does the number of neurons, and hence the number of weights, influence the phenomenon of "dying ReLUs"? I came to think that because, if we had a large number of weights, their values would be likely to fill the interval where the majority of the probability mass is concentrated. And since we have a small number of weights, we can get some "outlier" weights which lead to dying ReLUs.

",44965,,,,,4/14/2021 15:00,How to explain that a same DNN model have radically different behaviours with each new initialization and training?,,1,0,,,,CC BY-SA 4.0 27315,1,,,4/14/2021 2:50,,2,317,"

I've read in the book Neural Network Design, by Martin Hagan et al. (chapter 11), that, to train the feed-forward neural network (aka multilayer perceptron), one uses the backpropagation algorithm.

Why this algorithm? Could someone please explain in simple terms and detailed terms?

",44999,,2444,,12/4/2021 9:33,12/4/2021 9:33,Why is the backpropagation algorithm used to train the multilayer perceptron?,,1,0,,,,CC BY-SA 4.0 27316,1,,,4/14/2021 3:37,,5,649,"

Most RL books (Sutton & Barto, Bertsekas, etc.) talk about policy iteration for infinite-horizon MDPs. Does the policy iteration convergence hold for finite-horizon MDP? If yes, how can we derive the algorithm?

",46214,,2444,,4/14/2021 8:24,4/26/2021 14:20,Does the policy iteration convergence hold for finite-horizon MDP?,,1,0,,,,CC BY-SA 4.0 27317,2,,27315,4/14/2021 4:19,,4,,"

According to wikipedia of backpropagation:

In fitting a neural network, backpropagation computes the gradient of the loss function during supervised learning with respect to the weights of the network for a single input–output example, and does so efficiently, unlike a naive direct computation of the gradient with respect to each weight individually.

Backpropagation is a special case of reverse accumulation of automatic differentiation, and it was announced in Rumelhart, Hinton & Williams (1986). Automatic differentiation has 2 modes: forward accumulation and backward accumulation.

Automatic differentiation is distinct from symbolic differentiation and numerical differentiation (the method of finite differences). Symbolic differentiation can lead to inefficient code and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where complexity and errors increase. Finally, both classical methods are slow at computing partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems.

Usually, two distinct modes of AD are presented, forward accumulation (or forward mode) and reverse accumulation (or reverse mode). Forward accumulation specifies that one traverses the chain rule from inside to outside, while reverse accumulation has the traversal from outside to inside...

Forward accumulation is more efficient than reverse accumulation for functions $ f:ℝ^n → ℝ^m$ with $m ≫ n$ as only $n$ sweeps are necessary, compared to $m$ sweeps for reverse accumulation.

Reverse accumulation is more efficient than forward accumulation for functions $ f:ℝ^n → ℝ^m$ with $m ≪ n$ as only $m$ sweeps are necessary, compared to $n$ sweeps for forward accumulation.

Given this context of computational differentiation techniques: since the loss function of an MLP maps $ℝ^n \rightarrow ℝ^m$ with $m ≪ n$ in most cases (typically $m = 1$ for a scalar loss), this is one of the main reasons why backpropagation, as a special case of the reverse accumulation introduced above, is most often used to train an MLP.
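
To illustrate the $m ≪ n$ case concretely, here is a tiny sketch (using PyTorch autograd, which implements reverse accumulation; the sizes are made up):

import torch

# Toy example: a "loss" that maps many parameters (n = 1000) to one scalar (m = 1)
n = 1000
w = torch.randn(n, requires_grad=True)
x = torch.randn(n)

loss = torch.sigmoid(w @ x) ** 2   # scalar function of 1000 parameters

# Reverse accumulation (backpropagation): one backward sweep yields all
# 1000 partial derivatives d(loss)/dw_i at once.
loss.backward()
print(w.grad.shape)   # torch.Size([1000])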

",45381,,40434,,4/14/2021 8:25,4/14/2021 8:25,,,,2,,,,CC BY-SA 4.0 27318,1,,,4/14/2021 5:09,,3,93,"

Neural networks are usually evaluated by dividing a dataset into three splits:

  • training,
  • validation, and
  • test

The idea is that critical hyperparameters of the network such as the number of epochs and the learning rate can be tuned by testing the network on the validation data while keeping the test data completely unseen until a final evaluation that happens only after the hyperparameters have been tuned.

However, if the amount of data is very small (e.g. 10-20 examples per class), then dividing the dataset into three splits may negatively impact the model due to a lack of training data, and two splits are therefore preferable. A two-split approach that makes a reasonable amount of data available for training is ten-fold stratified cross-validation.

My question is -- is it statistically sound to tune hyperparameters by repeatedly evaluating hyperparameter sets using cross validation? Keep in mind that there is no held-out test data in this case, as the amount of available data is too small. I'd like some evidence/citations if possible showing that specifically for small datasets, this is the best approach for estimating the best hyperparameters that lead to the best generalizable model. Or if there is another approach that is better, I'd like to learn about that too.

",7760,,32410,,4/21/2021 18:38,7/10/2021 0:12,What is the most statistically acceptable method for tuning neural network hyperparameters on very small datasets?,,1,0,,,,CC BY-SA 4.0 27319,2,,13657,4/14/2021 5:21,,2,,"

Accuracy can sometimes be a very coarse metric. When it is applied to three-class problems, people often take the class label with the maximum predicted probability and predict that. The probabilities of the individual labels are ignored. I'd recommend that, as well as accuracy, you calculate sensitivity and specificity for each class and the area under the ROC curve. For both of these, you can take a 1-vs-rest (i.e. class 1 vs classes 2 & 3, class 2 vs classes 1 & 3, etc.) approach to calculating the metrics. Even if your accuracy is under 50%, the model may still be predicting at least one class well, so I recommend doing more analysis before making a decision.
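
For instance, a rough sketch with scikit-learn (the arrays are placeholders for your true labels and predicted class probabilities):

import numpy as np
from sklearn.metrics import roc_auc_score, classification_report

# Placeholder predictions for a 3-class problem (replace with your model's output)
y_true = np.array([0, 1, 2, 1, 0, 2])
y_prob = np.random.dirichlet(np.ones(3), size=6)   # rows sum to 1

# 1-vs-rest area under the ROC curve, averaged over the classes
print(roc_auc_score(y_true, y_prob, multi_class='ovr'))

# Per-class precision/recall (closely related to per-class sensitivity)
print(classification_report(y_true, y_prob.argmax(axis=1)))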

I also recommend comparing your model (if it is deep learning-based) to a traditional type of model, as deep learning usually works best with big data and with such a small dataset as you have, there may be no benefit to using deep learning (unless you are leveraging transfer learning).

",7760,,7760,,4/14/2021 5:33,4/14/2021 5:33,,,,0,,,,CC BY-SA 4.0 27320,2,,16677,4/14/2021 6:21,,0,,"

From your question, it sounds like your only training data is $\{x_1,\dots,x_n\}$ and the network has to magically come up with values $\{y_1,\dots,y_n\}$ such that an unknown function is minimized. How do you plan to give feedback to the network during training?

Your situation appears to be something like this:

X-->Model-->Y-->f(X,Y)

where X is being copied from the first layer to the last layer using a non-sequential architecture.

The solution to this problem would be to add an extra layer to your network that implements f(X, Y). However, this will only be trainable using standard methods like gradient descent if f(X, Y) is differentiable. If f(X, Y) is not differentiable, then you may need to use a different optimization method for learning the weights of the model and it may be more difficult. Particle swarm optimization is one possibility here.

",7760,,32410,,10/2/2021 19:00,10/2/2021 19:00,,,,0,,,,CC BY-SA 4.0 27321,2,,27301,4/14/2021 6:56,,1,,"

If X is your 6D vector and m(X) is the mutated version of X, then you can renormalise the mutant back to unit length by dividing by its norm, i.e. X' = m(X)/||m(X)||.

However, I encourage you to figure out how to mutate a vector while keeping its length at 1. One way to do this would be to randomly rotate your vector in 6D space. The length stays the same, and you don't need to renormalise it afterwards.
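
A rough NumPy sketch of both options (the mutation itself is just a stand-in here):

import numpy as np

rng = np.random.default_rng()

def renormalise(v):
    # Rescale a (possibly mutated) vector back to unit length
    return v / np.linalg.norm(v)

def random_rotation(v):
    # QR decomposition of a random matrix gives a random orthogonal matrix;
    # orthogonal maps preserve length, so no renormalisation is needed
    # (note: this is a large random rotation/reflection, not a small perturbation)
    q, _ = np.linalg.qr(rng.normal(size=(len(v), len(v))))
    return q @ v

x = renormalise(rng.normal(size=6))
print(np.linalg.norm(renormalise(x + 0.1 * rng.normal(size=6))))  # ~1.0
print(np.linalg.norm(random_rotation(x)))                          # ~1.0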

",7760,,7760,,4/15/2021 17:36,4/15/2021 17:36,,,,2,,,,CC BY-SA 4.0 27322,1,,,4/14/2021 7:21,,0,82,"

I have a list of rectangles; they are in a certain arrangement in 2D at the beginning. The task is to move them to obtain a (rectangular) bounding box of minimal area. It's OK to push them off the dotted border as long as the area is minimal.

The starting state may look similar to this (top view):

Are there any reinforcement learning approaches to this problem? I'm thinking of actions like 'rotate 90 degrees', 'push east', 'push west', 'push south', 'push north', but it is still not clear how these actions should be applied: which rectangle to push, and how far to push it.

The 2D state can be mapped to a grid of zeros (free) and ones (occupied) to utilize conv2D layers. Before feeding it to the conv2D layers, all rectangle coords should be translated so that ($x_{min}$, $y_{min}$) lies at the origin.

",2844,,32410,,12/11/2021 8:49,12/11/2021 8:49,Any RL approaches for this 2D space optimization problem?,,0,2,,,,CC BY-SA 4.0 27323,1,27324,,4/14/2021 8:31,,3,584,"

I'm learning about how neural networks are trained. I understand how a neuron works, backpropagation, and all that. In neurons, there is a clear distinction between a "weight" and a "bias".

$$ Y= \sigma(\text{weight} * \text{input})+ \text{bias} $$

However, in all the sources I've found, when you train the network you just adjust the weights, not the bias.

However, they never mention what should happen to the bias, which leads me to think that you just merge all weights and biases into a $W$ vector and call them weights, even though there are also biases. Is that correctly understood?

",46260,,46260,,4/14/2021 11:02,4/14/2021 11:02,"Is the bias also a ""weight"" in a neural network?",,1,1,,,,CC BY-SA 4.0 27324,2,,27323,4/14/2021 8:55,,3,,"

Yes, it is not unusual to omit the bias by adding a neuron which always outputs a constant 1, which will then be multiplied by an appropriate weight to give the same formula as you would get using an explicit bias.

One notable text using this convention is Understanding Machine Learning: From Theory to Algorithms by Shai Shalev-Shwartz and Shai Ben-David. In section 20.1 there is a diagram of a neural network where a neuron outputting a constant value is added to each layer which you might find helpful.

To understand why this works, suppose the outputs of the previous layer are $u_1, \dots, u_n, u_{n + 1}$, where $u_{n + 1}$ is always $1$. Then a neuron in the next layer (without a bias) computes

$$ \sigma\left(\sum_{i = 1}^{n + 1} w_i u_i \right) = \sigma\left(\sum_{i = 1}^n w_i u_i + w_{n + 1}\right),$$ where $\sigma$ is the activation function. So, the weight $w_{n + 1}$ just serves as the bias because it is multiplied by $u_{n + 1} = 1$.
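
A tiny numerical sketch of this equivalence (plain NumPy, made-up numbers):

import numpy as np

x = np.array([0.5, -1.2, 3.0])   # outputs u_1..u_n of the previous layer
w = np.array([0.1, 0.4, -0.3])   # weights w_1..w_n
b = 0.7                          # explicit bias

# Append a constant-1 "neuron" and fold the bias into the weight vector
x_aug = np.append(x, 1.0)
w_aug = np.append(w, b)

assert np.isclose(w @ x + b, w_aug @ x_aug)   # same pre-activation value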

",44413,,,,,4/14/2021 8:55,,,,0,,,,CC BY-SA 4.0 27325,1,27351,,4/14/2021 12:35,,1,210,"

If $x \sim \mathcal{N}(\mu,\,\sigma^{2})$, then it is a continuous variable, and therefore $P(x) = 0$ for any x. One can only consider things like $P(x<X)$ to get a probability greater than 0.

So what is the meaning of probabilities such as $P(x|z)$ in variational autoencoders? I can't think of $P(x|z)$ as meaning $P(x<X|z)$, if $x$ is an image, since $x<X$ doesn't really make sense (all images smaller than a given one?).

",46267,,2444,,4/15/2021 10:26,4/15/2021 19:35,"In variational autoencoders, what does p(x|z) mean?",,2,0,,,,CC BY-SA 4.0 27326,1,,,4/14/2021 12:36,,2,41,"

Which part of this is the transformer?

Ok, the caption says the whole thing is the transformer, but that's back in 2017 when the paper was published. My question is about how the community uses the term "transformer" now.

I'm not looking for an inline response to these questions. They are all a way of asking the same general thing.

  • Is this whole thing a transformer?
  • What parts or what relationships between parts make it a transformer?
  • Equivalently, what aspects can I change before it becomes something else and not a transformer?
  • If I only care about self-attention I suppose I don't need the right hand column. If I just keep the self-attention, is it still a transformer?

For context: I've just become familiar with transformers and have not read much of the literature on them beyond this paper.

",16871,,2444,,4/15/2021 10:25,4/15/2021 10:25,"What part of the Vaswani et al. is the ""transformer""?",,0,0,,,,CC BY-SA 4.0 27327,2,,27325,4/14/2021 12:54,,0,,"

In VAEs, we want to model the distribution of images $x$ with some latent variable $z$. Because $x$ is a random variable, you can think of $P(x|z)$ as the distribution of images $x$ conditioned on the random variable $z$. So, given a particular value of $z$, we get a distribution over images $x$.

VAEs try to model images, which are themselves high-dimensional 2D data. Given a 28x28 image, we already have 784 dimensions to model. We cannot visualise the distribution over all images $x$. Your notation $P(x < X|z)$ makes sense in a 1D case with a scalar value. However, when considering 2D and higher, we have a problem with how to define "less than": if $x = (y_1,y_2)$ and $X = (y_3,y_4)$, is $x < X$ when both $y_1 < y_3$ and $y_2 < y_4$, or when only one of the dimensions is less than? When talking about high-dimensional spaces, therefore, it is not very useful to write $P(x < X|z)$ because of the difficulty in interpreting it.

",32780,,,,,4/14/2021 12:54,,,,1,,,,CC BY-SA 4.0 27328,1,,,4/14/2021 14:16,,2,548,"

I am training a deep learning model for semantic segmentation. I am using the cityscapes dataset for training/evaluation.

In Cityscapes, there are 34 classes, of which we consider only 19; the rest of the classes are ignored. For training, I have assigned the 19 classes train_ids 0-19.

Now, since the rest of the classes are ignored, I have ignored them when computing the loss using cross entropy with ignore_index=255.

But the above effect can also be achieved by assigning a background class, i.e., 20 as the bg class, and assigning all the ignored classes to it.

Now my question is, which method would be better to achieve a high mIoU in cityscapes? And what would be your intuition in choosing the approach?

",36534,,,,,4/14/2021 14:16,Semantic segmentation - background or ignore for non-target classes?,,0,1,,,,CC BY-SA 4.0 27330,2,,27314,4/14/2021 15:00,,1,,"

I'm sure the biases are initially initialized to zero but I don't know how the weights are handled.

Looking at the Dense layer docs: by default Dense layers biases are initialized with zeros (bias_initializer='zeros') and weights are initialized with Glorot uniform (kernel_initializer='glorot_uniform').

... "unusual" element to point here; I've noticed that throughout the training the loss decreases but sometimes in an oscillating manner.

There's nothing unusual about the oscillations. Quite the contrary: your curves are suspiciously smooth.

I've also noticed that the losses provided by history.history are different

Yes, so this is a little gotcha in the keras implementation. For training loss, keras does a running average over the batches throughout an epoch, while for the validation loss it computes it after the epoch finishes. (link)

That also explains why your validation loss starts slightly lower than the training one - it actually lags behind the training loss by $\sim1/2$ of an epoch.

The fact that both losses stay almost equal suggests that your model doesn't really rely on the training data for prediction; it got saturated ("dying ReLUs") almost instantly.

Couple of suggestions that I can make:

  • I don't see you setting up a learning rate. Try making it smaller (like, much smaller). And see if there's any difference.

  • MSE is a nasty loss - especially at large values. Have you tried standardizing the target values?

Does the number of neurons, hence the number of weights influence the phenomena of "dying ReLus" ?

Yes, the fewer neurons you have, the higher the chance that all of the neurons will die out. You can get some intuition about it on https://playground.tensorflow.org/ - following the link, I've set up a ReLU network that trains reasonably well. Try increasing the learning rate and observe the saturation in the neurons. Reduce the number of neurons and see how the whole net gets stuck.

",20538,,,,,4/14/2021 15:00,,,,6,,,,CC BY-SA 4.0 27331,1,,,4/14/2021 15:03,,0,151,"

I'm trying to solve the OpenAI Gym Taxi problem (v3) using deep Q-learning. I've already had some success with the Q-table approach, but for the life of me I cannot manage to train a NN to learn a reasonable action policy. I'm doing the training using an AWS p3.2xlarge instance.

My approach is fairly straightforward, I set up the environment and agent, then run the training loop.

My code more or less looks like this:

import gym
from tensorflow.keras.optimizers import Adam
from taxi_agent import Agent

env = gym.make('Taxi-v3').env
optimizer = Adam(learning_rate=0.001)
agent = Agent(env, optimizer)

batch_size = 32
num_of_episodes = 200
timesteps_per_episode = 120

The agent was cobbled together from various examples online:

import numpy as np
import random
from IPython.display import clear_output
from collections import deque
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Dense, Embedding, Reshape
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard


class Agent:
    def __init__(self, environment, optimizer):
        
        # Initialize atributes
        self._state_size = environment.observation_space.n
        self._action_size = environment.action_space.n
        self._optimizer = optimizer
        
        self.expirience_replay = deque(maxlen=2000)
        
        # Initialize discount and exploration rate
        self.gamma = 0.6
        self.epsilon = 0.5
        
        # Build networks
        self.q_network = self._build_compile_model()
        self.target_network = self._build_compile_model()
        self.align_target_model()

        #: Set up some callbacks
        self.checkpoint_filepath = 'checkpoints/'
        model_checkpoint_callback = ModelCheckpoint(
            filepath=self.checkpoint_filepath,
            save_weights_only=True,
            save_freq='epoch'
        )
        # tensorboard_callback = TensorBoard('logs', update_freq=1)
        self.model_callbacks = [model_checkpoint_callback]
        
        self.history = []

    def store(self, state, action, reward, next_state, terminated):
        self.expirience_replay.append((state, action, reward, next_state, terminated))
    
    def _build_compile_model(self):
        model = Sequential()
        model.add(Embedding(self._state_size, 10, input_length=1))
        model.add(Reshape((10,)))
        model.add(Dense(48, activation='tanh'))
        model.add(Dense(24, activation='tanh'))
        model.add(Dense(self._action_size, activation='linear'))
        
        model.compile(loss='mse', optimizer=self._optimizer)
        return model

    def restore_weights(self):
        path = self.checkpoint_filepath
        print(f"restoring model weights from {path}")
        self.q_network.load_weights(path)

    def align_target_model(self):
        self.target_network.set_weights(self.q_network.get_weights())
    
    def act(self, state, environment):
        if np.random.rand() <= self.epsilon:
            return environment.action_space.sample()
        
        q_values = self.q_network.predict(state)
        return np.argmax(q_values[0])

    def retrain(self, batch_size, epochs=1):
        minibatch = random.sample(self.expirience_replay, batch_size)
        
        for state, action, reward, next_state, terminated in minibatch:
            
            target = self.q_network.predict(state)
            
            if terminated:
                target[0][action] = reward
            else:
                t = self.target_network.predict(next_state)
                target[0][action] = reward + self.gamma * np.amax(t)
            
            history = self.q_network.fit(state, target, epochs=1, verbose=0, callbacks=self.model_callbacks)
            self.history.append(history.history)

The training loop uses the agent to act in the environment for up to batch_size actions. Then, for every subsequent timestep, it retrains the model on a random sample of the experience.

I have it set to print out feedback whenever the environment terminates (achieves the objective). In practice this never happens.

I've reloaded trained models from weights and trained for cumulative 24 hours without much success. I've also tried silly things like updating the target network after N steps just so it learns something - no luck.

If I try to use my trained model to solve an example env instance, it just wants to move south, apart from the random actions it is set to take 50% of the time.

It would be great if someone could give me some advice on what to try next. I can keep playing around with hyperparameters, but I don't have the best intuition about where to focus my efforts.

import numpy as np

iterations = 0
state = env.reset()
terminated = False
env.render()
while not terminated:
    state = np.reshape(state, [1, 1])
    action = agent.act(state, env)
    next_state, reward, terminated, info = env.step(action)
    state = next_state
    if iterations % 10: env.render()
    iterations += 1
    if iterations > 1000: break
",14664,,14664,,4/15/2021 12:39,4/15/2021 12:39,Open AI Taxi - Agent fails to learn an effective policy,,0,2,,,,CC BY-SA 4.0 27332,1,27384,,4/14/2021 15:17,,0,62,"

I'm implementing a differential evolution algorithm and when it comes to evolving a population, the page I am referencing is vague on how the new population is generated.

https://en.wikipedia.org/wiki/Differential_evolution#Algorithm

The algorithm looks like the population is mutated during evolution.

# x, a, b, c are agents in the population
# subscriptable to find position in specific dimension
# pop is filled with Agent objects
for i, x in enumerate(pop):
  [a, b, c] = random.sample(pop[:i]+pop[i+1:], 3)
  ri = random.randrange(len(x))  # dimension guaranteed to take the mutated value
  new_pos = []
  for j in range(len(x)):
    if random.uniform(0, 1) < CR or j == ri:
      new_pos.append(a[j] + (F * (b[j] - c[j])))
    else:
      new_pos.append(x[j])
  # Agent() class constructor for agent, takes position as arg
  new_agent = Agent(new_pos)
  if fitness(new_agent) <= fitness(x):
    pop[i] = new_agent # replace x with new_agent

But I wonder if instead it means a new population is made and then populated iteratively:

new_pop = []
for i, x in enumerate(pop):
  [a, b, c] = random.sample(pop[:i]+pop[i+1:], 3)
  ri = random.randrange(len(x))
  new_pos = []
  for j in range(len(x)):
    if random.uniform(0, 1) < CR or j == ri:
      new_pos.append(a[j] + (F * (b[j] - c[j])))
    else:
      new_pos.append(x[j])
  new_agent = Agent(new_pos)
  if fitness(new_agent) <= fitness(x):
    new_pop.append(new_agent)
  else:
    new_pop.append(x)
pop = new_pop

Note new_pop is made, and filled with agents as the for loop continues.

The first allows previously evolved agents to be used again in the same generation; in other words, the population is changed during the evolution. The second doesn't allow updated agents to be re-used, and only at the end is the original population changed.

Which is it?

",31112,,31112,,4/15/2021 15:09,4/17/2021 18:33,Does a differential evolution algorithm mutate its population during a generation?,,1,1,,,,CC BY-SA 4.0 27333,2,,27312,4/14/2021 15:20,,-1,,"

Networks with one-dimensional convolutions over the temporal dimension are called Temporal Convolutional Networks. The authors of this study claim that TCNs are comparable to RNNs in performance on common tasks associated with RNNs.

",20538,,,,,4/14/2021 15:20,,,,0,,,,CC BY-SA 4.0 27335,1,,,4/14/2021 17:38,,-1,142,"

Let $\sigma(x)$ be the sigmoid function. Consider the case where $\text{out}=\sigma(\vec{x} \times W + \vec{b})$, and we want to compute $\frac{\partial{\text{out}}}{\partial{W}}.$
Set the dimensions as below:
$\vec{x}$: $(n, n_{\text{in}})$, $W$: $(n_{\text{in}}, n_{\text{out}})$, $\vec{b}$: $(1, n_{\text{out}})$.
Then $\text{out}$ has dimension $(n, n_{\text{out}})$. So we need to calculate a matrix-by-matrix derivative, and as far as I know there is no standard way to define that. I know that in the end it is calculated as $\vec{x}^T \times (\text{out}\cdot(1-\text{out}))$.
But I still can't get the exact procedure of the calculation: why should it be $\vec{x}^T \times (\text{out}\cdot(1-\text{out}))$ and not $(\text{out}\cdot(1-\text{out})) \times \vec{x}^T$? I know it by considering dimensions, but not by calculation.

My intuition about this problem is that all of the calculation can be considered as vector-by-vector differentiation: since $n$ is the batch size, we can calculate the matrix differentiation by considering each column vector separately.

I'm not sure about my intuition yet, and I need an exact mathematical calculation procedure for the problem.

",46274,,,,,4/19/2021 2:09,How can the gradient of the weight be calculated in the viewpoint of matrix calculus?,,1,3,,,,CC BY-SA 4.0 27336,1,27337,,4/14/2021 17:57,,5,817,"

I am a beginner in AI. I'm trying to train a multi-agent RL algorithm to play chess. One issue that I ran into was representing the action space (legal moves/or honestly just moves in general) numerically. I looked up how Alpha Zero represented it, and they used an 8x8x73 array to encode all possible moves. I was wondering how it actually works since I got a bit confused in their explanation:

A move in chess may be described in two parts: selecting the piece to move, and then selecting among the legal moves for that piece. We represent the policy $\pi(a \mid s)$ by a $8 \times 8 \times 73$ stack of planes encoding a probability distribution over 4,672 possible moves. Each of the $8 \times 8$ positions identifies the square from which to "pick up" a piece. The first 56 planes encode possible "queen moves" for any piece: a number of squares $[1..7]$ in which the piece will be moved, along one of eight relative compass directions {N, NE, E, SE, S, SW, W, NW}. The next 8 planes encode possible knight moves for that piece. The final 9 planes encode possible under-promotions for pawn moves or captures in two possible diagonals, to knight, bishop or rook respectively. Other pawn moves or captures from the seventh rank are promoted to a queen.

How would one numerically represent the move 1. e4 or 1. NF3 (and how would the integer for 1. NF3 differ from 1. f3) for example? How do you tell what integer corresponds to which move? This is what I'm essentially asking.

",46275,,2444,,4/15/2021 10:19,12/22/2022 17:04,How does the Alpha Zero's move encoding work?,,1,0,0,,,CC BY-SA 4.0 27337,2,,27336,4/14/2021 18:51,,5,,"

Let's do the code, so all the details are down.

Encoding dictionary:

codes, i = {}, 0
for nSquares in range(1,8):
    for direction in ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]:
        codes[(nSquares,direction)] = i
        i += 1

You'll see that the codes dictionary will have 56 entries in it for each (nSquares,direction) pair.

The knight moves we'll encode as the long "two"-cell edge move first and the short "one"-cell edge second:

for two in ["N","S"]:
    for one in ["E","W"]:
        codes[("knight", two, one)] , i = i , i + 1
for two in ["E","W"]:
    for one in ["N","S"]:
        codes[("knight", two, one)] , i = i , i + 1

Now we should have 64 codes. As I understand it, the final 9 moves are when a pawn reaches the final rank and is chosen to be underpromoted. It can reach the final rank either by moving N, or by capturing NE or NW. Underpromotion is possible to three pieces. Writing the code:

for move in ["N","NW","NE"]:
    for promote_to in ["Rook","Knight","Bishop"]:
        codes[("underpromotion", move, promote_to)] , i = i , i + 1

We get 73 codes as described.

Policy

The distribution over actions is a (8,8,73) tensor (it is not formally a "policy", since a policy should also depend on the state, but let's cut this corner for this discussion):

import numpy as np

policy = np.zeros((8,8,73))

Let's also do codes for columns for convenience:

columns = { k:v for v,k in enumerate("abcdefgh")}

How would one numerically represent the move 1. e4

The first two dimensions choose the figure you are moving. So, that'd be e2 pawn. And we move north 'N' by 2 cells.

So, we put 1 into the tensor at the appropriate indices. Note that you have to subtract 1 from the row index, to make it zero-based.

e4policy = np.zeros((8,8,73))
e4policy[ columns['e'] , 2 - 1 , codes[(2 , "N")]] = 1

How would one numerically represent the move 1. NF3

The first two dimensions choose the figure you are moving. So, that'd be g1 knight. And we perform north-west N,W knight jump.

NF3policy = np.zeros((8,8,73))
NF3policy[ columns['g'] , 1 - 1 , codes[("knight", 'N' , 'W')]] = 1

Generally, the policy is a probability distribution over all possible moves, so the policy tensor would have several non-zero probability values in it. For example an opening policy that does 1.e4 or 1.Nf3 with 50/50 probability would be:

openingPolicy = (e4policy + NF3policy) / 2 

Hope this clears things up.

",20538,,,,,4/14/2021 18:51,,,,3,,,,CC BY-SA 4.0 27338,1,,,4/14/2021 22:30,,0,113,"

I am currently reading the Least Squares GAN paper, but I cannot interpret one of its figures.

Explanation of the figure goes like this:

Figure 1: Illustration of different behaviors of two loss functions. (a): Decision boundaries of two loss functions. Note that the decision boundary should go across the real data distribution for a successful GANs learning. Otherwise, the learning process is saturated. (b): Decision boundary of the sigmoid cross entropy loss function. It gets very small errors for the fake samples (in magenta) for updateing G as they are on the correct side of the decision boundary. (c): Decision boundary of the least squares loss function. It penalize the fake samples (in magenta), and as a result, it forces the generator to generate samples toward decision boundary.

I already know the vanilla GAN structure, but I cannot understand why the decision boundary looks like this. Any help will be appreciated.

",41615,,18758,,1/17/2022 6:56,1/17/2022 6:56,Decision boundary figure in Least square GAN paper,,1,1,,,,CC BY-SA 4.0 27340,2,,16556,4/15/2021 3:08,,0,,"

Recently, I've been thinking about this question as well. After reading several papers, I finally came up with some thoughts about the surrogate model. In FEM (the finite element method), we try to find a weak form to approximate the strong form so that we can solve the weak form analytically (weak form: approximation equation; strong form: PDE in the real world). In my opinion, the surrogate model can be regarded as the 'weak form'. There are many methods that can form a surrogate model, and if we use a NN model as the surrogate model, the training process is equivalent to 'solving analytically'.

",46281,,,,,4/15/2021 3:08,,,,0,,,,CC BY-SA 4.0 27341,1,27392,,4/15/2021 8:43,,0,2746,"

In VAEs, we try to maximize the ELBO $= \mathbb{E}_q [\log p(x \mid z)] - D_{KL}(q(z \mid x) \,\|\, p(z))$, but I see that many implement the first term as the MSE of the image and its reconstruction. Here's a paper (section 5) that seems to do that: Don't Blame the ELBO! A Linear VAE Perspective on Posterior Collapse (2019) by James Lucas et al. Is this mathematically sound?

",46267,,2444,,6/12/2022 8:40,6/12/2022 8:46,"In variational autoencoders, why do people use MSE for the loss?",,2,9,,,,CC BY-SA 4.0 27345,1,,,4/15/2021 13:30,,3,498,"

I know that, when using sigmoid, you only need 1 output neuron (binary classification), while for softmax it's 2 neurons (the multiclass formulation applied to two classes). But in terms of performance (if there is any difference), which of these 2 approaches works better, or when would you recommend using one over the other? Or maybe there are certain situations when using one of these is better than the other. Any comments or shared experience will be appreciated.

",41591,,40434,,4/15/2021 22:10,7/10/2021 0:11,What are the pros and cons of using sigmoid or softmax approach when dealing with 2 classes?,,2,2,,,,CC BY-SA 4.0 27346,1,,,4/15/2021 14:01,,0,167,"

In trying to understand neural networks better, I've come to a tentative notion that an activation function aims to build the function being approximated via linear combinations, with biases and weights as the constants, like Fourier sums and other orthogonal basis expansions.

How, then, can one neural network layer use one activation function, like a sigmoid, while another, like the output layer, uses softmax? How do we know that a linear combination of sigmoids and something else can still build that function no matter what? To me, it's like saying a function is approximated using sine functions with $N$ different $k$ values, and then a few Hermite polynomials are randomly thrown in as well. In this case, Hermite polynomials and the sine function aren't even orthogonal (to be honest, I haven't checked, but I'd assume they're not).

This question highlights some misconceptions I have about activation functions, perhaps, and I'd like to know where I'm going wrong here.

",45875,,40434,,4/19/2021 22:23,1/10/2023 2:01,Why can a neural network use more than one activation function?,,1,2,,,,CC BY-SA 4.0 27348,1,27353,,4/15/2021 17:06,,0,181,"

I am looking at training the Scaled YOLOv4 on TensorFlow 2.x, as can be found at this link. I plan to collect the imagery, annotate the objects within the image in VOC format, and then use these images/annotations to train the large-scale model. If you look at the multi-scale training commands, they are as follows:

python train.py --use-pretrain True --model-type p5 --dataset-type voc --dataset dataset/pothole_voc --num-classes 1 --class-names pothole.names --voc-train-set dataset_1,train --voc-val-set dataset_1,val  --epochs 200 --batch-size 4 --multi-scale 320,352,384,416,448,480,512 --augment ssd_random_crop

As we know that Scaled YOLOv4 (and any YOLO algorithm at that) likes image dimensions divisible by 32, I have plans to use larger images of 1024x1024. Is it possible to modify the --multi-scale commands to include larger dimensions such as 1024, and have the algorithm run successfully?

Here is what it would look like when modified:

--multi-scale 320,352,384,416,448,480,512,544,576,608,640,672,704,736,768,800,832,864,896,928,960,992,1024
",32750,,,,,4/15/2021 20:07,Object Detection: Can I modify this script to support larger images (Scaled YOLOv4)?,,1,0,,,,CC BY-SA 4.0 27349,2,,27346,4/15/2021 17:28,,0,,"

First of all, it looks like you are under the impression that a neural network is structured like this (example for 4 inputs and outputs):

$$ \begin{array}{rcl} y_1 & = & \text{sigmoid}(w_{11}x_1 + w_{12}x_2+w_{13}x_3+w_{14}x_4+b_1)\\ y_2 & = & \text{sigmoid}(w_{21}x_1 + w_{22}x_2+w_{23}x_3+w_{24}x_4+b_2)\\ y_3 & = & \text{softmax}(w_{31}x_1 + w_{32}x_2+w_{33}x_3+w_{34}x_4+b_3)\\ y_4 & = & \text{softmax}(w_{41}x_1 + w_{42}x_2+w_{43}x_3+w_{44}x_4+b_4)\\ \end{array} $$

If that is how you understand it, then you've missed the point of "deep" neural networks. (Sequential) deep neural networks are structured in several layers:

$$ l_i^1 = \sigma^1(\sum_jw_{ij}^1x_j+b^1_i) \\ l_i^2 = \sigma^2(\sum_jw_{ij}^2l^1_j+b^2_i) \\ \ddots\\ l_i^\alpha = \sigma^\alpha(\sum_jw_{ij}^\alpha l^{\alpha-1}_j+b^\alpha_i) \\ y_i = \sigma^{out}(\sum_jw_{ij}^{out} l^\alpha_j+b^{out}_i) \\ $$

The activation function $\sigma^l$ of a single layer stays the same.
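
As a small illustration, here is a Keras sketch (made-up layer sizes) where each layer uses its own single activation:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(4,)),
    Dense(8, activation='sigmoid'),   # sigma^1: sigmoid for the whole hidden layer
    Dense(8, activation='sigmoid'),   # sigma^2: sigmoid again
    Dense(3, activation='softmax'),   # sigma^out: softmax on the output layer
])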

To me, it's like saying a function is approximated using sine functions with N different k values and then also randomly a few Hermite polynomials are thrown in as well.

To follow this analogy: you are mistaken in thinking that we are simultaneously trying to decompose over $\sin$ functions and Hermite polynomials. What actually happens is that we first decompose over $\sin$ functions and then decompose over Hermite polynomials. This can actually make sense from a practical point of view.

Hermite polynomials and the sine function aren't even orthogonal (to be honest I haven't checked but I'd assume they're not).

They are not, but sigmoids (or ReLUs) are not orthogonal to each other either. Orthogonality has nothing to do with any of it.

",20538,,,,,4/15/2021 17:28,,,,3,,,,CC BY-SA 4.0 27350,1,,,4/15/2021 18:51,,1,67,"

Is the reason why linear activation functions are usually pretty bad at approximating functions the same reason why combinations of Hermite polynomials or combinations of sines and cosines are better at approximating a function than combinations of linear functions?

For example, regardless of the number of terms in a combination of linear functions, the function will always be of the form $y = mx + b$. However, if we're summing sines, you absolutely cannot express a combination of sines and cosines as something of the form $A \sin{bx}$. For example, a combination of three sinusoids cannot be simplified further than $A \sin{bx} + B \sin{cx} + D \sin{ex}$.

Is this fact essentially why the Fourier series is able to approximate functions (besides, obviously, the fact that $A \sin{bx}$ is orthogonal to $B \sin{cx}$)? Because, if it could be simplified into one sinusoid, it could never approximate an arbitrary function, since it would have lost its expressive power? Whereas linear functions summed up gain no further ability to approximate, things like sinusoids actually begin to approximate really well with enough terms and the right constants.

In that vein, is this the reason why non-linear activation functions (also called non-linear classifiers?) are generally valued more than linear ones? Because linear activation functions are simply lousy function approximators, while, with enough constants and terms, non-linear activation functions can approximate any function?

",45875,,,,,4/15/2021 21:40,Trying to understand why nonlinearity is important for neural networks by analogy,,2,0,,,,CC BY-SA 4.0 27351,2,,27325,4/15/2021 19:35,,3,,"

Whilst you're right that for any continuous distribution $P(X = x) = 0 \;; \forall x \in \mathcal{X}$ where $\mathcal{X}$ is there support of the distribution, they are not referring to probabilities here, rather they are referring to density functions (though this should really be denoted with a lower case $p$ to avoid confusion such as this).

$p(x|z)$ is a conditional distribution, which is also allowed in the continuous case -- you can also 'mix and match', i.e. $x$ could be continuous and $z$ could be discrete, and vice-versa.

In the paper, all the authors are meaning when they write $p(x|z)$ is the density of $x$ conditioned on $z$; in VAE's with an image application this is the conditional density of the image $x$ given your latent vector $z$.

",36821,,,,,4/15/2021 19:35,,,,0,,,,CC BY-SA 4.0 27352,2,,27350,4/15/2021 20:03,,0,,"

OK, here is an analogy for you. The equation for a neuron is wx + b, which is equivalent to a straight line. If we don't apply a non-linearity, we will be stuck with a straight line forever. So, this type of network won't even be able to model points randomly distributed on a unit circle.

What does non-linearity do? If you look at the graphs of x to the power 2, 3, 4 and so on, you see that with each increase in power the line gets bent further, a bit like a sine or cosine curve. That bending of the straight line allows us to model boundaries with arbitrary shapes.

The more difficult the boundary between the classes is to model, the more bends in the line you need, and so you keep increasing the number of layers in the neural network.

",37203,,,,,4/15/2021 20:03,,,,0,,,,CC BY-SA 4.0 27353,2,,27348,4/15/2021 20:07,,1,,"

Yes, the functionality should is there. But, don't you think you are overdoing the scales. You have at least 18 scales mentioned here. Too much of anything is bad. There is a reason it likes things divisible by 32 because at that increase in size something more meaningful will show up in the image. Spamming sizes like this won't help you at all, it would rather waste your time.

",37203,,,,,4/15/2021 20:07,,,,2,,,,CC BY-SA 4.0 27354,2,,27345,4/15/2021 20:09,,1,,"

Sigmoid is used for binary cases, and softmax is its generalized version for multiple classes. But, essentially, what they do is exaggerate the distances between the various values.

If you have values on a unit sphere, applying sigmoid or softmax to those values would push the points towards the poles of the sphere.

",37203,,,,,4/15/2021 20:09,,,,0,,,,CC BY-SA 4.0 27355,1,,,4/15/2021 20:12,,1,34,"

I have time series data with a somewhat unusual cost/reward function (I haven't seen it before).

The model must predict a $Y$ value for any $X(t)$.

The reward is computed as follows: the model receives a reward equal to $Y_\text{true} * Y_\text{prediction}$. But if the reward is positive, the model won't receive any positive reward in the next $5$ time steps (it can still get negative rewards at any time). This means that sometimes it is better for the model to predict 0 and wait for a better reward.

I have two questions:

  1. Is it a supervised learning or reinforcement learning problem?

  2. If it is supervised learning, which optimization method should I use for it?

",45746,,2444,,4/16/2021 13:43,4/16/2021 13:43,"Is this a supervised or reinforcement learning problem, and which algorithm should I use to solve it?",,0,6,,,,CC BY-SA 4.0 27356,2,,27350,4/15/2021 21:40,,2,,"

Your analogy is correct, except it is not really an "analogy". Sin is an activation function - in past works (before modern deep learning boom) it was rather standard to see it listed as a possible activator.

So your expression $\sigma(x) = A\sin ax + B \sin bx + D \sin ex$ is of a neural network with one 3-neuron layer and a single output linear neuron:

$$\sigma(x) = \sum_i V_i \sin\left(\sum_jW_{ij} x_j + b_i\right)+\beta$$

With all biases being zero $b_i=\beta=0$, output weights are $V_i = \left(A,B,C\right)$ and the inner weight matrix is diagonal: $$W_{ij} = \begin{pmatrix}a&0&0\\0&b&0\\0&0&e\end{pmatrix}$$

",20538,,,,,4/15/2021 21:40,,,,1,,,,CC BY-SA 4.0 27357,2,,27345,4/16/2021 1:31,,1,,"

I commonly use softmax for all 2-class or k-class problems, basically, because I always like to have an output node for each class. For sigmoid, i.e., logistic, you can estimate MSE for each sample using the relationship

$E_i = \sum_{c=1}^C (y_c - \hat{y}_c)^2$,

where $C$ is the number of classes, $y_c$ is 0 or 1 for true class membership, and $\hat{y}_c$ is the predicted class membership of the $i$th object. For example, the target or outcome truth for an object in class 2 of a 4-class problem could be $y_i=(0,1,0,0)$ while the predicted output at each of the 4 nodes could be $\hat{y}_i=(0.01,0.2,0.5,0.001)$, suggesting the predicted class is 3 (greatest probability from softmax).

During the particular training epoch, calculate $MSE$ for all the training objects as

$MSE = \frac{1}{n}\sum_{i=1}^n E_i$

Obviously, use of cross-entropy is a variation on a theme.

",,user46301,,user46301,7/10/2021 0:11,7/10/2021 0:11,,,,0,,,,CC BY-SA 4.0 27359,2,,27318,4/16/2021 1:54,,1,,"

I no longer really use validation that much, but rather only training and testing. Why? Because I mostly follow Ron Kohavi's (Stanford Univ) approach to CV. I have done a lot of validation, but it seemed to be overkill, essentially causing me to ask why I have this very small-sampled parameter watch on the side from which I am supposed to learn. You know an ANN is overfitting when the training error diverges away from the testing error -- it's an indication the model is breaking up.

",,user46301,,user46301,7/10/2021 0:12,7/10/2021 0:12,,,,0,,,,CC BY-SA 4.0 27362,1,27367,,4/16/2021 9:26,,0,352,"

I'm doing a TensorFlow tutorial, where they convert an array of the numbers [1,2,3] to a tensor like this:

const xs = tf.tensor2d([1, 2, 3], [3, 1])

The shape is [3, 1] because there are 3 rows with 1 number each. My question is: why would they use a 2D tensor, isn't this exactly the same as:

const xs = tf.tensor1d([1, 2, 3])
",11620,,2444,,4/16/2021 13:32,4/16/2021 13:32,What's the difference between a 1d tensor and a 2d tensor with 1 dimension?,,1,3,,,,CC BY-SA 4.0 27363,1,,,4/16/2021 9:31,,2,110,"

I was discussing the topic of self-supervised learning with a colleague. After a while we realized we were using different definitions. That's never helpful.

Both of us were introduced to self-supervised learning by reading or listening to Yann LeCun. He is renaming (part of) unsupervised learning to self-supervised learning. For example in this Facebook post.

Probably the definitions of unsupervised and self-supervised learning overlap. But to me the terms are not interchangeable. For example, a prototypical example of age-old unsupervised learning technique is k-means. To me that is unsupervised but not self-supervised learning.

Is Yann LeCun renaming the entire concept of unsupervised learning to self-supervised learning? More specifically, is it his opinion that we should call clustering and anomaly detection self-supervised learning? And, in the limit, does he call k-means self-supervised learning?

References are appreciated.

",11666,,11666,,4/16/2021 10:10,4/16/2021 10:10,Does Yann LeCun consider k-means self-supervised learning?,,0,3,,,,CC BY-SA 4.0 27364,2,,13720,4/16/2021 10:38,,0,,"

The work was done through the following:

  1. Extract feature points using a Detector
  2. Extract the descriptor for these feature points
  3. Match feature points using a similarity measure between the two descriptors from the two meshes.
",27587,,,,,4/16/2021 10:38,,,,0,,,,CC BY-SA 4.0 27366,1,,,4/16/2021 12:34,,1,53,"

I want to try something with image creation via NNs. I have come across Variational Autoencoders and Generative Adversarial Networks as possible solutions, but have only found image creation with CIFAR-100 and GANs (two class labels, not continuous). I'm looking for an idea for generating a picture from four continuous labels (age and rotation around the X-, Y-, Z-axes). Is there anything usable for this?

I'm especially looking for a VAE model for this task.

",46314,,46314,,4/16/2021 12:39,4/16/2021 13:42,"Are there architectures to generate pictures from four labels? (VAEs, GANs)",,1,0,,,,CC BY-SA 4.0 27367,2,,27362,4/16/2021 13:25,,1,,"

The required shape of the tensor $T$ depends on the shape of other tensors that are involved in the same operations of that same tensor $T$ and the required/desired shape of the resulting tensor, in the same way that the number of columns of the matrix $M \in \mathbb{R}^{n \times m}$ needs to match the number of rows of the matrix $M' \in \mathbb{R}^{n' \times m'}$ when you perform the matrix multiplication $M M'$, i.e. in order for $M M'$ to be well-defined, $m = n'$.

So, in your case, although the tensors contain the same elements, it might not be possible to use both in the same operations. Without more context/details, I cannot specifically answer why the first tensor was used in the tutorial you're mentioning.

",2444,,,,,4/16/2021 13:25,,,,0,,,,CC BY-SA 4.0 27368,2,,27366,4/16/2021 13:42,,1,,"

This is quite a difficult task and is still an open area of research. The idea behind the GANs is to map latent features (e.g. rotation, age) to an output image, but the problem is that the source data comes from an unknown probability distribution. GANs aim to approximate it by sampling this latent feature vector from a simple known distribution (e.g. normal).

However, such a distribution usually won't fit the target distribution and hence the latent features become entangled as they will reproduce statistics of the real data (e.g. gender and beard)

There are several attempts to solve this problem. One of them is Conditional GAN. For example, with a conditional GAN it can be possible to translate a segmentation mask into an image (Pix2Pix). Another approach has been proposed in StyleGAN. The authors introduce an intermediate feature vector that the model learns to disentangle in an unsupervised manner. Each value in this intermediate represents a meaningful attribute of the resulting image, but it is still not possible to control the latent features in a supervised manner. That is, after training you should manually test each of them to identify its relations.

",12841,,,,,4/16/2021 13:42,,,,0,,,,CC BY-SA 4.0 27369,2,,27341,4/16/2021 14:03,,2,,"

On page 5 of the VAE paper, it's clearly stated

We let $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from $\mathbf{z}$ with a MLP (a fully-connected neural network with a single hidden layer, see appendix $\mathrm{C}$ ).

...

As explained above and in appendix $\mathrm{C}$, the decoding term $\log p_{\boldsymbol{\theta}}\left(\mathbf{x}^{(i)} \mid \mathbf{z}^{(i, l)}\right)$ is a Bernoulli or Gaussian MLP, depending on the type of data we are modelling.

So, if you are trying to predict real numbers (in the case of images, these can be the RGB values in the range $[0, 1]$), then you can assume $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ is a Gaussian.

It turns out that maximising the Gaussian likelihood is equivalent to minimising the MSE between the prediction of the decoder and the real image. You can easily show this: just replace $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ with the Gaussian pdf, then maximise that wrt the parameters, and you should end up with something that resembles the MSE. G. Hinton shows this in this video lesson. See also this related answer.

So, yes, minimizing the MSE is theoretically founded, provided that you're trying to predict some real number.

When the binary cross-entropy (instead of the MSE) is used (e.g. here), the assumption is that you're maximizing a Bernoulli likelihood (instead of a Gaussian) - this can also be easily shown.
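For completeness, here is a sketch of that last claim (assuming each component $x_j$ of $\mathbf{x}$ lies in $\{0,1\}$ or $[0,1]$, and the decoder outputs $\hat{x}_j$, the Bernoulli parameter for component $j$):

$$\log p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z}) = \log \prod_j \hat{x}_j^{x_j}(1-\hat{x}_j)^{1-x_j} = \sum_j \left[ x_j \log \hat{x}_j + (1-x_j)\log(1-\hat{x}_j) \right],$$

which is exactly the negative of the binary cross-entropy, so maximizing this Bernoulli likelihood is the same as minimizing the binary cross-entropy.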

",2444,,2444,,6/12/2022 8:46,6/12/2022 8:46,,,,3,,,,CC BY-SA 4.0 27371,1,,,4/16/2021 16:30,,0,55,"

I've come across the theorem "Simple Perceptron Convergence Theorem" for the first time, here: https://zaguan.unizar.es/record/69205/files/TAZ-TFG-2018-148.pdf, page 27 (it's in Spanish).

Are there others like this one but for the Multilayer Perceptron?

Could someone please point me to them?

Thank you in advance.

",44999,,44999,,4/16/2021 16:52,4/21/2021 23:18,What are the math theorems regarding the Multilayer Perceptron?,,0,6,,,,CC BY-SA 4.0 27372,1,,,4/16/2021 16:56,,1,64,"

In the context of a neural network $\hat{\mathbf{y}} = f_\theta(\mathbf{x})$ with parameters $\theta$ that is trained to perform regression, such that the prediction $\hat{\mathbf{y}} = [\hat{y}_1,\hat{y}_2,...,\hat{y}_N]$ is close to the target $\mathbf{y} = [y_1,y_2,...,y_N]$, the mean squared-error (MSE) loss function is: $$ \mathcal{L}(\mathbf{y},\hat{\mathbf{y}}) = \frac{1}{N} \sum_{i=1}^N (y_i - \hat{y}_i)^2 $$ The parameters $\theta$ are then adjusted using the gradient descent update rule: $$ \theta_{k+1} \leftarrow \theta_{k} - \alpha \cdot \nabla \mathcal{L}(\mathbf{y},\hat{\mathbf{y}}_{\theta_k}) $$ where $\alpha$ is the learning rate. I am aware that if $\alpha$ is too small, the parameters $\theta$ might converge too slowly, or never converge, to the optimal set of parameters $\theta^*$, and if $\alpha$ is too large, the iterates of $\theta$ could oscillate and also never converge. My question has to do with the latter scenario, where $\alpha$ is too big, which leads to overshooting and oscillation.

A good way of choosing $\alpha$ is using backtracking line search. However, because the neural network has many parameters, it is not practical to perform line search, and $\alpha$ needs to be chosen using another way.

Is it possible to allow a larger value of the learning rate before overshooting and oscillation by "elongating" valleys in the MSE loss function $\mathcal{L}(\mathbf{y},\hat{\mathbf{y}})$? More precisely, by transforming $\mathbf{y}$ and $\hat{\mathbf{y}}$ in some way before computing the loss? For example, in practice, I have found the following modification to the MSE loss function to be very helpful in avoiding overshooting and oscillation, even with large learning rates: $$ \mathcal{L}(\mathbf{y},\hat{\mathbf{y}}) = \frac{1}{N} \sum_{i=1}^N (\log(y_i + \epsilon) - \log(\hat{y}_i + \epsilon))^2 $$ Where $\epsilon$ is a small value. However, I am not sure why this modification helps.
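For reference, here is a minimal NumPy sketch of the modified loss described above (it is not taken from any library, and it assumes non-negative targets and predictions):

import numpy as np

def log_mse(y, y_hat, eps=1e-6):
    # squared error between log-transformed targets and predictions
    return np.mean((np.log(y + eps) - np.log(y_hat + eps)) ** 2)

y = np.array([10.0, 100.0, 1000.0])
y_hat = np.array([12.0, 90.0, 1200.0])
print(log_mse(y, y_hat))  # errors are effectively measured on a relative scale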

",41856,,40434,,4/17/2021 12:14,4/17/2021 12:14,Could the inputs of the mean squared-error loss function be transformed to allow larger learning rates?,,0,3,,,,CC BY-SA 4.0 27373,2,,27316,4/16/2021 17:02,,3,,"

In the discussion about Neil Slater's answer (that he, sadly, deleted) it was pointed out that the policy $\pi$ should also depend on the horizon $h$. The decision of action $a$ can be influenced by how many steps are left. So, the "policy" in that case is actually a collection of policies $\pi_h(a|s)$ indexed by $h$ - the distance to horizon.

Alternatively, one can look at it as if our state space $\mathcal{S}$ is now extended with that integer. So we can "lift" our original finite-horizon MDP $(\mathcal{S},\mathcal{A},P,R)$ into a new infinite-horizon MDP by substituting:

$$\mathcal{S} \to \left(\mathcal{S}\times\{0,1,\dots,h\}\right) \cup \{\epsilon\}\\ \mathcal{A} \to \mathcal{A},\; R \to \tilde R,\; P \to \tilde P $$ The new reward and transition functions make sure that the horizon is decreasing, and that we are ending up in a capture state $\epsilon$ that has no future effects: $$ \tilde P(s'_{n-1}|s_n,a) = P(s'|s,a)\quad\quad \tilde P(\epsilon|s_0,a) =\tilde P(\epsilon|\epsilon,a) = 1\\ \tilde R(s_n,a,s'_{n-1}) = R(s,a,s')\quad\quad \tilde R(s_0,a,\epsilon) =\tilde R(\epsilon,a,\epsilon) = 0 $$ This way I've reduced the finite-horizon MDP to an infinite-horizon one. Thus I can reuse the result for the policy iteration convergence of infinite MDPs.

A couple of notes:

  • At first this feels like a huge increase in the state space, making the whole problem unnecessarily complex. But this complexity is intrinsic to the problem: both the policy and the value function depend on the distance to the horizon. So it is necessary to consider the extended number of unknowns in a single self-consistent manner.
  • The infinite-horizon policy iteration convergence relies on a discounting factor $\gamma < 1$. The finite-horizon doesn't need $\gamma$ for convergence. That's where I feel like I've cheated a bit.
  • I came up with this approach myself. It feels quite obvious though. I'd expect this approach to either be mistaken or be already mentioned somewhere in the literature - comments pointing out one or the other are welcome.
",20538,,20538,,4/16/2021 22:53,4/16/2021 22:53,,,,19,,,,CC BY-SA 4.0 27374,2,,27260,4/16/2021 18:16,,-2,,"

To some extent, it gets rid of low-intensity numerical noise. The conditioning of the optimization problem is always an issue, and I suspect that BatchNorm alleviates this instability.

",46325,,,,,4/16/2021 18:16,,,,2,,,,CC BY-SA 4.0 27375,1,,,4/16/2021 20:25,,0,171,"

I was reading a research paper titled A Comparative Study of A-star Algorithms for Search and rescue in Perfect Maze (2011).

I have some doubts regarding it:

1.

The Evaluation Function of $\mathrm{A}^{*}(2)[5]$ is: $$ f_{2}(i)=g_{2}(i)+h_{2}(i)+h_{2}(j) $$ Where, $j$ is the father point of the current point, $h_{2}(j)$ is the Euclidean distance from the father point of the current point to the target point. This term is added to the father point for improving the search speed because it reduces the number of nodes.

In this section (page 2, middle-right), it says that the father point is added to improve search speed, as it reduces the number of nodes searched. Is this because the added father point in some way overestimates the cost function, similar to Greedy Best-First Search? Can it be interpreted as something between $A^{*}$ and Greedy BFS? If not, what is the reason for the increase in speed?

2.

$\mathrm{A}^{*}(3)$ that employed a heuristic function with angle and distance has not been demonstrated well in this experiment, the reason is: in this experiment, we have added deviations not only on distance but also on an angle, so the $A^{*}(3)$ algorithm has no advantage in this searching.

In this section (page 3, upper-right), it says that $\mathrm{A}^{*}(3)$ is not so useful, as there are deviations in angle as well. What does this statement mean, and how are deviations in angle added? I would appreciate help in understanding $\mathrm{A}^{*}(3)$.

I need to understand why one heuristic is better than another. Is there some way to determine that apart from experimental evidence?

",46328,,2444,,5/11/2021 10:25,5/11/2021 10:25,Comparing heuristics in A* search and rescue operation,,0,4,,,,CC BY-SA 4.0 27380,1,,,4/17/2021 2:44,,0,81,"

I am working on a neural network and have plotted the performance of my model. However, the plots do not seem to fit the "trends" (which help you identify the issue with your model) presented in this illustration.

Here is the performance of my model. The loss metric I used was binary cross-entropy (since my problem is a binary classification task). Is my model over- or under-fitting? And how can you tell?

",32636,,,,,4/17/2021 2:44,Identifying if a model is over or under-fitting via graphs,,0,4,,,,CC BY-SA 4.0 27382,1,27389,,4/17/2021 10:57,,6,647,"

I was reading DT-LET: Deep transfer learning by exploring where to transfer, and it contains the following:

It should be noted direct use of labeled source domain data on a new scene of target domain would result in poor performance due to the semantic gap between the two domains, even they are representing the same objects.

Can someone please explain what the semantic gap is?

",36578,,2444,,4/19/2021 11:49,4/19/2021 23:07,"What does ""semantic gap"" mean?",,2,0,,,,CC BY-SA 4.0 27383,1,27391,,4/17/2021 14:12,,1,40,"

A company entrusts a data scientist with the mission of processing and valuing data for the research or treatment of events related to traces of computer attacks. I was wondering how he would get the training data.

I guess he would need to exploit the logs of the different devices of the clients and use statistical, Machine Learning and visualization techniques in order to bring a better understanding of the attacks in progress and to identify the weak signals of attacks... But how would he get labelled data?

He might get the logs of attacks received before, but those might not have the same signature as the attacks that are going to come later, so it might be difficult to create a reliable product?

",4738,,40434,,4/19/2021 22:41,4/19/2021 22:41,How to source training data in ML for information security?,,1,0,,,,CC BY-SA 4.0 27384,2,,27332,4/17/2021 18:33,,2,,"

Quoting the original paper:

For each target vector $x_{i,G}$, a mutant vector is generated according to $$ v_{i,G+1} = x_{r_1,G} + F\left(x_{r_2,G} - x_{r_3,G}\right)$$

And later

To decide whether or not it should become a member of generation $G + 1$, the trial vector $v_{i,G+1}$ is compared to the target vector $x_{i,G}$ using the greedy criterion.

I'd say it is pretty unambiguous that the authors' intent was to split the evolving agents' population into "generations" indexed by $G$. A new generation $G+1$ is created by applying the DE algorithm to the previous generation $G$. No agents are changed within generation $G$.

So, it looks like the answer to your question is "no" - the population of the current generation $G$ is not mutated. Your second approach is the correct one.
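To make that concrete, here is a minimal sketch of one generation of the rand/1/bin DE scheme (the fitness function and parameter values are hypothetical); note that the current generation pop is never modified in place:

import numpy as np

def de_step(pop, fitness, F=0.8, CR=0.9, rng=np.random.default_rng(0)):
    n, d = pop.shape
    new_pop = pop.copy()                       # generation G is left untouched
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])
        trial = np.where(rng.random(d) < CR, mutant, pop[i])
        if fitness(trial) <= fitness(pop[i]):  # greedy criterion
            new_pop[i] = trial
    return new_pop                             # this is generation G+1

pop = np.random.default_rng(0).normal(size=(10, 2))
next_pop = de_step(pop, fitness=lambda x: np.sum(x ** 2))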

",20538,,,,,4/17/2021 18:33,,,,1,,,,CC BY-SA 4.0 27385,1,,,4/17/2021 19:44,,1,49,"

I was reading Deep Learning of Representations for Unsupervised and Transfer Learning, and they state the following:

They have only a small number of unlabeled examples (4096) and very few labeled examples (1 to 64 per class) available to a Hebbian linear classifier (which discriminates according to the median between the centroids of two classes compared) applied separately to each class against the others.

I have searched about what a Hebbian linear classifier is, but I couldn't find more than an explanation about what Hebbian learning is, so can anybody explain what Hebbian linear classifier is?

",36578,,2193,,4/18/2021 13:27,4/18/2021 13:27,What is a Hebbian linear classifier?,,0,0,,,,CC BY-SA 4.0 27386,2,,27338,4/17/2021 22:16,,1,,"

The paper has sources that contain these plots in PDF. I've converted those to SVG and managed to rip the actual data:

xdata = [-0.6,-3.33,-1.93,0.67,-2.63,-3.1,-5.93,-1.87,-3.93,-3.47,1.0,-3.43,-3.83,0.6,-0.47,-3.6,-1.37,-4.87,-3.63,3.53,4.33,1.73,2.63,4.13,-0.13,2.33,-1.6,-6.83,-1.5,-6.37,3.73,0.2,-0.73,-3.17,2.43,-0.3,-1.2,-1.97,-0.3,-2.9,-2.13,-1.4,4.67,0.5,1.13,-6.7,1.1,1.97,0.73,-1.27,4.9,-1.7,-2.3,-5.83,-4.27,-3.93,-5.67,-6.83,-3.7,-0.73,-3.5,-3.63,-6.0,1.5,-1.23,2.23,0.17,-1.43,-0.2,-1.43,1.17,-4.03,2.87,-1.13,-0.7,0.63,-6.27,2.03,-1.23,-2.63,-1.97,-2.3,-1.33,-3.77,-6.07,4.47,0.37,-4.37,3.27,1.3,-5.47,-5.5,2.47,-0.23,-3.97,1.87,-0.37,-3.63,0.77,-3.3,-0.08,-0.95,-0.52,0.32,-0.72,-0.88,-1.78,-0.48,-1.15,-0.98,0.42,-0.98,-1.12,0.28,-0.05,-1.02,-0.32,-1.45,-1.05,1.22,1.48,0.65,0.92,1.42,0.05,0.85,-0.42,-2.05,-0.38,-1.92,1.28,0.15,-0.12,-0.88,0.88,0.02,-0.28,-0.52,0.02,-0.82,-0.58,-0.35,1.58,0.25,0.45,-2.02,0.45,0.72,0.35,-0.28,1.65,-0.42,-0.62,-1.75,-1.25,-1.15,-1.68,-2.05,-1.08,-0.12,-1.02,-1.05,-1.78,0.58,-0.28,0.82,0.15,-0.35,0.05,-0.35,0.48,-1.18,1.02,-0.25,-0.12,0.32,-1.88,0.75,-0.28,-0.72,-0.52,-0.62,-0.32,-1.08,-1.82,1.52,0.22,-1.28,1.15,0.52,-1.62,-1.65,0.88,0.02,-1.15,0.68,-0.02,-1.05,0.35,-0.95]
ydata = [-1.47,-3.18,3.97,5.82,3.47,-1.12,4.58,-6.23,3.03,5.52,-1.07,3.38,8.57,0.43,2.33,4.42,1.12,-4.57,-0.07,1.82,-0.18,0.93,-0.57,1.38,0.57,0.03,-4.82,0.68,-0.97,-1.82,7.82,-3.38,3.78,-1.03,7.52,2.53,3.32,1.43,-2.32,2.42,-5.57,0.53,-1.97,-3.07,-0.28,7.88,-3.07,2.88,1.93,-3.68,3.12,-0.62,7.12,2.22,0.32,6.28,0.78,1.72,-0.07,4.78,2.42,1.12,-1.93,-6.68,3.18,0.53,3.32,-5.18,2.68,-1.93,2.68,8.48,-1.28,2.58,-0.72,6.18,4.82,3.08,2.22,-0.93,0.57,-0.28,4.12,-1.03,0.62,1.43,-4.53,-3.62,3.18,0.53,0.22,-2.72,4.22,1.03,5.88,2.62,3.88,-0.07,-0.47,4.82,-0.88,-1.53,0.82,1.53,0.62,-0.82,0.88,-2.43,0.43,1.28,-0.68,0.57,2.22,-0.22,0.32,0.93,-0.07,-1.97,-0.53,0.32,-0.28,-0.03,-0.47,0.22,-0.18,-0.28,-1.97,-0.38,-0.72,-1.18,2.22,-1.43,0.82,-0.82,2.12,0.43,0.62,0.03,-1.12,0.28,-2.22,-0.28,-0.82,-1.32,-0.43,1.88,-1.32,0.62,0.28,-1.57,0.78,-0.62,1.82,0.12,-0.43,1.47,-0.32,-0.07,-0.53,1.12,0.28,-0.12,-1.18,-2.43,0.57,-0.12,0.68,-2.07,0.47,-1.03,0.53,2.17,-0.68,0.43,-0.62,1.62,0.93,0.68,0.28,-0.78,-0.28,-0.53,0.88,-0.82,-0.38,0.22,-1.82,-1.68,0.72,-0.18,-0.47,-1.43,1.07,-0.07,1.38,0.53,0.88,-0.53,-0.53,1.03]
fake = [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]

I feed that into a LogisticRegression classifier - which is another way of saying that you use a sigmoid for discrimination:

import numpy as np
from sklearn.linear_model import LogisticRegression

# stack the two coordinates into a (200, 2) feature matrix; labels as booleans
X, y = np.array([xdata, ydata]).T, np.array(fake, dtype=bool)
lr = LogisticRegression()
lr.fit(X, y)

And I'm plotting the true data on the left and the LogisticRegression on the right:

from matplotlib.pyplot import subplot, scatter, title, xlim, ylim

subplot(121); xlim(-10, 10); ylim(-10, 10); title("Truth")
scatter(X[~y][:, 0], X[~y][:, 1], c="r", marker=".")
scatter(X[ y][:, 0], X[ y][:, 1], c="b", marker="+")
subplot(122); xlim(-10, 10); ylim(-10, 10); title("LogisticRegression prediction")
p = lr.predict(X)
scatter(X[~p][:, 0], X[~p][:, 1], c="r", marker=".")
scatter(X[ p][:, 0], X[ p][:, 1], c="b", marker="+")

Here you can see the sigmoid decision boundary reproduced.

",20538,,,,,4/17/2021 22:16,,,,4,,,,CC BY-SA 4.0 27387,2,,8976,4/17/2021 22:42,,1,,"

I would turn this problem into graph search in which vertices are "equivalent sets of equations" and edges are "valid operations" such as substitution, division and so on, which always ensure that all states are mathematically equivalent. A very simple example of a few states and edges (to requote from your question):

State 0:
  Eq. 1: a = b + c;
  Eq. 2: s = b / a;
  Eq. 3: r = b / c.

Operation 0-->1:
  Multiply Eq. 3 by c

State 1:
  Eq. 1: a = b + c;
  Eq. 2: s = b / a;
  Eq. 3: rc = b.

Operation 0-->2:
  Substitute Eq 1 into 2

State 2:
  Eq. 1: a = b + c;
  Eq. 2: s = b / (b+c);
  Eq. 3: r = b / c.

Operation 2-->3:
  Rearrange Eq 3 so B is on right hand side.   

etc etc

Depending on how many operations you have, this does mean that the graph could be quite large, so you may not want to generate the entire graph (actually, the graph will be infinite if the number of operations allowed is unlimited). It's also possible that the graph will contain cycles, since there may be multiple ways to reach the same state.

You will have to think about how to develop data structures for representing the states and operations. Since this is a search problem, it will help to define what your "goal" states look like (e.g. states where s is defined in terms of r).

To search the graph, I recommend one of these approaches:

  • Breadth-first search: this will find the shortest number of operations needed to reach any goal states
  • Best-first search: if the graph is too large for breadth first, you may need to guide the search to the goal states using a heuristic, e.g. you could rate states by how few variables are needed on the LHS of equations or something like that -- some experimentation will be needed here.

The exact choice of search algorithm will depend on how big the graph is, which in turns depends on how many operations/edges you want.
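As a rough illustration of the breadth-first option (the state representation and successor function here are placeholders that you would replace with your own equation data structures), a generic BFS skeleton could look like this:

from collections import deque

def bfs(initial_state, successors, is_goal):
    # returns the list of operations leading to a goal state, or None
    frontier = deque([(initial_state, [])])
    visited = {initial_state}                     # avoid revisiting states (cycles)
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for operation, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, path + [operation]))
    return None

# toy usage with integer "states" standing in for sets of equations
path = bfs(0, lambda s: [("add1", s + 1), ("add2", s + 2)], lambda s: s == 5)
print(path)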

",7760,,7760,,4/17/2021 23:02,4/17/2021 23:02,,,,0,,,,CC BY-SA 4.0 27388,2,,25459,4/17/2021 23:51,,4,,"

The Attention is All you Need has this footnote at the passage motivating the introduction of the $1/\sqrt{d_k}$ factor:

  1. To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean 0 and variance 1. Then their dot product, $q \cdot k = \sum^{d_k}_{i=1}q_ik_i$ has mean 0 and variance $d_k$.

I suspect that it hints at the cosine-vs-dot difference intuition. The cosine similarity ignores the magnitudes of the input vectors - you can scale $h^{enc}$ and $h^{dec}$ by arbitrary factors and still get the same value of the cosine distance.

The footnote talks about vectors with normally distributed components, clearly implying that their magnitudes are important. This suggests that the dot product attention is preferable, since it takes into account magnitudes of input vectors. And the magnitude might contain some useful information about the "absolute relevance" of the $Q$ and $K$ embeddings.

Edit after more digging: Note that transformer architecture has the Add & Norm blocks after each attention and FF block. At first I thought that it settles your question: since every input vector is normalized then cosine distance should be equal to the dot product. That's incorrect though - the "Norm" here means Layer Normalization - analogously to batch normalization it has trainable mean and scale parameters, so my point above about the vector norms still holds.

Finally, since apparently we don't really know why BatchNorm works, the same holds for LayerNorm. Interestingly, it seems like (1) BatchNorm, (2) LayerNorm, and (3) your question about normalization in the attention mechanism all look like different ways of looking at the same thing, which has yet to be discovered and clearly stated.

",20538,,16871,,4/19/2021 8:55,4/19/2021 8:55,,,,5,,,,CC BY-SA 4.0 27389,2,,27382,4/18/2021 4:52,,5,,"

In terms of transfer learning, the semantic gap means different meanings and purposes behind the same syntax between two or more domains. For example, suppose that we have a deep learning application that detects and labels a sequence of actions/words $a_1, a_2, \ldots, a_n$ in a video/text as a "greeting" in a society A. However, this knowledge from society A cannot be transferred to another society B in which the same sequence of actions means "criticizing"! Although the example is very abstract, it shows the semantic gap between the two domains: you can see the different meanings behind the same syntax or sequence of actions in the two societies A and B. This phenomenon is called the "semantic gap".

",4446,,,,,4/18/2021 4:52,,,,0,,,,CC BY-SA 4.0 27390,1,27427,,4/18/2021 9:35,,0,306,"

I'm looking at the loss function: mean squared error with gradient descent in machine learning. I'm building a single-neuron network (perceptron) that outputs a linear number. For example:

Input * Weight + Bias > linear activation > output.

Let's say the output is 40 while I expect the number 20. That means the loss function has to correct the weights+bias from 40 towards 20.

What I don't understand about mean squared error + gradient descent is: why is this number 40 displayed as a point on a parabola?

Does this parabola represent all possible outcomes? Why isn't it just a line? How do I know where on the parabola the point "40" is?

",11620,,11620,,4/19/2021 21:38,4/20/2021 12:55,Why is loss displayed as a parabola in mean squared error with gradient descent?,,1,5,,,,CC BY-SA 4.0 27391,2,,27383,4/18/2021 9:49,,2,,"

You have implicitly assumed that supervised learning is being used, given the assumption that labels are needed. But this might lead to the following potential problems:

  • Log file data tends to be huge, and it may be infeasible to label due to the time/expertise required;
  • Then there's the class imbalance problem, in that attack examples are far far rarer than "normal" behaviour, and this can mess up supervised models during both learning and evaluation stages;
  • And finally even if the data is labelled, a supervised model is unlikely to be useful in detecting completely novel attacks because it has not been trained to recognise these.

I think a far easier way to approach these kinds of problems would be unsupervised learning: model normal patterns and behaviours in the logs, and then flag any deviations from normality. It may be an attack or it may be new normal behaviour. In the latter case, the model can be updated. There are various approaches here, such as clustering, outlier detection and possibly self-supervised learning, that might be useful. Dimensionality reduction techniques might also be useful to visualise clusters of "normal" behaviour that can be compared to abnormal patterns.

",7760,,,,,4/18/2021 9:49,,,,1,,,,CC BY-SA 4.0 27392,2,,27341,4/18/2021 9:56,,2,,"

If $p(x|z) \sim \mathcal{N}(f(z), I)$, then

\begin{align} \log\ p(x|z) &\sim \log\ \exp(-(x-f(z))^2) \\ &\sim -(x-f(z))^2 \\ &= -(x-\hat{x})^2, \end{align}

where $\hat{x}$, the reconstructed image, is just the distribution mean $f(z)$.

It also makes sense to use the distribution mean when using the decoder (vs. just when training), as it is the one with the highest pdf value. So, the decoder produces a distribution from which we take the mean as our result.

",46267,,2444,,6/12/2022 8:14,6/12/2022 8:14,,,,1,,,,CC BY-SA 4.0 27393,1,27406,,4/18/2021 10:49,,1,87,"

I am currently building a neural network with genetic algorithms that learns to fly a 2D drone to a target. My goal is that it achieves all tasks as fast as possible, but I also want the drone to fly stably and upright. The way I tried to calculate the fitness was to create a function that has the greatest value when the drone does everything I want right.

fitness += 1/distToTarget + cos(drone_angle)

My current inputs are:

difference_target_X
difference_target_Y
velocity_X
velocity_Y
angular_velocity (degree per second)
drone_angle     | = 0;     |_ = 90     _| = -90

The outputs (I don't think they are important, but):

left_thruster_angle
left_thruster_boost
right_thruster_angle
right_thruster_boost

The NN is programmed in Unity; the drone uses a 2D rigid body, and the NN adds a force to each thruster at the corresponding angle.

How do I get the drone to set the best weights to fulfill all tasks: fly stable, fly fast, fly to the target?

",46298,,46298,,4/19/2021 18:24,4/19/2021 18:24,How to design fitness function for multiple objectives?,,1,0,,,,CC BY-SA 4.0 27397,1,27398,,4/18/2021 14:52,,-1,40,"

New to RL here.
As far as I understood from RL courses, there are two sides to reinforcement learning: policy evaluation, which is the task of knowing the value function for a certain policy, and control, which is maximizing the reward or the value function. What if I have a heuristic agent that achieves almost acceptable performance in an environment, but I want to find a policy that tends towards the optimal policy: is there a way to cut out the first half of the task by teaching the agent? Would a buffer of (state, action) pairs from the heuristic agent be sufficient?

",4929,,,,,4/18/2021 18:06,Using states (features) and actions from a heuristic model to estimate the value function of a reinforcement learning agent,,1,0,,4/19/2021 11:47,,CC BY-SA 4.0 27398,2,,27397,4/18/2021 18:06,,0,,"

Not sure I fully understand your question, are you asking:

  1. if you can skip the policy evaluation part?
  2. if you can speedup the training of your agent by mimicking the behaviour of your heuristic agent first and then optimise from there?

If the first, I am not sure exactly what you are asking, as most RL algorithms don't have an explicit policy evaluation phase, and even for the ones which do (policy iteration, for example), you would be evaluating the wrong policy.

If you are asking about the second one (warm-starting your agent), that's very much possible. You would do it differently based on the algorithm:

  • With DQN, you have your buffer which contains (state, action, next_state, reward) tuples. You could just let loose your heuristic agent in your environment and record these tuples and put them into the buffer. Then you could train your DQN agent offline (without interacting with the environment) for a while until it performs roughly as well as your heuristic one and then you could let it loose in your environment so it can get better.
  • With something like an Actor-Critic you might not have a buffer, so the process is slightly different. You also have two components, the policy function and the value function. You can train the policy function to mimic what your heuristic agent does by gathering a lot of experience from the heuristic agent and basically doing supervised learning on that experience. You would also need to train your value function, but that would be very similar to its normal training.

Hope this helps, let me know if anything is unclear.

",43565,,,,,4/18/2021 18:06,,,,3,,,,CC BY-SA 4.0 27400,2,,23672,4/18/2021 20:53,,1,,"

I think the answer is yes. This problem is known as "where to transfer"; please refer to the paper Learning What and Where to Transfer to see what I am talking about, and correct me if I have misunderstood something.

",36578,,,,,4/18/2021 20:53,,,,0,,,,CC BY-SA 4.0 27403,2,,27335,4/19/2021 2:09,,0,,"

I think I understand the process now. Let $x$ have dimension $(n, p)$ and $W$ have dimension $(p, q)$. In a neural network, $n$ denotes the number of samples in the batch, and $p, q$ denote the input and output dimensions of each layer, respectively. We don't use the bias $b$ here, just for simplicity.

I had trouble understanding the process of differentiation because of some confusion about vector (matrix) derivatives. By viewing the matrix as a set of column vectors, I can now solve the problem.

Let $x$ = $\begin{bmatrix} x_{11}, &x_{12}, &\cdots, &x_{1p}\\ x_{21}, &x_{22}, &\cdots, &x_{2p}\\ \vdots, &\vdots, &\vdots, &\vdots\\ x_{n1}, &x_{n2}, &\cdots, &x_{np} \end{bmatrix}$, which can be written as $\begin{bmatrix} \vec{x}_{1}, &\vec{x}_{2}, &\cdots, &\vec{x}_{p} \end{bmatrix}$.

Similarly, $W$ = $\begin{bmatrix} w_{11}, &w_{12}, &\cdots, &w_{1q}\\ w_{21}, &w_{22}, &\cdots, &w_{2q}\\ \vdots, &\vdots, &\vdots, &\vdots\\ w_{p1}, &w_{p2}, &\cdots, &w_{pq} \end{bmatrix}$ = $\begin{bmatrix} \vec{w}_{1}\\ \vec{w}_{2} \\ \vdots \\ \vec{w}_{p} \end{bmatrix}$.

Then $x\times W$ = $\vec{x}_1 \vec{w}_1 + \vec{x}_2 \vec{w}_2 + \cdots + \vec{x}_p \vec{w}_p$. So $\frac{\partial(\vec{x}_1 \vec{w}_1 + \vec{x}_2 \vec{w}_2 + \cdots + \vec{x}_p \vec{w}_p)}{\partial{W}} = \begin{bmatrix} \vec{x}_{1}, &\vec{x}_{2}, &\cdots, &\vec{x}_{p} \end{bmatrix}^T.$

I still have a little trouble understanding the last part, since, actually, $\vec{x}_1 \vec{w}_1 + \vec{x}_2 \vec{w}_2 + \cdots + \vec{x}_p \vec{w}_p$ is not a scalar, it's an $(n, q)$ matrix, but at least now I understand why the result is written as above.

(Also hope to fully understand the last part.)

",46274,,,,,4/19/2021 2:09,,,,0,,,,CC BY-SA 4.0 27404,1,27405,,4/19/2021 2:58,,2,62,"

My supervised learning training data are obtained from actual data; and in real cases, there's one class that happens less often than other classes, just around 5% of all cases.

To be precise, the first 2 classes make up 95% of the training data and the last one 5%. Training while keeping the data ratio intact makes the accuracy reach 50% right at the first step and 90%+ immediately after, which doesn't make sense.

Should I exclude some data from classes 1 and 2 to make the numbers of samples of the 3 classes equal? But then it's not the real-world ratio.

",2844,,32410,,12/11/2021 8:49,12/11/2021 8:50,How to handle class imbalance when the actual data are that way,,1,0,,,,CC BY-SA 4.0 27405,2,,27404,4/19/2021 3:56,,2,,"

You can use stratified cross-validation combined with an imbalanced learning technique applied to the training data. Stratification ensures that when you split your data into train and test, the ratio of frequencies between the classes will stay the same, and therefore the test data will always be "realistic".

However, when training a model (using only the training data, of course), the imbalance may have a negative impact. Therefore, have a look at some of the imbalanced learning techniques that are out there to remedy this situation. For example, you could try these:

  • random undersampling: discard random examples from the majority classes until the ratios of class frequencies are close to 1
  • random oversampling: make random duplicates of minority class examples until the ratios of class frequencies are close to 1
  • SMOTE: like random oversampling, except that synthetic examples are created instead of random duplicates
  • balanced bagging: performs random undersampling but does so multiple times to create an ensemble of models trained on balanced subsets of the training data

etc.

You should also take care about the metrics you use to assess predictive performance on the test data. Accuracy could be misleading here, so you may instead find metrics like sensitivity and specificity (calculated for each class individually) more informative.
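Putting the stratified split and one of the resampling techniques above together, here is a minimal sketch (it assumes the third-party imbalanced-learn package is installed; the data is synthetic):

import numpy as np
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X = np.random.randn(1000, 5)
y = np.array([0] * 475 + [1] * 475 + [2] * 50)   # roughly your 95% / 5% situation

# stratify keeps the realistic class ratio in the test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# oversample only the training data
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
print(np.bincount(y_train), np.bincount(y_res))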

",7760,,32410,,12/11/2021 8:50,12/11/2021 8:50,,,,0,,,,CC BY-SA 4.0 27406,2,,27393,4/19/2021 4:20,,1,,"

Your fitness function has two objectives that are added together, but they are not necessarily on the same scale. The component cos(drone_angle) must have a value from 0..1. The component 1/distToTarget will have a range that depends on how you measure distToTarget; e.g. if distToTarget has a range 0..1000, then this part of the fitness function will always be small far from the target (e.g. a distance of 500) and massive when it gets very close (e.g. a distance of 0.1 from the target). So the contribution of both components may not always be equal. Another potential complication is that the cosine function is nonlinear and makes a very rapid transition between 0 and 1, as opposed to a smooth transition.

I recommend reworking the fitness function to make the two components more equal, e.g. something like

fitness(angle, distance)= 
    w1 * |desired_angle - angle| / maximum_angle_error
  + w2 * distance / maximum_distance

In this function that should be minimised to zero, both components contribute linearly to the final fitness, both components have a range of 0..1, and you can tune w1 and w2 to give different importance to different components depending on your preference (e.g. you may prefer w1>w2 if staying upright is the most important).
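A minimal Python version of that fitness function (the maximum values and the desired angle here are hypothetical constants you would set for your own simulation):

MAX_ANGLE_ERROR = 180.0   # degrees
MAX_DISTANCE = 1000.0     # same units as distToTarget
DESIRED_ANGLE = 0.0       # upright

def fitness(angle, distance, w1=0.5, w2=0.5):
    # both terms lie in [0, 1]; 0 is perfect, so this should be minimised
    angle_term = abs(DESIRED_ANGLE - angle) / MAX_ANGLE_ERROR
    distance_term = distance / MAX_DISTANCE
    return w1 * angle_term + w2 * distance_term

print(fitness(angle=10.0, distance=250.0))  # 0.1528...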

",7760,,,,,4/19/2021 4:20,,,,2,,,,CC BY-SA 4.0 27407,1,27410,,4/19/2021 5:34,,5,361,"

Consider the following statement(s) from Deep Learning book (p. 333, chapter 9: Convolutional Networks) by Ian Goodfellow et al.

Convolution is thus dramatically more efficient than dense matrix multiplication in terms of the memory requirements and statistical efficiency.

The book says that the statistical efficiency is due to the decrease in the number of parameters caused by convolution (using a kernel) compared to fully connected feed-forward neural networks.

What is meant by statistical efficiency in this context? And how does decrease in the number of parameters increase statistical efficiency?

",18758,,18758,,12/17/2021 0:42,12/17/2021 0:42,"What does ""statistical efficiency"" mean in this context?",,1,0,,,,CC BY-SA 4.0 27410,2,,27407,4/19/2021 8:26,,5,,"

Statistical efficiency in this context essentially means that a CNN would require fewer training examples than a fully connected network to learn. Intuitively this seems reasonable: more parameters to learn should mean more samples needed. Of course it is always desirable to minimise the number of training samples needed, so that's a definite advantage of CNNs.

There is a paper on the efficiency of CNNs which attempts to make that statement more precise. They examine the case of a convolutional network using a linear activation function.

",44413,,,,,4/19/2021 8:26,,,,0,,,,CC BY-SA 4.0 27411,2,,18594,4/19/2021 9:32,,0,,"

The state space itself seems OK but your approach has some disadvantages. First, you need to pick an initial state of size $n$. What if the optimal solution is of size $\frac{n}{4}$ or $10n$? Then the search could take a long time using this approach because you are starting far away. If there are multiple solutions, do you care which one you find or is the smallest zero subset preferred? I'm going to assume that the empty set $X'=\{\}$ is not a solution and that solutions with smaller cardinality are preferred.

I would suggest using a generative heuristic to construct the initial state and then running your hill climbing procedure from that state. For example, a generative heuristic could be something like: "pick the pair of elements ($x_1$,$x_2$) from $X$ such that $x_1 + x_2$ is as close to zero as possible, and set $X' =\{x_1, x_2\}$ as the initial state". To apply this heuristic you will need to initially check $n(n-1)$ subsets of pairs. If this is not a solution, it will hopefully be very close to the solution in the state space and therefore a good place to start.
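A small sketch of that pair heuristic (with made-up numbers):

from itertools import combinations

X = [-3, -50, -50, 100, 7]
x1, x2 = min(combinations(X, 2), key=lambda pair: abs(sum(pair)))
X_prime = [x1, x2]              # initial state for hill climbing
print(X_prime, sum(X_prime))    # [-3, 7] 4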

Another problem is the hill climbing does not remember previously visited states, and you could end up infinitely looping when you allow moves to both add and delete elements from $X'$. For example, suppose $X'=\{1\},X=\{-3,-50,-50,100\}$. Clearly the optimal solution is $X''=\{-50,-50,100\}$ but your method will add element $-3$ to $X'$ first, then remove $-3$, then pick $-3$ again etc, because these states have a lower objective value than any state with one of $-50$ or $100$ in it. So I think you need to use a more sophisticated approach that prevents visiting the same state twice.

",7760,,44920,,9/17/2021 7:46,9/17/2021 7:46,,,,0,,,,CC BY-SA 4.0 27413,1,27417,,4/19/2021 14:04,,5,302,"

OCR is still a very hard problem. We don't have universal powerful solutions. We use the CTC loss function

An Intuitive Explanation of Connectionist Temporal Classification | Towards Data Science
Sequence Modeling With CTC | Distill

which is very popular, but it's still not enough.

The simple solution would be to use object detection algorithms for recognizing every single character and combine them to form words and sentences. We already have really powerful object detection algorithms like Faster-RCNN, YOLO, SSD. They can detect even very complicated objects that are not fully visible. But I read that these object detection algorithms are very poor if you use them for recognizing characters. It's very strange since these are very simple objects, just a few lines and circles. And mainly grayscale images. I know that we use object detection algorithms to detect the regions of text on big images. And then we recognize this text. Why can't we just use object detection algorithms (small versions of popular neural networks) for recognizing single characters?

Why do we use CTC or other approaches (besides the fact that object detection would require much more labeling)? Why not object detection?

",,user40943,32410,,4/22/2021 12:13,4/22/2021 12:13,Why object detection algorithms are poor in optical character recognition?,,1,0,,,,CC BY-SA 4.0 27415,1,,,4/19/2021 16:33,,2,61,"

I can't find what's new in LaBSE v2 (https://tfhub.dev/google/LaBSE/2). What are the main highlights of v2 versus v1? And how did you find out?

",46270,,,,,4/19/2021 16:33,What's new in LaBSE v2?,,0,1,,,,CC BY-SA 4.0 27417,2,,27413,4/19/2021 19:47,,0,,"

Good question! Using Yolo to recognise characters would be a good experiment to try. It may be because of the density of characters on a page -- systems like Yolo are very good at detecting a small number of objects, e.g. 2, 3 or 10, but don't work so well when the number of objects is in the hundreds, as you might have with OCR. A better approach might be to try face detection methods that work well with large crowds.

",7760,,,,,4/19/2021 19:47,,,,0,,,,CC BY-SA 4.0 27418,1,,,4/19/2021 22:11,,-1,84,"

I am trying to train my model to classify 10 classes of hand gestures, but I don't understand why I am getting a validation accuracy approximately double the training accuracy.

My dataset is from kaggle:
https://www.kaggle.com/gti-upm/leapgestrecog/version/1

My code for training model:

print(x.shape, y.shape)
# ((10000, 240, 320), (10000,))

# preprocessing
x_data = x/255  
le = LabelEncoder()  
y_data = le.fit_transform(y)  
x_data = x_data.reshape(-1,240,320,1)   
x_train,x_test,y_train,y_test = train_test_split(x_data,y_data,test_size=0.25,shuffle=True)  
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# Training

base_model = keras.applications.InceptionV3(input_tensor=Input(shape=(240,320,3)),
                                           include_top=False, 
                                           weights='imagenet')
base_model.trainable = False
    
CLASSES = 10
input_tensor = Input(shape=(240,320,1) )
model = Sequential()
model.add(input_tensor)
model.add(Conv2D(3,(3,3),padding='same'))
model.add(base_model)
model.add(GlobalAveragePooling2D())
model.add(Dropout(0.4))
model.add(Dense(CLASSES, activation='softmax'))
model.compile(loss='categorical_crossentropy', 
              optimizer=optimizers.Adam(lr=1e-5), metrics=['accuracy'])

history = model.fit(
x_train,
y_train,
batch_size=64,
epochs=20,
validation_data=(x_test, y_test)
)

I am getting accuracy like:

Epoch 1/20  
118/118 [==============================] - 117s 620ms/step - loss: 2.4571 - accuracy: 0.1020 - val_loss: 2.2566 - val_accuracy: 0.1640
Epoch 2/20  
118/118 [==============================] - 70s 589ms/step - loss: 2.3253 - accuracy: 0.1324 - val_loss: 2.1569 - val_accuracy: 0.2512

I have tried removing the Dropout layer, changing train_test_split, but nothing works.

EDIT:

After changing the dataset to color images from https://www.kaggle.com/vbookshelf/v2-plant-seedlings-dataset , I am still getting higher validation accuracy in the initial epochs. Is that acceptable, or am I doing something wrong?

",46396,,46396,,4/21/2021 20:54,4/21/2021 20:54,How to train my model using transfer learning on inception_v3 pre-trained model?,,1,2,,,,CC BY-SA 4.0 27420,2,,1390,4/19/2021 22:45,,1,,"

No one mentioned Planning chemical syntheses with deep neural networks and symbolic AI (published in Nature - here's arxiv link). Very impressive application of deep reinforcement learning - they use Monte Carlo Tree Search with a policy network (a-la AlphaZero) to do chemical synthesis planning. Authors claim that double blind test shown that professional chemists cannot distinguish between human- and AI-generated synthesis pathways.

Speaking of Alpha* stuff - AlphaFold is a quite recent result in protein folding, which showed breakthrough-level performance compared to all the competition.

",20538,,,,,4/19/2021 22:45,,,,0,,,,CC BY-SA 4.0 27421,2,,27382,4/19/2021 23:07,,1,,"

The wiki has a concise quote by Andreas Hein, where the gap is defined by "the difference in meaning between constructs formed within different representation systems". This connotes the core problem of translating meaning between an informal language (typically natural language) and a formal language (programming language or other formal symbolic system).

Informally, the problem could be defined as "the gap in meaning between two different contexts", and we might observe this in something as simple as gestures having different meanings in different cultures. It would be less of a problem translating meaning between two formal systems.

At its core, "Semantic gap" seems to relate to the difficulty in formalizing certain fuzzy concepts in symbolic systems.

",1671,,,,,4/19/2021 23:07,,,,0,,,,CC BY-SA 4.0 27422,2,,2872,4/19/2021 23:35,,1,,"

This is a subject Dr. Joanna Bryson has been writing about for a couple of decades. Her website has a list of published papers and drafts, and one that immediately leaps to mind is "How Do We Hold AI Itself Accountable? We Can’t."

  • A core argument of Bryson is that we can't offload responsibility to AI because we can't meaningfully punish AI—current algorithms cannot be said to experience suffering

This is partly explicated in the linked paper:

What matters is that none of the costs that courts can impose on persons will matter to an AI system in the way the matter to a human. While we can easily write a program that says “Don’t put me in jail!” the fully systemic aversion to the loss of social status and years of one’s short life that a human has cannot easily be programmed into a digital artefact.

This leads to a deeper argument about the nature of computing applications in general:

generally speaking, well-designed systems are modular, and systemic stress and aversion is therefore not something that they can experience. We could add a module to a robot that consists of a timer and a bomb, and the timer is initiated whenever the robot is alone, and the bomb goes off if the timer has been running for five minutes. This would be far more destructive to the robot than ten minutes of loneliness is to a human, but it would not necessarily be any kind of motivation for that robot. For example again of a smart phone, if you added that module to your smart phone, what other component of that phone would know or care? The GPS navigator? The alarm clock? The address book? This just isn’t the way we build artefacts to work.

Genetic and other learning algorithms can certainly be designed to maximize rewards and minimize penalties, but, if they are not conscious and sentient, they can't be said to suffer—there is no coherent self to experience suffering.

",1671,,,,,4/19/2021 23:35,,,,0,,,,CC BY-SA 4.0 27423,2,,10529,4/20/2021 1:51,,0,,"

Chiming in because I had the same question and stumbled across your post. It seems like the general version of your question still has not been answered.

In general, a well-formed gradient update rule is all you need to be able to train the network. We are thinking of converting to a "loss function" because that is the typical flow in the structure that pytorch provides for training networks via their autograd framework.

But the pytorch loss function is really just meant to be the last in a list of nested operations over which we are going to compute gradients. So if you already know your gradient update rule, the easiest way to write it down as a pytorch "loss function" is as the integral of your gradient update rule with respect to the network's output -- then pytorch will compute your gradient update rule for you automatically.

So in your example, the loss function would be:

$-\alpha^tG \ln\pi(A_t | S_t, \theta_t)$

as already noted by @Brale and others. The minus sign is there because pytorch optimizers will typically carry out minimization, but your gradient update rule is intended to maximize.
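As a minimal, self-contained sketch of this idea (the tiny policy here is hypothetical; the learning rate $\alpha$ is handled by the optimizer, and the $\gamma^t$ discount from the update rule appears in the loss):

import torch

# hypothetical 4-dimensional state, 2 discrete actions
policy = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.Softmax(dim=-1))
optimizer = torch.optim.SGD(policy.parameters(), lr=1e-2)

state = torch.randn(4)      # placeholder S_t
action = torch.tensor(1)    # placeholder A_t actually taken
G = 5.0                     # placeholder return from time t
gamma, t = 0.99, 3

probs = policy(state)
# "loss" whose gradient is the negative of the desired update direction,
# so minimising it performs gradient ascent on the expected return
loss = -(gamma ** t) * G * torch.log(probs[action])

optimizer.zero_grad()
loss.backward()             # autograd produces the REINFORCE gradient for you
optimizer.step()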

Hopefully this explanation is helpful to others that are still learning pytorch! Also, I would highly recommend that anyone just getting started with pytorch read their latest autograd tutorial here

",46399,,,,,4/20/2021 1:51,,,,0,,,,CC BY-SA 4.0 27424,2,,27418,4/20/2021 5:10,,1,,"

The problem is that you're not creating a model with InceptionV3 as the backbone. What you want to do is this:

import tensorflow as tf

input_tensor = tf.keras.Input(shape=(240, 320, 1))

base_model = tf.keras.applications.InceptionV3(input_tensor=tf.keras.Input(shape=(240, 320, 3)),
                                               include_top=False, weights='imagenet')
    
base_model.trainable = False

x = tf.keras.layers.Conv2D(3, (3, 3), padding='same')(input_tensor)  # map the single channel to 3 for InceptionV3
x = base_model(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dense(1024, activation='relu')(x)
x = tf.keras.layers.Dense(N_CLASSES, activation='softmax')(x)

...
    
model = tf.keras.Model(inputs = input_tensor, outputs = x)

model.compile(...)

The inputs and outputs parameters are of importance here.

",40434,,,,,4/20/2021 5:10,,,,1,,,,CC BY-SA 4.0 27427,2,,27390,4/20/2021 6:40,,1,,"

Mean Squared Error (MSE) is a quadratic function, and the further you go away from your optimum, the bigger (quadratically) the MSE gets. Take $o_{expected}=20$ and $o_{net}=40$ as an example. Your MSE is then 400, because $MSE = (o_{expected}-o_{net})^2$.

Just imagine $y = x^2$ with $x$ being the output of your network. If you want to shift the parabola so that its optimum is at $20$, the formula you get is $y = (20-x)^2$. For every new case you train the net on, you get a different parabola with different parameters.
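A quick sketch to visualise this (the range of outputs is arbitrary):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 40, 200)      # possible network outputs
plt.plot(x, (20 - x) ** 2)       # the loss is 0 at x = 20 and grows quadratically around it
plt.xlabel("network output")
plt.ylabel("squared error")
plt.show()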

",46315,,46315,,4/20/2021 12:55,4/20/2021 12:55,,,,0,,,,CC BY-SA 4.0 27428,1,27435,,4/20/2021 7:51,,1,134,"

I'm trying to implement a neural network that can capture the drift in a measured angle as a way of dynamic calibration, i.e., I have a reference system that may change throughout the course of the data gathering, and I would like to train a network layer which converts the drifting reference to the desired reference by updating the angle parameter.

For example: Consider the 2d case. We would have a set of 2d points $X\in \mathbb{R}^2$ and a trainable parameter called $\theta$ in the layer. The output of the layer would then be: $$X_o = XR$$ where $$R = \begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}$$

Using Adam optimizer I then try to find the $\theta$ which transforms a given angle to the desired reference.

However, the $\theta$ value seems to fluctuate around the initial value probably because of a diverging gradient(?). How can I overcome this issue?

The code is below.

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt


class Rotation2D(tf.keras.layers.Layer):
  def __init__(self):
    super(Rotation2D, self).__init__()

  def build(self, input_shape):
    self.kernel = self.add_weight("kernel", initializer=tf.keras.initializers.Constant(90),
                                  shape=[1, 1])

  def call(self, input):
    matrix = ([[tf.cos(self.kernel[0, 0]), -tf.sin(self.kernel[0, 0])],
              [tf.sin(self.kernel[0, 0]), tf.cos(self.kernel[0, 0])]])
    return tf.matmul(input, tf.transpose(matrix))

layer = Rotation2D()

t = np.arange(0, 1000)/200.

y_in = np.array([np.sin(t), np.cos(t)]).T
y_ta = np.array([np.cos(t), np.sin(t)]).T

model = tf.keras.Sequential()
model.add(layer)

model.compile(tf.keras.optimizers.SGD(lr=1.), loss='MSE')
model.fit(y_in, y_ta, epochs=1)
for i in range(100):
  print(layer.get_weights())
  model.fit(y_in, y_ta,verbose=0, batch_size=5)
y_out = (model.predict(y_in))

fig, axes = plt.subplots(2, 1)

for i in range(2):
  ax = axes[i]

  ax.plot(y_in.T[i], label = 'input')
  ax.plot(y_ta.T[i], label = 'target')
  ax.plot(y_out.T[i], label = 'prediction')

plt.legend()

plt.show()
",46407,,46407,,4/20/2021 22:46,4/20/2021 22:46,Unable to 'learn' a rotational angle by parametrising the angle as a neural network layer,,1,0,,,,CC BY-SA 4.0 27429,1,27432,,4/20/2021 8:27,,4,227,"

I am interested in learning about the inverse of neural networks, and I would like to understand the invertibility of neural networks, as described, for example, in On the Invertibility of Invertible Neural Networks.

Researchers who are working on this domain, can you help me understand these two questions.

  1. Are all neural networks invertible?
  2. What exactly qualifies a neural network to be invertible?
",45853,,2193,,4/20/2021 8:44,4/20/2021 9:11,Are neural networks invertible?,,1,0,,,,CC BY-SA 4.0 27430,1,27434,,4/20/2021 8:53,,0,80,"

During a course review, I have provided my opinion on the course overall. I stated that MATLAB is also a great environment to program and do research for ML/AI, but my professor seemed to have taken my comments as a joke and told me "If you take a look at the statistics, then you'll see that MATLAB is not a feasible environment to innovate and research topics in ML/AI".

As an undergraduate who is new to machine learning, I would hope to understand more and not debate on which is better (MATLAB vs Python), but rather to know whether there is a bias against MATLAB or there are actually reasons that make MATLAB not a good environment to research and program ML topics.

",46408,,2444,,4/20/2021 15:00,4/20/2021 16:52,"On what basis is MATLAB ""inflexible"" to perform ML/AI research on it?",,1,1,,,,CC BY-SA 4.0 27431,2,,2872,4/20/2021 9:07,,0,,"

Perhaps this strikes at the heart of the provincial way humans interpret their world. Is suffering pain the optimal way of fending off entropy? If asked, would a dog marvel at how we get through our days without smelling everything? I think the true value of this alien lifeform we are creating, called AI (perhaps more accurately MI, machine intelligence; intelligence just is), is that it will not be like us. Of course, this is also its big risk.

",46410,,,,,4/20/2021 9:07,,,,0,,,,CC BY-SA 4.0 27432,2,,27429,4/20/2021 9:11,,3,,"

The meaning of invertible here is the standard definition of invertibility for a mathematical function $f \colon X \to Y$. Invertible simply means "the function has an inverse map $f^{-1} \colon Y \to X$". Equivalently the function $f$ is bijective, which means the following two conditions hold:

  1. $f$ is injective: for any two distinct $x_1, x_2 \in X$, $f(x_1) \ne f(x_2)$.

  2. $f$ is surjective: for any $y \in Y$, there exists an $x \in X$ such that $f(x) = y$.

If this is unfamiliar, you should be able to find some helpful references on Google using these terms.

Most obvious neural network architectures cannot possibly be invertible. Consider for example a classifier which takes an image or some other high-dimensional input, and outputs a classification label. This network could only be invertible if there was only one possible input1 which corresponds to each label, which is not the goal of the network.


1 I mean this rather literally: if one pixel is even slightly different, this would be a different input to the network and would have to have a different output.

",44413,,,,,4/20/2021 9:11,,,,6,,,,CC BY-SA 4.0 27434,2,,27430,4/20/2021 13:50,,2,,"

(This question could be considered off-topic or opinion-based, but I will answer it by providing facts that could explain what hinders the adoption of MATLAB by AI researchers.)

There are 2 main reasons why MATLAB may not be the "best" programming language/environment for research (in AI and other areas too)

  • It's closed source (i.e. it's more difficult to extend it; the most similar open-source alternative to MATLAB would be Octave)
  • It's owned by a company (i.e., it's proprietary, so, for you to use MATLAB, you need a license, which could be given to you by your university, which is my case, or you need to buy it, but not everyone can afford this).

In my view, these are clear obstacles for people that want to share their research and make sure that people can reproduce their results or build new ideas on top of them. However, in some cases, using proprietary software for research may be the best (or even only) option (see e.g. this for more info). In fact, I have also seen MATLAB being used for research in artificial intelligence (here is an example), but the choice of MATLAB might have been due to the familiarity of the authors with the language/environment.

",2444,,2444,,4/20/2021 16:52,4/20/2021 16:52,,,,0,,,,CC BY-SA 4.0 27435,2,,27428,4/20/2021 14:46,,0,,"

There are two basic problems with your code:

  • The functions sin(t) and cos(t) (both in numpy and tensorflow) take radians as inputs. Seeing Constant(90) in your code, and the learning rate of 1., I'm guessing that you assume that the angle is in degrees - that's incorrect.

  • In your training data y_ta is not a rotation of y_in:

    y_in = np.array([np.sin(t), np.cos(t)]).T
    y_ta = np.array([np.cos(t), np.sin(t)]).T
    

It is a reflection about y=x diagonal. No wonder it fails to find an appropriate rotation.

I just had to change y_ta to:

  y_ta = np.array([-np.cos(t), np.sin(t)]).T

And train with a more sensible learning rate:

  model.compile(tf.keras.optimizers.SGD(lr=0.1), loss='MSE')
  model.fit(y_in, y_ta, epochs=10)

To get the angle (which I then convert to degrees):

  (180 * model.layers[0].kernel[0,0].numpy() / np.pi) % 360
  > 90.00187079311581
",20538,,,,,4/20/2021 14:46,,,,1,,,,CC BY-SA 4.0 27438,2,,8716,4/20/2021 20:58,,0,,"

In addition to the first answer about feature selection, you could also add a global max or average pooling layer at the end of your network. This would reduce the dimensionality to 512 or 1024. If that's still too much, another option would be to add an additional convolutional layer with reduced channels and then do the global pooling. You will have to experiment with which option is best for your data.
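For instance, a minimal sketch with tf.keras (the feature-map shape here is hypothetical):

import tensorflow as tf

features = tf.random.normal((1, 7, 7, 512))                       # e.g. the output of your last conv block
pooled = tf.keras.layers.GlobalAveragePooling2D()(features)
print(pooled.shape)                                               # (1, 512)

# optionally reduce the channels with a 1x1 convolution before pooling
reduced = tf.keras.layers.Conv2D(128, 1)(features)
print(tf.keras.layers.GlobalAveragePooling2D()(reduced).shape)    # (1, 128)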

",7760,,,,,4/20/2021 20:58,,,,0,,,,CC BY-SA 4.0 27442,2,,25775,4/21/2021 3:02,,1,,"

One can use a simpler approach with the deepC compiler and convert the exported ONNX model to C++.

Check out the simple example at the deepC compiler sample test. Compile the ONNX model for your target machine.

Checkout mnist.ir

Step 1:

Generate intermediate code

% onnx2cpp mnist.onnx

Step 2:

Optimize and compile

% g++ -O3 mnist.cpp -I ../../../include/ -isystem ../../../packages/eigen-eigen-323c052e1731/ -o mnist.exe

Step 3:

Test run

% ./mnist.exe

Here is a link to YouTube Video for elaborate instructions.

For details, read more at https://link.medium.com/IhbcJzi4jgb

",46435,,46435,,5/23/2021 13:47,5/23/2021 13:47,,,,0,,,,CC BY-SA 4.0 27443,1,27446,,4/21/2021 4:05,,1,158,"

The off-policy TD learning control using state value function from page 34 of David Silver's RL lecture is: $$ V(S_t) \leftarrow V(S_t) + \alpha \left( \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} (R_{t+1} + \gamma V(S_{t+1})) - V(S_t) \right). $$

I'd like to change this update rule to action value function Q, something like:

$$ Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha \left( \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} (R_{t+1} + \gamma Q(S_{t+1},A_{t+1})) - Q(S_t,A_t) \right). $$

Then what is the corresponding importance sampling ratio?

Since $A_t$ is already determined (because we are calculating $Q(S_t,A_t)$), I think $\pi(A_t|S_t)$ is definitely 1. But what about $\mu (A_t|S_t)$? Is it 1 or not?

",46437,,2444,,4/25/2021 11:11,4/25/2021 11:11,What would be the importance sampling ratio for off-policy TD learning control using Q values?,,1,0,,,,CC BY-SA 4.0 27445,2,,6289,4/21/2021 4:54,,0,,"

For the multiclass SVM, there will be an ensembling effect since you are learning 5*4=20 1vs1 classifiers. It could be an interesting experiment to try the same thing with simple neural networks. Also, since you are standardizing the inputs you could try tanh activations on the first layer after the input. I presume you are using softmax on the output layer.

",7760,,,,,4/21/2021 4:54,,,,0,,,,CC BY-SA 4.0 27446,2,,27443,4/21/2021 7:14,,2,,"

Since $A_t$ is already determined (because we are calculating $Q(S_t,A_t)$), I think $\pi(A_t|S_t)$ is definitely 1. But what about $\mu (A_t|S_t)$? Is it 1 or not?

You could assign values of 1 to each to get the right answer, but the situation is different. You can see that more clearly in the definition of action value, $q(s,a)$:

$$q_{\pi}(s,a) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1}|S_t=s, A_t=a]$$

The condition on the expectation $|S_t=s, A_t=a$ means that the value of $A_t$ is already assumed. There is no need to know or use the associated probability, and no need to adjust between probabilities of different policies when estimating returns for the first action choice.

In addition, the last action choice used to bootstrap in Q learning is always the maximising action over current Q values. That is an on-policy choice with respect to the target policy $\pi$, so does not need to be adjusted for.

For single-step Q learning, there is no need to use importance sampling to adjust for policy differences between behaviour and target policies. The correct update equation rule is:

$$ Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha \left( R_{t+1} + \gamma Q(S_{t+1},A_{t+1}) - Q(S_t,A_t) \right). $$

This does change for n-step updates, where you do need to take account of differences in action choice between target and behaviour policies for $A_{t+1}, A_{t+2}$ up to $A_{t+n-1}$. However, you should bear in mind that $\pi(a|s)$ is zero for any non-maximising action - if you use weighted importance sampling then any trajectory with an exploring action before calculating the TD update contributes zero update.

So, if you are calculating over longer trajectories, in e.g. Q($\lambda$), it is common to see this simplified to a logical test and some kind of shortcut update. You rarely see explicit use of importance sampling - with probability calculations and ratios - in Q learning.

",1847,,1847,,4/21/2021 15:08,4/21/2021 15:08,,,,0,,,,CC BY-SA 4.0 27450,1,27451,,4/21/2021 13:08,,1,55,"

Is padding, before feature extraction with VGGish, a good practice?

Our padding technique is to find the longest signal (a loaded .wav signal) and then zero-pad every shorter signal to the length of the longest one. We need to do this because a single, fixed input size is required.

Are there perhaps any other techniques you would recommend?

The difference between padding before and after the features extraction by accuracy is quite big - more than 20%. Using padding before extraction gives 97% accuracy.

I'd be glad to read your feedback: please explain why that happens, and tell me whether this kind of padding is the correct approach or whether there is a better solution.

",46445,,2444,,4/22/2021 0:58,4/22/2021 0:58,Is it a good practice to pad signal before feature extraction?,,1,0,,,,CC BY-SA 4.0 27451,2,,27450,4/21/2021 14:13,,1,,"

Padding is a common practice both in image-processing (typically via CNNs) and in sequence-processing tasks (RNNs, Transformers).

For CNNs, all the standard convolutional layers - Conv1D, Conv2D and Conv3D - have a padding argument. The padding value can be valid or same for 2D and 3D convolutions. An extra causal type of padding is possible for 1D convolutions, and the documentation refers to this paper: WaveNet: A Generative Model for Raw Audio - which sounds quite close to what you are interested in.

These animations might be useful to get a bit more intuition about convolutions and strides/padding. The general consensus is that using same padding is advantageous for model performance - your network gets more information about the borders of your inputs (and deeper networks become possible).

For sequential models, padding is even more important. Training samples are usually of unequal lengths, so you have to pad them with a special token (usually called [PAD]) that gets encoded as 0. Here are some examples of this mentioned in the tensorflow docs, huggingface.transformers or the BERT tutorial.
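
A minimal sketch of zero-padding variable-length signals with a Keras utility (the signals here are random placeholders for your .wav data):

import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

# Three hypothetical signals of different lengths.
signals = [np.random.rand(5), np.random.rand(8), np.random.rand(3)]

# Zero-pad every signal at the end to the length of the longest one.
padded = pad_sequences(signals, padding="post", dtype="float32")
print(padded.shape)  # (3, 8)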

",20538,,,,,4/21/2021 14:13,,,,0,,,,CC BY-SA 4.0 27452,1,,,4/21/2021 16:09,,0,80,"

I've read in F. Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, that in the Multilayer Perceptron the activation functions in the second, third, ... layers are all non-linear, and they can all be different, while in the first layer they are all linear.

Why?

On what does this depend?

  1. When it is said that "the neural network learns automatically", what does that mean in colloquial words?

AFAIK, one first trains the NN, and then at some point the NN learns. Where does the "automatically" come in, then?

Thanks in advance for your help.

",44999,,44999,,4/22/2021 16:02,4/22/2021 16:02,"About the choice of the activation functions in the Multilayer Perceptron, and on what does this depends?",,1,3,0,,,CC BY-SA 4.0 27453,2,,27452,4/21/2021 18:28,,1,,"

Rosenblatt was probably discussing a specific architecture, of which there are many. However, for general-purpose feed-forward back-propagation ANNs used for function approximation and classification analysis, you can use whatever activation functions you want on the input side, hidden layers, and output side. Examples are identity, logistic, tanh, exponential, Hermite, Laguerre, RBFN, ReLU, softmax, etc. "Automatically" likely refers to the iterative learning process, which tends to be similar to gradient descent, during which the partial derivatives of the prediction error w.r.t. the coefficients decrease.

",,user46301,,,,4/21/2021 18:28,,,,6,,,,CC BY-SA 4.0 27457,1,27469,,4/22/2021 2:53,,3,594,"

While reading the AlphaZero paper in preparation to code my own RL algorithm to play Chess decently well, I saw the following:

"The board is oriented to the perspective of the current player."

I was wondering why this is the case if there are two agents (black and white). Is it because there is only one central DCNN network used for board and move evaluation (i.e. there aren't two separate networks/policies used for the respective players - black and white) in the algorithm AlphaZero uses to generate moves?

If I were to implement a black move policy and a white move policy for the respective agents in my environment, would reflecting the board to match the perspective of the current player be necessary since theoretically the black agent should learn black's perspective of moves while the white agent should learn white's perspective of moves?

",46275,,,,,10/6/2022 5:47,Why does Alpha Zero's Neural Network flip the board to be oriented towards the current player?,,3,0,,,,CC BY-SA 4.0 27458,1,,,4/22/2021 10:57,,0,33,"

I understand conceptually how backpropagation works according to the chain rule, and I understand that partial derivatives calculate the rate of change of a function containing multiple variables with respect to one of those variables, the rest being fixed.

What I'm struggling with is what the value from these partial derivatives actually relates to. I found this https://activecalculus.org/multi/S-10-2-First-Order-Partial-Derivatives.html which gives some good examples. But with a NN I'm not sure what units the results of the derivatives relate to.

One of the examples on the website used z = f(x, y), where z is the horizontal distance travelled by a projectile, x is the initial speed in feet per second, and y is the launch angle. So, if taking the partial derivative with respect to x, the result tells us how much the distance travelled changes with respect to a change in speed. So it might be that, for every one foot per second increase in the initial speed, we get an increase of 8 feet of horizontal travel, using a fixed value for y.

But when calculating the derivatives for backpropagation, if we get an answer of (say) 0.08, does this mean that for every change of 1 in the non-static variable we would get a change of 0.08 in our output? And what units (if any) do these values relate to?

",46470,,,,,4/22/2021 10:57,Backpropagation - what does rate of change calculated from the partial derivatives actually relate to?,,0,3,,,,CC BY-SA 4.0 27463,1,,,4/22/2021 14:02,,1,28,"

I have an AI/ML challenge in relation to video analysis and am unsure where to start.

I am investigating an application that will grade students' performance in carrying out a task, based on analysis of a video of them carrying out that task.

The problem has not been sufficiently defined yet, but to get a high-level idea, imagine a video showing a close-up of a trainee doctor performing stitches to close a wound. The AI model would be trained using many videos of someone performing the stitches correctly and would score the trainee on a number of criteria.

Most frameworks will allow detection of objects but taking a video of a person carrying out a task and assessing their success using an AI/ML model feels a step above regular object analysis.

The assumption is that we will create the training material by having professionals video themselves carrying out the task successfully, which will also be graded by other professionals to provide a rubric of scores.

I understand this is not something that can be simply answered but an idea of where to start would be very helpful.

  • are there specific areas of AI I should investigate?
  • are there frameworks that can actually do this (I have not found any)?

Appreciate any advice.

Thank you

",45155,,45155,,4/23/2021 10:51,4/23/2021 10:51,Video Analysis: Providing a success score for a of a student carrying out a specific task,,1,0,,,,CC BY-SA 4.0 27466,2,,27457,4/22/2021 14:58,,3,,"

I am not an expert in RL. I have been playing Go for some years.

Let's quote from AlphaZero's paper first:

Aside from komi, the rules of Go are also invariant to colour transposition; this knowledge is exploited by representing the board from the perspective of the current player (see Neural network architecture).

In the game of Go, apart from the board representation, the only difference between Black and White is the komi (the number of points that Black has to give White in the final count to compensate for playing first). Except for the presence of komi, there should be no difference in strategy under the same position if the colours are exchanged. In other words, given a state $s$ of black stones and white stones on the board, if the optimal policy with Black playing first is $\pi$, then if the colours of the stones on the board are exchanged and it is White's turn, the optimal policy for White should be the same $\pi$.

With this in consideration, there are at least 2 advantages of using a network that represents the board in the perspective of Self/Opponent rather than Black/White.

The first is that it prevents the network from giving inconsistent strategies under two representations of the same state. Consider a network $f_\theta$ that accepts the board representation in the order $(B,W)$, and a state $s = (X_t,Y_t)$ in which $X_t$ is a feature map for black stones and $Y_t$ is a feature map for white stones, and it is Black's turn. Now consider a state $s' = (Y_t,X_t)$ (i.e. colours flipped) where it is White's turn. $s$ and $s'$ are essentially representations of the same state (except for komi, which does not affect the optimal policy). There is a possibility that the network $f_\theta$ gives different policies for these two representations. However, if $f_\theta$ accepts the state as $(Self, Opponent)$, the input to the network would be the same (except for the komi feature).

Therefore, this representation would significantly reduce the number of states represented by the feature vector $(X_t,Y_t)$, which is the second advantage for training the neural network. If we consider that, in Go, the same local position could appear with exchanged colours in another position, the network could, with this representation, recognize them as the same position. A decrease in the number of states could mean a significant drop in the parameters and capacity needed from the network.

The same principle of making use of different representations of the same state is followed in AlphaGo's other training implementations as well, such as augmenting its training data to include rotations and reflections of the same board position.

However, in the game of Chess, the case is different. For a chess position, if the pieces' colours are exchanged and it becomes the opponent's turn, it would be a different state, because the positions of the KING and the QUEEN are not the same for the two colours.

",46474,,,,,4/22/2021 14:58,,,,1,,,,CC BY-SA 4.0 27467,2,,7643,4/22/2021 15:22,,0,,"

As you said, a CNN would be able to detect objects in different positions if the dataset contains enough examples of such cases, though the network is able to generalize and should be able to detect objects in slightly changed positions and orientations.

The term "translation invariance" does not mean that translating an object in the image would yield the same output for this object, but that translating the whole image would yield the same result. So the relative position of object IS important, modern CNN's takes decisions on the whole image (with strong local cues, of course).

To maximize the ability of your CNN to detect multiple orientations, you can train with data augmentation that rotates the images.
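
A minimal sketch of such augmentation with Keras (the ranges are illustrative and should be tuned to your data):

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Random rotations (and small shifts/flips) applied on the fly during training.
datagen = ImageDataGenerator(
    rotation_range=30,       # rotate up to +/- 30 degrees
    width_shift_range=0.1,   # small horizontal translations
    height_shift_range=0.1,  # small vertical translations
    horizontal_flip=True,
)

# Assuming x_train and y_train are numpy arrays of images and labels:
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)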

The same reasoning can be applied to partial occlusions: if there are enough samples with occlusion in the training set, the network should be able to detect them. The network's ability to generalize should also help a little when occlusions are small, so it may still detect the object.

Some papers have run experiments to demonstrate robustness to occlusion and translation, for instance by looking at the network activations when artificially occluding a portion of the image with a gray rectangle, though I do not have a paper name in mind.

",19859,,,,,4/22/2021 15:22,,,,0,,,,CC BY-SA 4.0 27468,2,,3903,4/22/2021 16:45,,1,,"

I think we give ourselves too much credit by already referring to our algorithms and machines as actually thinking and acting on motivations. In my opinion, we still have a way to go before we can actually refer to a human creation as thinking or as having motivations beyond basic physical ones.

By that I mean that a machine's or AI algorithm's motivations are similar to a car engine's. Simple and basic: the "motivations" of a car engine to run are just the first and second laws of thermodynamics, namely the conservation of energy and the exchange between energy types, and the ever-increasing level of entropy in a closed system.

By having a really specific design, we can insert fuel into the system and create a lot of potential energy, which will "motivate" the engine to transform it into other types of energy (heat, sound, etc.).

An AI algorithm is exactly the same; it's just that now we're playing with electricity, stacking multiple levels of abstraction from the actual level of electrons moving through wires up to your Python deep learning algorithm training to recognize images of dogs. The concept is similar, in my opinion: for now we do not have machines that are complex enough to have higher-level motivations, or to develop them by themselves.

As the other answers pointed out, specific algorithms, namely reinforcement learning, try to emulate those "needs" and "motivations", but in the end, in my opinion, for now they are still just emulations. As with other deep learning algorithms, the same basic concept described at the beginning applies: trying to minimize the error, by analogy with concepts that we know, such as conservation of energy, following the path of least resistance, obeying the laws of entropy, etc.

",31966,,,,,4/22/2021 16:45,,,,0,,,,CC BY-SA 4.0 27469,2,,27457,4/22/2021 16:58,,2,,"

There is a single neural network that guides self-play in the Monte Carlo Tree Search algorithm. The neural network gets the current state of the board $s$ as an input and outputs the current policy $\pi(a|s)$ and value $v(s)$.

The action probabilities are encoded in an (8, 8, 73) tensor. The first two dimensions encode the coordinates of the piece to "pick up" from the board. The third dimension encodes where to move this piece: check out this question for a discussion of how all possible moves are encoded in a 73-dimensional vector.

Similarly, the inputs of the network are organized in an (8, 8, 14 * 8 + 7 = 119) tensor. The first two 8 x 8 dimensions, again, encode the positions on the board. Then the piece positions are encoded with one plane per piece type: the first 6 planes for the player's pieces, the next 6 planes for the opponent's pieces, plus two repetition planes. These 14 planes are repeated 8 times, supplying the predecessor positions to the network. Finally, there are 7 extra planes, each encoded as a single uniform value over the board: castling rights (4 planes), total move count (2 planes) and the current player color (1 plane).

Note that the positions of the player's pieces and the opponent's pieces are encoded in fixed layers of the state tensor. If you don't flip the board to the perspective of the player, then the network will have very different training inputs for black and white states. It will also have to figure out which direction the pawns can move depending on the current player color. None of that is impossible, of course - but it unnecessarily complicates something that is already a very hard problem for the DNN to learn.
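
A toy sketch of the flipping idea (this is not AlphaZero's actual code; the plane layout is a simplified assumption):

import numpy as np

def to_current_player_perspective(white_planes, black_planes, white_to_move):
    # Order the piece planes as (current player, opponent) and mirror the
    # board along the rank axis when black is to move, so the side to move
    # always "looks up" the board.
    if white_to_move:
        return np.concatenate([white_planes, black_planes], axis=0)
    flip = lambda planes: planes[:, ::-1, :]
    return np.concatenate([flip(black_planes), flip(white_planes)], axis=0)

# Example with random 6-plane stacks of shape (6, 8, 8):
white = np.random.randint(0, 2, (6, 8, 8))
black = np.random.randint(0, 2, (6, 8, 8))
state = to_current_player_perspective(white, black, white_to_move=False)
print(state.shape)  # (12, 8, 8)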

You can go further and completely split the training for white and black players, as you've described. But that'll essentially double the work you'll have to do to train your nets (and, I suspect, there would be some stability troubles typical of adversarial training).

To summarize - you are generally right - there is no fundamental need to flip the board. All the above details in state encoding are done to simplify the learning task for the deep neural network.

",20538,,,,,4/22/2021 16:58,,,,1,,,,CC BY-SA 4.0 27470,1,,,4/22/2021 17:18,,0,45,"

Let's assume I have a dataset of image-like 2D samples whose values can be divided into a few discrete levels (for example 1, 2, 3 and 4), like in the image below, where each color maps to a different value, from 1 to 4. The number of times a given color occurs in the picture varies from sample to sample, though.

I would like to classify these images into different classes but based on the spatial relations of these values between each other (not the values themselves). By spatial relations I mean basically (left, right, up, down), for example:

  • If blue is above and to the right of the red
  • Another blue is above and to the left of the same red
  • Yellow is to the right of one blue (same height)
  • One green is below red
  • ...

My question is: what algorithm (probably some deep neural network) should I use for this task? I would appreciate even just some keywords or clues about what might help.

",22659,,,,,1/12/2023 22:06,What algorithm to use to classify data by spatial relations?,,1,0,,,,CC BY-SA 4.0 27471,1,,,4/22/2021 18:50,,2,23,"

I have a set of data that contains sequences of different lengths. On average, the sequence length is 600. The dataset looks like this:

S1 = ['Walk','Eat','Going school','Eat','Watching movie','Walk'......,'Sleep']
S2 = ['Eat','Eat','Going school','Walk','Walk','Watching movie'.......,'Eat']
.........................................
.........................................
S50 = ['Walk','Going school','Eat','Eat','Watching movie','Sleep',.......,'Walk']

The number of unique actions in the dataset is fixed. That means some sequences may not contain all of the actions.

By using Doc2Vec (the Gensim implementation in particular), I was able to extract an embedding for each of the sequences and used that for later tasks (i.e., clustering or similarity measures).

As the Transformer is the state-of-the-art method for NLP tasks, I am wondering whether a Transformer-based model can be used for a similar task. While searching for this technique, I came across "sentence-transformers" - https://github.com/UKPLab/sentence-transformers. But it uses a pretrained BERT model (which is meant for natural language, whereas my case is not related to language) to encode the sentences. Is there any way I can get embeddings from my dataset using a Transformer-based model?

",18795,,,,,4/22/2021 18:50,Embedding from Transformer-based model from paragraph or documnet (like Doc2Vec),,0,0,,,,CC BY-SA 4.0 27472,2,,27470,4/22/2021 19:39,,0,,"

The spatial relationships that you describe would correspond to features, and it's not clear that you need to use a neural network for detecting or discovering these features since you have just described them. Could you instead define a feature extractor that detects the correct patterns and returns you a vector of counts of feature occurrences across the image?

In fact, gray-level co-occurrence matrices might be adaptable to this problem. Since there are only four colours, each matrix would be very small, and you would define one matrix for each local spatial configuration. For example, one matrix might encode the number of times that colour x is below colour y. From these you could then extract counts of interest to use for further downstream processing.
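
A rough numpy sketch of one such matrix, counting how often value x appears directly below value y (values are assumed to be 1..4):

import numpy as np

def below_cooccurrence(img, n_levels=4):
    # counts[x-1, y-1] = how many times value x appears directly below value y
    counts = np.zeros((n_levels, n_levels), dtype=int)
    upper, lower = img[:-1, :], img[1:, :]   # vertically adjacent cell pairs
    for x in range(1, n_levels + 1):
        for y in range(1, n_levels + 1):
            counts[x - 1, y - 1] = np.sum((lower == x) & (upper == y))
    return counts

img = np.random.randint(1, 5, size=(10, 10))
print(below_cooccurrence(img))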

",7760,,,,,4/22/2021 19:39,,,,1,,,,CC BY-SA 4.0 27473,1,,,4/22/2021 21:11,,1,38,"

I am trying to sort the instances within each of 5 classification categories in a dataset that has been put through both a random forest classifier and a neural network with 99% accuracy on each.

Essentially what I am trying to do is stack a sorting algorithm on top of a random forest or a neural net (depending on which will boast a much more efficient and swift process in being integrated with a sorting algorithm post-output) so that the correctly classified instances within each category can be sorted into separate organized and easily comprehensible lists.

I have tried researching this but all I have found are examples of ensemble learning and traditional stacking in order to achieve higher overall accuracy on an algorithm.

I am trying to see how I can take the correct predictions from either of these algorithms and sort them by some arbitrary features that I will engineer, but all I am wondering at the moment is how to integrate a sorting algorithm in the first place into the output of a classification algorithm.

",36613,,,,,4/22/2021 23:10,How do I take the correct classification predictions of an ml algo (i.e. random forest/neural net) and sort the instances in each category?,,1,0,,,,CC BY-SA 4.0 27475,2,,13861,4/22/2021 21:57,,0,,"

There are multiple standard ways of feature selection, for example ranking features by information gain, that you could use, and then you can train the neural network on just those features.

However, let's assume you have trained a neural network on all of the features and now want to estimate their importance. One approach you could take is to perform a sensitivity analysis on the inputs: add random noise in a controlled fashion to different features and see what effect it has. If the training dataset has been centered (so each feature has zero mean) then you could set the inputs to zero (the "average" training example) and then perturb each feature in turn to see what the effect is. You could also fix a feature permanently to zero and then run your validation data through the network and see how accuracy changes. There should be no major effect for insignificant features, but important features being zeroed should lead to a decrease in accuracy. You can also do something like this when predicting specific examples: perturb the example's features and see how much the prediction changes. LIME does something like this to explain why a black box like a neural network makes the predictions it does.
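
A rough sketch of the zeroing approach, assuming centered tabular inputs and a Keras-style model compiled with an accuracy metric:

import numpy as np

def zero_feature_importance(model, X_val, y_val):
    # Accuracy drop when each (centered) feature is fixed to zero;
    # a large drop suggests an important feature.
    base_acc = model.evaluate(X_val, y_val, verbose=0)[1]
    drops = []
    for j in range(X_val.shape[1]):
        X_perturbed = X_val.copy()
        X_perturbed[:, j] = 0.0
        acc = model.evaluate(X_perturbed, y_val, verbose=0)[1]
        drops.append(base_acc - acc)
    return np.array(drops)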

",7760,,,,,4/22/2021 21:57,,,,0,,,,CC BY-SA 4.0 27476,2,,27473,4/22/2021 22:57,,1,,"

For the ANN, it should be the average of the error per instance from testing (prediction) when each instance is left out of training. ANNs can unfortunately learn based on the order of instances used for training, so it helps to train/test and then shuffle (permute, or randomly re-order) and then assign to k-folds, then train/test again in order to prevent the ANN from learning based on instance order. The RF won't do this, since the in-bag objects in all the bootstraps (trees) that were sampled with replacement will vary appreciably.

(FYI - there's commonly an option to train using "batch" or "online" error updates during back-propagation. Batch sums the error over the training objects, then updates coeffs. Online updates coeffs as each training instance is processed. I tend to like online, since online seems to reduce error faster. Batch can get off the gradient path easier, since the summed error can come in large "chunks" and throw the ANN off course)

As far as sorting instances within each class label after training/testing, you can do anything you wish. Certainly tracking each instance as it is tested/predicted and building an array/vector of errors will give you a distribution of error per instance.

Quite often, test objects can vary widely in their feature values, so e.g. k-means cluster analysis can be used during ANN training. See my blog on this.

In short, it's not just sorting when you're done. Ensembles come at the back end of an attack plan, but the checklist of best practices would look like this:

  1. Randomly shuffle the order of all instances.
  2. Assign instances to $k$-folds (partitions), e.g. 10-fold.
  3. Train with instances in folds 1-9, test (predict class) of instances in fold 10. Increment confusion matrix. Then train with instances in folds 1-8,10 and test instances in fold 9, update confusion matrix, ...., cycle through all test folds.
  4. Repeat steps 1-3 nine more times (this is called doing another "re-partition").
  5. Calculate average error for each predicted instance, then sort.

The above steps are only for the ANN. The RF performs its own bookkeeping.
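
A minimal scikit-learn sketch of steps 1-4, assuming features X and labels y as numpy arrays (the MLP here is just a placeholder classifier):

import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.neural_network import MLPClassifier

X, y = np.random.rand(100, 8), np.repeat(np.arange(5), 20)   # dummy data

per_instance_errors = np.zeros(len(y))
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=10, random_state=0)

for train_idx, test_idx in cv.split(X, y):
    clf = MLPClassifier(max_iter=500).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    per_instance_errors[test_idx] += (pred != y[test_idx])

per_instance_errors /= 10                  # average over the 10 repartitions
order = np.argsort(per_instance_errors)    # step 5: sort instances by average error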

",,user46301,,user46301,4/22/2021 23:10,4/22/2021 23:10,,,,0,,,,CC BY-SA 4.0 27477,2,,27463,4/23/2021 5:45,,2,,"

You have not described exactly what the tasks will be, but there are some open source libraries for real time pose tracking. For example, OpenPose is one that can be configured to track the body, the hands and the face. However, this is only going to give you predicted pose information for each frame. If the subjects are meant to be doing specific tasks, e.g. picking up objects and moving them, OpenPose won't help and you may need to do further image recognition/analysis or object tracking in order to interpret what is going on.

",7760,,,,,4/23/2021 5:45,,,,1,,,,CC BY-SA 4.0 27478,1,,,4/23/2021 6:57,,1,107,"

I am reading Artificial Intelligence: A Modern Approach.

In Chapter 3, Section 3.3.1, The best-first search algorithm is introduced. We learn that in each iteration, this algorithm chooses which node to expand based on minimizing an evaluation function, $f(n)$, for new nodes. And if the expanded nodes are either not already reached, or they generate a less costly path to a reached state, they will be added to the frontier. So, the kind of queue used in the best-first search is a priority queue, i.e., ordering nodes by the function $f(n)$. If we set the $f(n)$ as the depth of the nodes, the queue type will be changed to FIFO (first-in-first-out), which is used in the breadth-first search algorithm.

Therefore, we can change the nature of algorithms using the $f(n)$ function.

I am wondering what would happen if we set $f(n)$ as the cost of the paths taken from the common parent node of the new nodes to each new node $n$. Since new nodes might stem from different previous nodes, we might have to measure the cost of these nodes' paths all the way back until we find a common parent of them (which, in the worst case, is the root node, indicating the initial state). In this way, each time a new node is chosen for expansion (using $f(n)$), and each time an expanded node is chosen for joining the frontier (using the cost function), the choice is made by a similar criterion, since $f(n)$ and the cost function are now identical.

What would be the nature of such an algorithm? Is measuring the cost of paths to new nodes computationally feasible? Can this be a practical algorithm?

I read later sections and realized that Dijkstra's algorithm (uniform-cost search) is very similar to what I had in mind. However, it sets the evaluation function as the cost of the path from the root to the current node. I proposed grouping new nodes by their common parent and comparing the cost of the nodes within each group first. Then, after selecting the best nodes in each group, form a new group based on the selected nodes' next common parent, and repeat this until we reach the root node, at which point we will have the last group, and comparing costs within that group will find the optimal node for us.

Would the algorithm I have in mind have any advantage over Dijkstra's algorithm?

",46085,,2444,,12/12/2021 21:14,12/12/2021 21:14,What would happen if we set the evaluation function in the best-first search algorithm as the cost of paths taken to new nodes?,,0,1,,,,CC BY-SA 4.0 27481,2,,2330,4/23/2021 13:08,,0,,"

2035, 2056? Those predictions are hilarious :)

2019 - 1.6 billion parameter model (GPT-2)

2020 - 175 billion parameter model (GPT-3), more than a 100x jump in a year

2021(April) - "Microsoft's ZeRO-Infinity can now run a model with over a trillion parameters on a single NVIDIA DGX-2 node and over 30 trillion parameters on 32 nodes (512 GPUs). With a hundred DGX-2 nodes in a cluster, Microsoft projects ZeRO-Infinity can train models with over a hundred trillion parameters"

https://www.microsoft.com/en-us/research/blog/zero-infinity-and-deepspeed-unlocking-unprecedented-model-scale-for-deep-learning-training/

So in 2021 we now have the tech to train a 30 trillion to 100 trillion parameter model.

100 trillion = Human Brain

With this tech, OpenAI together with Microsoft, another company or a government will train a 100 trillion parameter model in 2021, or 2022 at the latest.

",46498,,,,,4/23/2021 13:08,,,,1,,,,CC BY-SA 4.0 27482,1,,,4/23/2021 14:36,,3,171,"

The ELBO objective is described as follows

$$ ELBO(\phi,\theta) = E_{q_\phi(z|x)}[log p_\theta (x|z)] - KL[q_\phi (z|x)||p(z)] $$

This form of ELBO includes a regularisation term in the form of the KL divergence which drives $q_\phi(z|x) \rightarrow p(z)$ when optimising ELBO.

However, we also have the overall expression for the log-likelihood, which is defined as follows (proof provided here)

$$ \log p_\theta(x) = ELBO(\phi,\theta) + KL[q_\phi(z|x)||p_\theta(z|x)] $$

Rearranging the above equation as follows

$$ \max\limits_\phi ELBO(\phi,\theta) = \max\limits_\phi \left( \log p_\theta(x) - KL[q_\phi(z|x)||p_\theta(z|x)] \right) $$

We can see that maximising ELBO w.r.t $\phi$ in this form causes $q_\phi(z|x) \rightarrow p_\theta(z|x)$

These two ways of describing how VAEs learn conflict with my understanding of what happens to the approximate distribution during training.

Is it simply trying to match both the prior $p(z)$ and the posterior $p_\theta(z|x)$, or am I missing something?

",42514,,42514,,4/23/2021 18:18,6/22/2021 23:29,"What does the approximate posterior on latent variables, $q_\phi(z|x)$, tend to when optimising VAE's",,1,0,,,,CC BY-SA 4.0 27483,2,,27304,4/23/2021 14:37,,0,,"

It seems Alex has just used the Matlab function mat2gray, as described here: https://www.mathworks.com/help/vision/ug/image-category-classification-using-deep-learning.html

The visual outcome of the features is very similar. mat2gray will simply scale the weights between 0 and 1 (no clipping).

Leaving the (slightly adapted) example code of Mathworks here for future reference:

layer1weights = mat2gray(layer1weights) ;   % rescale the weights into the [0, 1] range
layer1weights = imresize(layer1weights,5) ; % upscale by a factor of 5 so the filters are visible

figure
montage(layer1weights)
title('First convolutional layer weights')
",5548,,,,,4/23/2021 14:37,,,,0,,,,CC BY-SA 4.0 27484,1,,,4/23/2021 15:55,,2,260,"

I notice the following behavior when running experiments with $\epsilon$-greedy and UCB1. If the reward is kept binary (0 or 1), both algorithms' performances are on par with each other. However, if I make the reward continuous (and bounded in [0, 1]), then $\epsilon$-greedy remains good, but UCB1's performance plummets. As an experiment, I just scaled the reward of 1 by a factor of 1/10, which negatively influences the performance.

I have plotted the reward values estimated by the algorithms and see that (due to the confidence interval term) UCB1 largely overestimates the rewards.

Is there a practical trick to fix that? My guess is that the scaling coefficient $c$ in front of the upper confidence bound is meant just for this case. Nevertheless, the difference in performance is staggering to me. How do I know when and what scaling coefficient will be appropriate?

===

Update 1: The reward distribution is very simple. There are 17 arms; for arms 3 and 4 the learning algorithm gets a reward of 1, and the other arms return a reward of 0. There is no stochasticity, and the algorithm runs for 1000 iterations.

If I scale the reward by a factor of 1/10, for instance, then UCB1 takes a whole lot of time to start catching up with $\epsilon$-greedy.

",2254,,2254,,4/26/2021 8:55,4/26/2021 13:55,Difference in UCB performance when scaling the rewards,,1,5,,,,CC BY-SA 4.0 27485,2,,11004,4/23/2021 19:15,,0,,"

So the training data is a small number of examples (400) drawn from a small set of fonts, and the test data is a much larger dataset drawn from a much larger set of fonts and is therefore much more variable than the training data. Two issues here are the small training data size and the difference in distributions between the training and test data. I would try the following:

  • Instead of defining your own architecture, try some of the pretrained architectures available in Keras such as ResNet50 or VGG16. You can start them either with random weights or imagenet weights. Remove the top layer and put your own layers on (see the sketch after this list). You can also selectively unfreeze layers and see if that makes any difference.
  • To deal with the issues I mention above, use data augmentation to introduce variability into the training set.
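
A minimal Keras transfer-learning sketch of the first suggestion (the input size and the number of classes are assumptions for illustration):

import tensorflow as tf
from tensorflow.keras import layers

base = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False   # freeze the pretrained backbone; unfreeze selectively later

model = tf.keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),   # 10 is a placeholder class count
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])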

You also have not mentioned in the post how you decide to stop training the network. In case the network is overfitting, you could try finishing training earlier.

",7760,,,,,4/23/2021 19:15,,,,0,,,,CC BY-SA 4.0 27487,2,,5482,4/24/2021 0:01,,0,,"

I would like to highlight an important step for face recognition, which is feature extraction. Based on my experience, you can evaluate robust feature-extraction methods like Scale-Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF) using several matching approaches, such as the Brute-Force Matcher, K-Nearest Neighbors (KNN), Best-Bin-First (BBF) and RANdom SAmple Consensus (RANSAC). The purpose is to identify the method(s) that is/are most appropriate for your application. Then comes your machine learning model, for which you need to test several model options, as mentioned in the previous message.
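
A minimal OpenCV sketch of SIFT extraction plus brute-force matching with a ratio test (the image paths are placeholders):

import cv2

img1 = cv2.imread("face1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("face2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching; the ratio test keeps only distinctive matches.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches")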

",46509,,,,,4/24/2021 0:01,,,,0,,,,CC BY-SA 4.0 27488,2,,27257,4/24/2021 3:08,,2,,"

When in an environment with competing agents, from the perspective of each agent the environment becomes non-Markovian. That occurs because each agent is constantly adapting its own strategy to the others' actions, so a transition from a pair (s, a) that previously resulted in a positive reward might result in zero or negative reward in future iterations of the game.

You didn't mention it, but I imagine that you are using some DQN variation to train the network, since you use a replay buffer. To use this framework, you assume that the environment, from the perspective of the agent, follows an MDP. But, as I argued above, some tuples from the replay buffer might not represent valid data for training, so the network trained on them becomes unstable.

A solution might be to use the idea of centralized training with decentralized execution, in conjunction with some policy gradient (PG) algorithm, like REINFORCE or Actor-Critic. Since PG algorithms are on-policy, the data used to train the network is generated by the current policy, so you don't have the replay buffer issue. On the other hand, since it is on-policy, it is sample inefficient. The centralized training might help to increase the sample efficiency (it is in fact a good solution for partially observable environments, but from what I understand that is not the case with your game). An additional solution to the sample inefficiency is to use off-policy PG, using, for example, past policies, with their respective experience, in an importance sampling framework.

Some related references:

Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments: https://arxiv.org/abs/1706.02275

Off-Policy Policy Gradient: https://lilianweng.github.io/lil-log/2018/04/08/policy-gradient-algorithms.html#off-policy-policy-gradient

",38402,,,,,4/24/2021 3:08,,,,0,,,,CC BY-SA 4.0 27489,1,,,4/24/2021 6:15,,3,52,"

I am reading The Book of Why: The New Science of Cause and Effect by Judea Pearl, and on page 12 I see the following diagram.

The box on the right side of box 5, "Can the query be answered?", is located before box 6 and box 9, which are the processes that actually answer the question. I take that to mean that telling whether we can answer a question would be easier than actually answering it.

Questions

  1. Do we need less information to tell if we can answer a problem (epistemic uncertainty) than actually answer it?
  2. Do we need to try to answer the problem and then realize that we cannot answer it?
  3. Do we answer the problem and at the same time provide an uncertainty estimation?
",5351,,1671,,5/5/2021 0:45,5/5/2021 0:45,Do we need as much information to know if we can can answer a question as we need to actually answer the question?,,0,4,,,,CC BY-SA 4.0 27491,2,,19856,4/24/2021 10:30,,2,,"

I was reading online that tic-tac-toe has a state space of $3^9 = 19,683$. From my basic understanding, this sounds too large to use tabular Q-learning, as the Q table would be huge. Is this correct?

That is a relatively small number of states that can easily be represented in a table on a modern computer. For a Q table, you would multiply by the number of moves possible in each state, but this is still a small amount of memory. Even with the most naive implementation that tracked impossible states and state/action pairs, using string state representations (so 10 bytes for each state/action key), and using double precision floating point for each action value (another 8 bytes), the full table would be around 3 megabytes in size.

So it is definitely possible to use tabular Q learning here. I have done just that whilst learning about RL - my Q-learning Tic Tac Toe agent is written in Python and available on Github. There are many ways to optimise the required space, e.g. only representing reachable states. Also in games with perfect control (moves directly result in a new state), it is common to use afterstate values instead of action values.
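
A toy sketch of a dictionary-based Q table for Tic Tac Toe (this is not the code from the linked repository; the string state format is an assumption):

from collections import defaultdict

# The board is a 9-character string ('X', 'O' or '-'); an action is a cell index 0-8.
Q = defaultdict(float)            # (state, action) -> value, defaults to 0.0
alpha, gamma = 0.1, 1.0

def q_update(state, action, reward, next_state, next_actions):
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

q_update("X--O-----", 4, 0.0, "X--OX---O", [1, 2, 5, 6, 7])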

If that is the case, can you suggest other (non-NN) algorithms I could use to create a tic-tac-toe agent to play against a human player?

It is not the case, as explained. However, there are two basic approaches at the top level that are worth learning about for working with game trees:

  • Forward planning or search algorithms, as mentioned by skillsmuggler in their answer. These use game rules plus a heuristic (measure of likely success) to explore the future states of the game and pick the search result with the best heuristic. Minimax with a heuristic only based on win/lose at the end is a very basic approach, but would cope just fine with Tic Tac Toe.

    • Other planning algorithms include Negamax (a minor variation of Minimax for zero-sum games) and MCTS (famously used in Alpha Go).
  • Policy function improvement, which generate policies - maps of current state to actions - and have ways to assess them and select better policies.

    • Q learning, and the many algorithms of reinforcement learning are in this category, as well as genetic algorithms.
    • Tic Tac Toe is simple enough that you could hard-code a policy function that mapped any state to the next action. The xkcd webcomic encoded this policy into a diagram.

These two approaches are complementary, in that both can be used together to solve more sophisticated problems. For instance, value-based reinforcement learning algorithms - including Q learning - can be used to provide learned heuristics for a search algorithm. The categories I suggest above are not strict either, in that some algorithms are not clearly one thing or another.

",1847,,2444,,4/25/2021 10:48,4/25/2021 10:48,,,,0,,,,CC BY-SA 4.0 27492,2,,20591,4/24/2021 11:35,,1,,"

Possibly a bit late with the answer, but I doubt you'd be able to run GPT-2 774M in FP32 on a 2070 Super, which has 8GB of VRAM. I know it's not an exact comparison, but fine-tuning BERT Large (345M) in FP32 easily takes more than 10GB of VRAM. You might be able to run GPT-2 774M if you run it in FP16.

Alternatively, you can use Google Colab TPUs, which provide 11GB+ of memory. Here's a good source listing a few posts about fine-tuning GPT-2 1.5B on Google Colab TPUs: https://news.ycombinator.com/item?id=21456025

And here's the notebook itself demonstrating the process: https://colab.research.google.com/drive/1BXry0kcm869-RVHHiY6NZmY9uBzbkf1Q#scrollTo=lP1InuxJTD6a

",28160,,,,,4/24/2021 11:35,,,,0,,,,CC BY-SA 4.0 27493,1,27503,,4/24/2021 15:15,,2,161,"

In researching genetic algorithms, it seems that there are various methods of selection and other operator methods that can significantly change the performance. For example, this picture contains some of the methods that could be used:

Presumably, you can mix & match these operators to optimize whatever problem you're trying to solve.

What most people care about is how many iterations it takes to get to the target. This is understandable, but I've seen things that would be inefficient in real systems such as:

  • using sort on the current population $\mathcal{O}(n \log n)$ and picking the first n members for the mating pool

  • appending to a constantly resizing slice to create a mating pool instead of rewriting on the current array

What I am looking for is how I can arrive at the target using the least amount of computation and memory possible. The number of iterations and the time taken to get there are still secondary priorities.

It's possible that it may be the process of picking the right operators, but what I am also considering is how parallelizable the implementation could be as well.

",46522,,2444,,4/25/2021 10:57,4/25/2021 21:26,What is the most computationally efficient genetic algorithm?,,1,1,,,,CC BY-SA 4.0 27497,1,27499,,4/25/2021 10:42,,1,147,"

Are there any Keras-like libraries for AGI? Is there any deep-learning-style framework for developing AGI applications?

Existing frameworks/algorithms used in NNs, NLP, ML, etc. are not enough in my opinion. In my opinion, any such framework has to be based on building blocks from cognitive science, neuroscience, mathematics, artificial intelligence, computer science, psychology, sociology, etc.

",46540,,46540,,4/28/2021 16:00,4/28/2021 16:00,"Is there any specific SW framework, libraries or algorithms (supported by any theory) designed for implementing a practical AGI system?",,2,6,,4/27/2021 11:03,,CC BY-SA 4.0 27499,2,,27497,4/25/2021 12:08,,2,,"

Not to my knowledge. The problem is that this is such an enormous task, it cannot really be tackled at once. So the obvious solution is to reduce the scope. In early AI people were using toy domains, whereas nowadays AI systems work more generally (but still perform better if the domain is restricted).

So while (slow) progress is being made putting the individual building blocks together, we're still a long way off building an overarching general system from them.

",2193,,,,,4/25/2021 12:08,,,,1,,,,CC BY-SA 4.0 27500,1,,,4/25/2021 12:30,,2,400,"

In the maximum entropy inverse reinforcement learning paper, Ziebart et al. show that the state visitation frequency $\rho(s)$ of a state $s$ can be computed as $$ \rho_{\pi}(s) = \sum_{t}^{T} P(s_t=s|\pi), $$ which is the sum of the probability that the state being visited at each time step.

I just don't understand why it is a sum. From my perspective, a frequency should be less than one, so it seems it should be the average value

",40242,,,,,6/29/2022 12:08,Why is it that the state visitation frequency equals the sum of state visitation frequency from initial time step to the horizon?,,1,2,,,,CC BY-SA 4.0 27502,1,,,4/25/2021 17:04,,0,548,"

My framework is an encoder-decoder (LSTM-to-LSTM) model, similar to this post. The model basically reads a sentence and generates another sentence. But the thing is, after a few epochs of training, the model cannot produce any meaningful output; instead, it keeps generating repeated words.

The framework can translate French-English very effectively, but for my problem it generates results like the following.

Can you explain why it produces such a result? Thank you.

  • Here is my printed output:

",38455,,38455,,4/25/2021 20:27,4/25/2021 23:08,Seq2Seq model produces repeating words,,1,2,,,,CC BY-SA 4.0 27503,2,,27493,4/25/2021 20:48,,1,,"

First of all, for a lot of realistic problems, the fitness function evaluation is usually orders of magnitude greater in complexity than the rest of the genetic algorithm. This is not always true, but often is true (e.g. imagine trying to optimise a simulation where you need to execute the simulation completely to obtain the fitness). So optimising the GA itself might only be helpful when the fitness function is lightweight (e.g. it's a mathematical function as used in some competitions).

While your diagram shows a lot of operators, it doesn't show variations of the GA itself. Here are two:

  • generational: selects a new population at each iteration after performing mutation and crossover on the existing population to generate offspring (which are usually more in number than the population size)
  • steady state: maintains a single population and replaces only individuals during each iteration with their offspring according to selection/replacement rules, so uses less memory than a traditional GA

If you really wanted to maximise efficiency in the GA, a steady state approach would probably work best, combined with tournament selection, as tournament selection does not require any sorting. The mutation/crossover operators you list are all likely to be quite efficient, but simpler methods are likely to be the most efficient (e.g. bit-flip mutation + 1-point crossover). For a realistic problem, however, problem-specific operators usually work best.
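
A rough sketch of tournament selection over a bitstring population (the toy fitness function just counts ones):

import random

def tournament_select(population, fitnesses, k=3):
    # Sample k individuals at random and return the fittest one;
    # no sorting of the whole population is needed.
    contenders = random.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitnesses[i])
    return population[best]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
fitnesses = [sum(ind) for ind in population]
parent = tournament_select(population, fitnesses)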

",7760,,7760,,4/25/2021 21:26,4/25/2021 21:26,,,,0,,,,CC BY-SA 4.0 27504,2,,27497,4/25/2021 20:57,,2,,"

Would OpenCog fit the bill? I have had tremendous amounts of trouble building up the demos, which include some non-AGI stuff, but if I’ve read the manual correctly, I think there’s something here — https://opencog.org/

",46552,,,,,4/25/2021 20:57,,,,1,,,,CC BY-SA 4.0 27505,2,,27502,4/25/2021 23:08,,1,,"

The trained model predicts the probability of a given sequence of tokens. Whatever NLP task you are doing, you usually want to get a high-probability sample from that probability distribution. This sampling task could be quite non-trivial.

What you are seeing is most likely the result of greedy sampling - the most probable next word is chosen from the probability distribution. This quite often leads to infinite repetition loops, as you are experiencing.

The simplest way to solve the repetition problem is to actually sample randomly from the probability distribution of the next word - the distribution is usually adjusted with a so-called temperature parameter.
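
A rough sketch of temperature sampling from the model's output logits (the logits here are made-up numbers):

import numpy as np

def sample_with_temperature(logits, temperature=0.8):
    # Lower temperature makes the choice closer to greedy; higher makes it more random.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

next_token = sample_with_temperature([2.0, 1.0, 0.2, -1.0], temperature=0.7)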

Finally, there's beam search - you perform a tree search for the best N-gram sample of the future tokens. That's the most sophisticated, and the most computationally expensive, one.

",20538,,,,,4/25/2021 23:08,,,,1,,,,CC BY-SA 4.0 27507,1,,,4/26/2021 4:54,,1,816,"

I have some free text (think: blog articles, interview transcripts, chat comments), and would like to explore the text data by analysing the proper nouns it contains.

I know of many ways to simply look up the text against a 'list' of proper nouns. The problem with the approach is many false positives and false negatives, as well as inaccuracies where one proper noun (e.g. "John Allen") is identified as two proper nouns ("John" and "Allen"), as well as other problems, mostly to do with long or unusual proper nouns (e.g. "the Gulf of Carpentaria" - a single proper noun containing the word "of", and long names like "Joost van der Westhuizen"). These kinds of longer, non-conformist proper nouns tend to really trip up grep-style proper noun identification models.

Does anyone know if any AI available to the public can more accurately identify proper nouns in free text?

",44605,,44605,,4/26/2021 5:04,4/27/2021 14:10,Is there an AI that can extract proper nouns from free text?,,1,0,,,,CC BY-SA 4.0 27508,1,,,4/26/2021 6:47,,0,39,"

I would like to create a chat bot for an e-commerce website that sells a wide range of general merchandize items, from t-shirts, jumpers to calculators. Its primary objective is to develop a Q&A option for visitors/potential customers, to improve engagement on the website. As such, the chat bot is required to be fairly conversational.

I am experienced in classification et al, but know only the very basics on NLP. Can you provide suggestions on where to begin, e.g., recommended readings/ sources?

Also note, there is currently no chat bot system in place, and hence no historical conversation data of any form.

",46566,,40434,,5/8/2021 17:01,5/8/2021 17:01,Creating a NLP driven chatbot,,1,1,,4/27/2021 11:08,,CC BY-SA 4.0 27509,1,35629,,4/26/2021 7:18,,3,122,"

I am training an object detection machine learning pipeline. Among the many metrics provided out of the box by the TensorFlow Object Detection API, I look at total_loss and DetectionBoxes_Precision/mAP@.75IOU:

Here the x-axis is the number of steps i.e. model experience. The orange line is for the training data and the blue line is for the validation data.

From the loss graph, I would conclude that overfitting starts at approximately 2k steps, so using the model at approximately 2k steps would be the best choice. But looking at the precision graph, training until e.g. 24k steps would give a much better model. Which one is the best model?

Here, the loss and the precision metric were picked just to illustrate the dilemma; there are many more metrics available, leading to different conclusions about when overfitting actually starts.

",46570,,46570,,4/28/2021 6:06,5/23/2022 14:52,When exactly am I overfitting -- contradicting metrics,,2,0,,,,CC BY-SA 4.0 27510,2,,27508,4/26/2021 8:52,,1,,"
  1. The deeplearning.ai Natural Language Processing Specialization is a great place to start. It'll brush up some basic concepts of NLP and then get into SOTA methods such as Transformers. There's even a project in one of the courses about building a chatbot using Transformers, and it has a ton of papers and other sources mentioned.

  2. Rasa is a great software tool/package to get started with chatbots too. It's a great engaging community so your queries should be quickly answered.

",40434,,,,,4/26/2021 8:52,,,,0,,,,CC BY-SA 4.0 27511,1,,,4/26/2021 9:07,,0,116,"

I am building a solution for an environment with stochastic rewards in an online setting. I am wondering what the state of the art is in this setting. Is it $\epsilon$-greedy (with logistic regression), LinUCB (with ridge regression), or Thompson Sampling (with some approximator)? Could you maybe point me to the relevant papers/articles?

",2254,,2444,,4/27/2021 11:07,4/27/2021 11:07,What are the state-of-the-art learning algorithms for contextual bandits with stochastic rewards,,0,2,,,,CC BY-SA 4.0 27512,1,27520,,4/26/2021 9:47,,1,34,"

How is the jump from line 1 to line 2 done in equation 10 of Show, Attend and Tell?

While we're at it, another thing that might be muddying the waters for me is that I'm not clear on what the sum is over. I know that $s$ is indexed as $s_{t,i}$, but the one-hot trick from line 2 to 3 makes me believe that the sum is over just $i$.

",16871,,2444,,4/30/2021 9:37,4/30/2021 9:38,"How is the variational lower bound for hard attention derived in Show, Attend and Tell",,1,0,,,,CC BY-SA 4.0 27514,1,,,4/26/2021 10:18,,2,366,"

I am trying to add more data points in my (almost) balanced dataset for training my neural network. I have come across techniques such as SMOTE or Random Over Sampling, but they work best for imbalanced data (as they balance the dataset). How can I do this and is it even worth it?

P.S. I know copying the same data points and appending them at the end doesn't add much value, but can we do it, and can it help to increase the prediction accuracy?

",35760,,2444,,5/29/2021 13:29,1/28/2023 1:03,How do I increase the size of an (almost) balanced dataset?,,6,0,,,,CC BY-SA 4.0 27516,2,,27507,4/26/2021 10:59,,2,,"

This is a hard problem, unless you have a list of proper nouns you want to recognise. If John Allen is in this list, then you can easily use a longest match to prefer it over John or Allen. The same applies to the other examples you give.

Capitalisation on its own is not very reliable, as words at the beginnings of sentences are also capitalised, and sometimes technical terms or emphasis are expressed By Capitalising Them In Mid-Sentence. It's not what I would do, but you really have to expect anything in free text.

You could go some way by looking for sequences of proper nouns with of, de or van etc. between them. There is probably a reasonably small list of those connectors.

You can set up a grammar to capture complex names, but by far the most reliable way is to have a list. Unfortunately there is no easy solution for this. I would approach it iteratively, ie process a segment of text, tidy up the results, generate a list from them, and repeat with more text. Everything that is in the list you accept, other candidates you vet before adding them to the list. Eventually you should get fewer candidates that are not in the list.

You can probably kick-start your list with a gazetteer, ie a list of names and places. There are several of those on the web.

In general this procedure is referred to as Named Entity Recognition (NER). There are many ways to solve it, and my suggestion here is a pragmatic approach, which works reasonably well with not too much effort.
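
A rough sketch of the list-plus-longest-match idea (the list entries are just the examples from the question):

import re

# In practice, this list comes from a gazetteer plus the iterative vetting above.
known = ["John Allen", "John", "Allen", "the Gulf of Carpentaria",
         "Joost van der Westhuizen"]

# Sort by length so the longest entries match first ("John Allen" wins over "John").
pattern = re.compile("|".join(re.escape(n) for n in
                              sorted(known, key=len, reverse=True)))

text = "John Allen sailed across the Gulf of Carpentaria."
print(pattern.findall(text))   # ['John Allen', 'the Gulf of Carpentaria']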

",2193,,2193,,4/27/2021 14:10,4/27/2021 14:10,,,,4,,,,CC BY-SA 4.0 27517,1,,,4/26/2021 11:28,,1,581,"

I have a PPO agent for the discrete action space of the LunarLander-v2 env in gym, and it works well. However, when I try to solve the continuous version of the same env - LunarLanderContinuous-v2 - it totally fails. I guess I made some mistakes in converting the algorithm to the continuous version. The steps I changed in the algorithm are:

  1. Change the network: return mu and var of shape n_actions. I have 2 different last layers for that: for mu I return Tanh of the logits and for var I return Softplus of the logits.
  2. Choosing an action: sample the action from a normal distribution with mean mu and variance var - torch.distributions.multivariate_normal.MultivariateNormal(torch.squeeze(mu), torch.diag_embed(var))
  3. For the log of the action probability, I am using dist.log_prob(actions)

With these small changes, my algorithm doesn't work at all. Are these the right steps to convert an algorithm with a discrete action space to one with a continuous action space? I am really confused, because my PPO for the discrete action space works very well, and with only these changes it fails. Could you please suggest what I am doing wrong here?

",46574,,46574,,4/26/2021 12:03,4/26/2021 12:03,PPO in continuous control not working,,0,5,,,,CC BY-SA 4.0 27520,2,,27512,4/26/2021 13:11,,2,,"

This is Jensen's inequality at work.

First of all, note that the first line can be rewritten as an expectation

$$\sum_{s} p(s \mid \mathbf{a}) \log p(\mathbf{y} \mid s, \mathbf{a}) = \mathbb{E}_{p(s|a)}[\log p(\mathbf{y} \mid s, \mathbf{a})]$$

Then Jensen's inequality gives (note that the log function is concave, so it gives the opposite inequality to the one normally stated when explaining Jensen's inequality for convex functions):

$$\mathbb{E}_{p(s|a)}[\log p(\mathbf{y} \mid s, \mathbf{a})] \leq \log\mathbb{E}_{p(s|a)}[ p(\mathbf{y} \mid s, \mathbf{a})] $$

and then finally you can rewrite the Expectation as a summation.

$$\log\mathbb{E}_{p(s|a)}[ p(\mathbf{y} \mid s, \mathbf{a})] = \log \sum_{s} p(s \mid \mathbf{a}) p(\mathbf{y} \mid s, \mathbf{a})$$

",46543,,2444,,4/30/2021 9:38,4/30/2021 9:38,,,,0,,,,CC BY-SA 4.0 27521,2,,27484,4/26/2021 13:45,,1,,"

Epsilon-greedy is unaffected by scaling of the rewards; it always selects a random action with a probability of epsilon.

On the other hand, if we look at the formulation of UCB (Section 2.7 of Reinforcement Learning, Sutton and Barto):

$$A_t \doteq \underset{a}{\operatorname{argmax}} [\mathcal{Q}_t(a) + c \sqrt{\frac{\ln t}{N_t(a)}}]$$

where $Q_t(a)= \frac{R_1 + R_2 + \dots + R_{n-1}}{n-1}$ is the average of the rewards seen so far for that action. By scaling the rewards, for example by 1/10, you are essentially scaling $Q_t(a)$ by 1/10, which will change UCB's behaviour to favour the exploratory right-hand term $c\sqrt{\frac{\ln t}{N_t(a)}}$. If you want to offset your scaling of the rewards, you should scale c by the same amount. Theoretically, I believe this should result in performance the same as that achieved pre-scaling.

In fact, another way of looking at it is that when using UCB, you could dispense with the $c$ constant altogether and simply scale the rewards to tune the exploitation/exploration trade-off. Smaller rewards favour exploration, larger rewards favour exploitation.
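
A rough sketch of UCB action selection that makes the interplay between the reward scale and c explicit (the numbers in the example call are arbitrary):

import numpy as np

def ucb_action(q_values, counts, t, c=2.0):
    counts = np.asarray(counts, dtype=np.float64)
    q_values = np.asarray(q_values, dtype=np.float64)
    untried = np.where(counts == 0)[0]
    if len(untried) > 0:
        return int(untried[0])            # try every arm at least once
    bonus = c * np.sqrt(np.log(t) / counts)
    # Scaling q_values by 1/10 without rescaling c shifts the argmax towards the bonus term.
    return int(np.argmax(q_values + bonus))

action = ucb_action(q_values=[0.1, 0.0, 0.05], counts=[3, 2, 1], t=6)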

",46543,,46543,,4/26/2021 13:55,4/26/2021 13:55,,,,7,,,,CC BY-SA 4.0 27526,1,,,4/26/2021 21:06,,1,411,"

My RL project has all positive continuous rewards at every step, and the goal is to maximize the cumulative reward (episodic reward). The problem is that the rewards are very close together, all between 5 and 6, so achieving the optimal episodic reward is harder.

What scaling methods are recommended? (like min-max scaling or reward ** 3)

How can I emphasize the episodic reward?

",46577,,2444,,4/30/2021 11:13,1/20/2023 17:00,How to scale all positive continuous reward?,,1,0,,,,CC BY-SA 4.0 27527,2,,27514,4/26/2021 22:30,,0,,"

Random over sampling creates duplicates of existing examples, so applying this to your training data would be the same as increasing the weight of the oversampled examples. If it's done to all of the examples uniformly then the effects will probably cancel out. SMOTE, on the other hand, creates synthetic examples that are linear combinations of existing examples. Thus it can be thought of as a type of data augmentation, and in some situations this might improve your model's predictions.
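A small sketch with imbalanced-learn on illustrative synthetic data shows the difference between the two:

from collections import Counter

from imblearn.over_sampling import RandomOverSampler, SMOTE
from sklearn.datasets import make_classification

# illustrative imbalanced dataset: roughly a 90% / 10% class split
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
print(Counter(y))

# random oversampling: duplicates existing minority examples
X_ros, y_ros = RandomOverSampler(random_state=0).fit_resample(X, y)

# SMOTE: synthesizes new minority examples by interpolating between neighbours
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X, y)
print(Counter(y_ros), Counter(y_sm))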

",7760,,,,,4/26/2021 22:30,,,,0,,,,CC BY-SA 4.0 27528,2,,4774,4/27/2021 0:17,,2,,"

This is a whole sub-field of reinforcement learning known as model-based reinforcement learning. The idea in model based RL is to learn the mapping from current state/action to next state in order to facilitate learning good policies.

If you are dealing with images as inputs I would recommend checking out the Dreamer papers. The most recent being this one.

",46543,,,,,4/27/2021 0:17,,,,1,,,,CC BY-SA 4.0 27529,2,,27309,4/27/2021 0:30,,2,,"

This is how I understand it.

Batch normalization is used to remove internal covariate shift by normalizing the input to each hidden layer using statistics computed across the entire mini-batch, which averages over the individual samples, so the input to each layer is always in the same range. This can be seen from the BN equation:

$$ \textrm{BN}(x)= \gamma\left(\frac{x-\mu(x)}{\sigma(x)}\right)+\beta $$

where $\gamma$ and $\beta$ are affine parameters learned from data; $\mu(x)$ and $\sigma(x)$ are the mean and standard deviation, computed across batch size and spatial dimensions independently for each feature channel. First, we normalize each channel with 0 mean and standard deviation of 1 according to batch statistics. We then scale and shift each channel with $\gamma$ and $\beta$.

This is fine if you want to classify an average object in an image seen from different viewing angles and lighting conditions. Layer normalization (LN), by contrast, normalizes each sample on its own. It is defined similarly to BN:

$$ \textrm{LN}(x)= \gamma\left(\frac{x-\mu(x)}{\sigma(x)}\right)+\beta $$

but now $\mu(x)$ and $\sigma(x)$ are computed across all channels for each individual sample. Here's an illustration of the difference:

So layer normalization averages input across channels (for 2d input), which preserves the statistics of an individual sample. In some cases, we want to penalize the weights norm with respect to an individual sample rather than to the entire batch, as was done in WGAN-GP.
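A small NumPy sketch of which axes the statistics are computed over (the learned $\gamma$ and $\beta$ are omitted):

import numpy as np

x = np.random.randn(8, 16, 32, 32)   # (batch, channels, height, width)

# batch norm: statistics per channel, across batch and spatial dimensions
bn_mean = x.mean(axis=(0, 2, 3), keepdims=True)   # shape (1, 16, 1, 1)
bn_std = x.std(axis=(0, 2, 3), keepdims=True)
x_bn = (x - bn_mean) / bn_std

# layer norm: statistics per sample, across channels and spatial dimensions
ln_mean = x.mean(axis=(1, 2, 3), keepdims=True)   # shape (8, 1, 1, 1)
ln_std = x.std(axis=(1, 2, 3), keepdims=True)
x_ln = (x - ln_mean) / ln_std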

In terms of style transfer for images, it is also important to preserve the individual color statistics of a sample. Therefore, StyleGAN uses adaptive instance normalization, which is an extension of the original instance normalization, where each channel is normalized individually.

In addition, BN has several problems: the batch size must be large enough to capture overall statistics, which is sometimes impossible if you are working with large images since the model won't fit in memory. The concept of a batch is not always present, or it may change from time to time.

I strongly encourage you to read the original BN paper and also:

  • Adaptive instance normalization
  • Group Normalization

",12841,,12841,,4/27/2021 14:03,4/27/2021 14:03,,,,6,,,,CC BY-SA 4.0 27530,1,,,4/27/2021 0:32,,1,35,"

I want to build a model that when given two vectors, outputs the probability of one vector being the encoded form of the other. I have 2 strategies for this: (Dataset available)

  1. I can directly feed them in concatenation to a neural network and take the output as the probability.

  2. I can train a conditional GAN with the conditional vector being the encoded vector and using the original vector as the generated one. In this case, the discriminator works as the network that I train in the first approach.

Which approach is better? Am I thinking in the right direction?

",46591,,46591,,4/27/2021 8:59,4/27/2021 8:59,Which approach best suits vector encodings?,,0,2,,,,CC BY-SA 4.0 27533,2,,27514,4/27/2021 3:15,,-1,,"

Gretel is a good tool for processing data. Facets is good for the visualizations.

Is it worth it? Most learners will exhibit a bias towards the majority class and, in extreme cases, may ignore the minority class altogether.

It really depends on the goal and requirements of your project. Just because balancing is desirable in general does not mean it is better for your particular project; if your dataset is almost balanced, you should probably continue with something else and consider balancing your data later in the project.

Edit: What's new to learn from the same example?

",46594,,46594,,8/5/2021 20:07,8/5/2021 20:07,,,,1,,,,CC BY-SA 4.0 27534,1,27560,,4/27/2021 7:48,,1,69,"

I have a set of time-series data which gives me Fibonacci levels and the duration for which the value stays at each level. The data structure looks like:

Date / Duration (minutes) / Level

201201 / 380 / 2
.....

210422 / 400 / 4

I'd like to create a NN model (maybe an LSTM) that would forecast the next level and the probability of reaching it, for several steps ahead (1 step = 400 minutes). Which time-series forecasting model would you recommend? Thanks in advance.

",46600,,40434,,4/28/2021 17:23,4/28/2021 20:20,Recommended Time serie forecasting model for Fibonacci levels classification,,1,4,,,,CC BY-SA 4.0 27536,2,,27514,4/27/2021 11:34,,0,,"

I presume you are attempting to solve a classification problem.

IMO, there's no decision-making template you could follow to know whether to use oversampling or not. I would typically compare results (ROC AUC, PRC curves) across datasets (original vs undersampled vs oversampled) to decide.

You can consider some additional variants of SMOTE, like SMOTE-NC (SMOTE does not work if any of your predictors is categorical, SMOTE-NC does), Borderline SMOTE, K-means SMOTE, as well as ADASYN. Alternatively, you can also choose to undersample your majority class using techniques such as ENN, Tomek Links, Instance Hardness, CNN, One-Sided Selection, etc. They usually generate better results than random over/under sampling.

Do note that over/under sampling methods are generally used for imbalanced datasets.

",46566,,,,,4/27/2021 11:34,,,,1,,,,CC BY-SA 4.0 27538,1,27557,,4/27/2021 13:58,,4,128,"

In the classical examples of deep q-learning, I often see neural networks in which the input represents the state of the agent, while the output is a tuple with all the values of $Q(s, a)$ predicted for all the possible $N$ actions.

Would it be cheaper to have $N$ neural networks with a single real-valued output, one for each of the $N$ actions?

With cheaper I mean cheaper in terms of the time complexity of a single training step of the network.

",46607,,46607,,4/29/2021 20:17,4/29/2021 20:17,"In DQN, would it be cheaper to have $N$ neural networks with a single real-valued output, one for each of the $N$ actions?",,1,8,,,,CC BY-SA 4.0 27539,1,,,4/27/2021 15:11,,0,24,"

I have a short question. I understand the concept of RPN, but one small detail keeps me from implementing it. How should I design the training loop, given that I have to use only a subset of anchor boxes (128 positive and 128 negative)? In other words, if the output of the reg layer is a map of every bounding box, how do I update only the bounding boxes that match my 128 positive/128 negative samples?

",46588,,46588,,5/8/2021 20:33,5/8/2021 20:33,How to design training loop in RPN?,,0,2,,,,CC BY-SA 4.0 27541,1,,,4/27/2021 23:11,,2,112,"

I am thinking about episodic MDPs. Usually, in episodic MDPs, it seems that we have a finite fixed horizon per episode and no discount factor. Then, a very intuitive notion of regret after $T$ episodes is to sum over the difference of optimal expected return and achieved expected return.

I was wondering about notions of regret for infinite horizon discounted MDPs. It is not clear to me what a reasonable notion of regret for this setting would be, and I am also not aware of any standard definition of regret in this setting.

Maybe, as a justification for infinite horizon episodic MDPs, this quote by Littman in his paper: Markov games as a framework for multi-agent reinforcement learning

As in MDP's, the discount factor can be thought of as the probability that the game will be allowed to continue after the current move. It is possible to define a notion of undiscounted rewards [Schwartz, 1993], but not all Markov games have optimal strategies in the undiscounted case [Owen, 1982]. This is because, in many games, it is best to postpone risky actions indefinitely. For current purposes, the discount factor has the desirable effect of goading the players into trying to win sooner rather than later.

",36978,,2444,,4/28/2021 9:54,4/28/2021 9:54,Is there any reasonable notion of regret for infinite horizon discounted MDPs?,,0,1,,,,CC BY-SA 4.0 27543,1,,,4/28/2021 1:16,,1,61,"

Consider the following statement from Deep Learning book (p. 327, chapter 9: Convolutional Networks)

In its most general form, convolution is an operation on two functions of a real-valued argument.

Suppose $f$ and $g$ are functions on which I want to apply convolution operation. What is meant by two functions of a "real-valued argument" in this context?

Does it mean $f$ and $g$ are real-valued functions? Or does it mean $f$ and $g$ are real functions? or any other?

  • Real-valued function: Function whose codomain is a subset of real numbers

  • Real function: Function whose domain and codomain are a subset of real numbers.

",18758,,2444,,4/28/2021 9:47,4/28/2021 9:47,"What is meant by ""real-valued argument"" in this context of the convolution operation?",,1,0,,,,CC BY-SA 4.0 27544,2,,27526,4/28/2021 2:09,,0,,"

I'll try to find where I found it, but normalizing the rewards has always worked for me. Assuming you have a list of the discounted returns for each action, you subtract the list's average value from it and then divide it by its standard deviation. In Python with NumPy, that would look like:

import numpy as np

returns -= np.mean(returns)   # zero mean
returns /= np.std(returns)    # unit standard deviation

This puts the returns in a small and consistent range that keeps learning similar with different rewards.

",41026,,,,,4/28/2021 2:09,,,,6,,,,CC BY-SA 4.0 27546,2,,27543,4/28/2021 6:03,,1,,"

In its most raw form, convolution is defined as: $(f*g)(t) = \int_{-\infty}^\infty f(\tau) \cdot g(t-\tau) d\tau$.

Here, $t$ doesn't have to represent the time domain. In fact, it represents the real-valued argument the book is talking about. In this view, at the point $t$, the convolution can be thought of as a weighted average of the function $f(\tau)$, weighted by $g(-\tau)$ shifted by the amount $t$.

",46214,,,,,4/28/2021 6:03,,,,1,,,,CC BY-SA 4.0 27547,2,,27509,4/28/2021 7:59,,0,,"

Generally, a model is considered to be overfitted when there's a huge gap between training and test/validation performances.

So during the training, you monitor the loss on validation data, and training data, and stop training if validation loss stagnates/increases given the training loss keeps decreasing.

In your scenario, I'm not sure what the total loss metric corresponds to. As I said, you have to measure the loss on held-out data other than the training data to detect and prevent overfitting.

",32621,,,,,4/28/2021 7:59,,,,1,,,,CC BY-SA 4.0 27550,1,27561,,4/28/2021 12:34,,1,55,"

Let's say I have the time-series dataset below-left. I would like to train a model in such a way that, if I feed the model with an input like the test sequence below, it should be able to classify each sample with the correct class label.

    Training Sequence:              Test Sequence:

Time,   Bitrate,   Class           Time,   Bitrate    Predicted Class
 0,       312,       1              0,       234      -----> 0
 0.3,     319,       1              0.2,     261      -----> 0
 0.5,     227,       0              0.4,     277      -----> 0
 0.6,     229,       0              0.7,     301      -----> 1 
 0.7,     219,       0              0.8,     305      -----> 1
 0.8,     341,       1              0.9,     343      -----> 1 
 0.9,     281,       0              1.0,     299      -----> 0 
          ...                                ...

So, what kind of neural network should I build to classify each instance of a time series sequence?

",41691,,2444,,4/29/2021 10:42,5/8/2021 17:03,What kind of neural network should I build to classify each instance of a time series sequence?,,1,0,,,,CC BY-SA 4.0 27557,2,,27538,4/28/2021 19:29,,3,,"

Would it be cheaper to have $N$ neural networks with a single real-valued output, one for each of the $N$ actions?

I think the "No Free Lunch" theorem applies here, or something like it.

Your proposed architecture would be an unusual choice in many cases, but might be more efficient in others. For instance, it could be more efficient in the following scenario:

The long term value is highly dependent on the immediate action choice, and in a way that relies on state variables differently, depending on the specific action. That means it would be difficult for a single NN to create shared features in its layers, and you could save processing by treating each action as a different prediction problem.

This is only an educated guess.

As usual, the only way to find out for sure is to try different approaches and compare them. I don't think there is anything other than experience and a little intuition to guide you.

",1847,,1847,,4/28/2021 21:19,4/28/2021 21:19,,,,0,,,,CC BY-SA 4.0 27558,1,,,4/28/2021 19:32,,1,29,"

I am currently researching the topic of weight initialization methods for (deep) neural networks and I'm a little stuck. The result of my work should be an overview of methods that are currently in use.

I already collected information about different methods (Xavier/He, LSUV, Pre-Training, etc.), but I was wondering if anyone has a method that comes to mind (maybe recently developed) that I should look at more closely?

",46634,,2444,,4/30/2021 11:08,4/30/2021 11:08,Which methods for weight initialization in Neural Networks are currently common practice?,,0,1,,,,CC BY-SA 4.0 27559,2,,18366,4/28/2021 19:39,,0,,"

I assume you want a model that uses the Softmax as the output layer.

Basically, the Softmax will produce a set of probabilities that all sum up to 1. So if you have three classes in your data the Softmax will produce these confidence values by default, even though this is not exactly its main functionality.

The Softmax is commonly used on multiclass data.

",32265,,,,,4/28/2021 19:39,,,,0,,,,CC BY-SA 4.0 27560,2,,27534,4/28/2021 19:53,,1,,"

It seems like you are trying to produce two outputs here. This usually makes the models more complex. What if you only predicted the Fibonacci levels for each time step? Then count how many time steps it stays at that level.

As for producing the Fibonacci levels themselves, you can look into categorical time series where the values are categories instead of real numbers.

You have a many-to-one problem. So, yes the model takes two inputs, Duration and Level, but only produces one output, the Level.

The time steps in a model will always be the same. You have to choose a window length that is appropriate, and all your predictions will be based on that. It's important to note that finding the best number of time steps is a trial-and-error process. For example, you could first try using 10 time steps and, if you find the model needs more, retrain it with 20 and compare the results. A minimal sketch of such a many-to-one model is shown below.
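As a minimal many-to-one sketch in Keras (the sizes and the data are made up, only the shape of the problem matters):

import numpy as np
from tensorflow import keras

n_steps, n_features, n_levels = 10, 2, 5   # window length, (duration, level) inputs, number of levels

# X: windows of past (duration, level) pairs; y: the level at the next step, as a class index
X = np.random.rand(100, n_steps, n_features).astype("float32")
y = np.random.randint(0, n_levels, size=100)

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(n_steps, n_features)),
    keras.layers.Dense(n_levels, activation="softmax"),   # probability of each level
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=5, verbose=0)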

",32265,,32265,,4/28/2021 20:20,4/28/2021 20:20,,,,3,,,,CC BY-SA 4.0 27561,2,,27550,4/28/2021 20:07,,1,,"

A time series usually requires regular time intervals, but, looking at your example, it seems that's not the case. You could try using an MLP, give it the Time and Bitrate pairs as input, and make it output the Class.

The activation function is what makes your neural network produce its output, i.e. activate the neurons. The loss function calculates the error of your model's predictions. This error is used by the backpropagation algorithm to adjust the model when training.

If your data can only be in one of two classes, for example 0 or 1, you have a binary classification problem. I recommend using the Sigmoid as the activation function. If you look at the Sigmoid's plot, you'll notice that, no matter the input, the output will always be within 0 and 1. You can think of this as the probability of the input being in class 0 or 1. So, for example, any input that produces a probability under 0.5 is class 0. Of course, if you want to be more precise, you can simply keep the probabilities without mapping them to hard labels. For example, an output of 0.7 means there is a 70% probability of the input belonging to class 1. Another term for this is Logistic Regression.

The Binary cross-entropy calculates the error between the model's prediction and the real class label. This loss function can be applied to many classes, but the binary version keeps it to 0 and 1.

Choosing the right activation functions and loss functions will depend on the problem you are trying to solve. Here are some useful guides on loss functions and activation functions.

In addition to the above, try scaling (normalizing) the inputs before feeding them to the network, so that the network can generalize better; it looks like a bitrate value above 300 is likely to be associated with class 1. It may improve performance. A minimal sketch is shown below.
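A minimal sketch of the above in Keras (illustrative data and layer sizes, just to show the sigmoid output and binary cross-entropy loss):

import numpy as np
from tensorflow import keras

# illustrative (time, bitrate) inputs and binary labels
x = np.array([[0.0, 312], [0.3, 319], [0.5, 227], [0.8, 341]], dtype="float32")
y = np.array([1, 1, 0, 1], dtype="float32")

# normalize the inputs, as suggested above
x = (x - x.mean(axis=0)) / x.std(axis=0)

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(2,)),
    keras.layers.Dense(1, activation="sigmoid"),   # outputs the probability of class 1
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=10, verbose=0)
print(model.predict(x))   # threshold at 0.5 to get class labels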

",32265,,28571,,5/8/2021 17:03,5/8/2021 17:03,,,,3,,,,CC BY-SA 4.0 27562,1,,,4/28/2021 21:54,,1,1525,"

I have the problem above and I'm trying to work out how to apply alpha-beta pruning to it. The algorithm states that on your opponent's turn (the expecti turn) you just return the lowest value, but does that mean you apply the probabilities to those values? So for the far left you'd get 2 as the largest value and then multiply that by 0.5, but then that sets $\beta$ in the expecti node to $0.5 \cdot 2 = 1$, and when it goes into the branch to the right, it compares values without the probabilities applied to them when updating $\beta$.

",30885,,30885,,4/30/2021 20:19,1/16/2023 14:04,How to alpha-beta pruning to expectiminimax,,3,2,,,,CC BY-SA 4.0 27564,1,27572,,4/29/2021 2:56,,2,2212,"

I'm studying the U-Net CNN architecture. I'm new to CNNs and am confused regarding the "number of channels".

Referring to the U-Net diagram, the input image is convolved with a 3x3 mask which generates a 570x570 output. This output image is then convolved again by a 3x3 mask to produce a 568 x 568 signal. However, what do the 64's correspond to?

The U-Net paper says something about a multi-channel feature map. But how does convolving an image with a 3x3 mask result in a "64"?

",46640,,,,,4/29/2021 14:03,"What does the ""number of channels"" correspond to in U-Net?",,1,2,,,,CC BY-SA 4.0 27565,1,,,4/29/2021 6:42,,2,62,"

I have read about methods that apply continual learning strategies to reinforcement learning.

Since reinforcement learning also learns step by step (i.e., task by task, in a sense) during the training phase, why isn't it itself considered a continual learning strategy?

Of course, I understand that if an agent catastrophically forgets previously learned tasks, there is a need to prevent this and therefore develop strategies to mitigate catastrophic forgetting, but my question is more about the definition. If continuous learning (or online learning) is about learning one task at a time, and RL somehow does this, why is it not considered a continual learning strategy (regardless of the fact that it may not be as effective)?

To clarify, I haven't read anywhere the claim that RL is not a CL approach, but also none that it would be. Only the fact that CL methods are proposed for RL gives me the impression that RL is not considered an approach. Nor have I seen anyone mention RL for this purpose. I'm just wondering why that is.

",43542,,43542,,4/29/2021 11:10,4/29/2021 11:10,Why isn't RL considered a continual learning strategy itself?,,0,5,,,,CC BY-SA 4.0 27567,2,,27514,4/29/2021 8:44,,-1,,"

If your data is well balanced but small, I would recommend using a simpler algorithm to classify your data.

",32265,,,,,4/29/2021 8:44,,,,0,,,,CC BY-SA 4.0 27570,1,,,4/29/2021 13:21,,3,100,"

I am familiar with the currently popular neural network in deep learning, which has weights and is trained by gradient descent.

However, I found many papers entitled "Neural networks for solving XXX optimization problems." These papers were popular from the 1980s to the 2000s.

For example, the first one is "Neural network to solve linear programming problems" [1]. Later, Kennedy et al. used a "neural network" to solve nonlinear programming problems [2].

I summarize the difference between the current popular neural network and the "Neural networks":

  1. They do not have parameter weights and biases to train or to learn from data.
  2. They used a circuit diagram to present the model.
  3. The model can be simplified as an ODE system and has a Lyapunov function to prove stability.

So, my question is: why are these ODE systems called "neural networks"?

Reference:

1: J. J. Hopfield, D. W. Tank, “neural” computation of decisions in optimization problems, Biological265 cybernetics 52 (3) (1985) 141–152.

2: M. P. Kennedy, L. O. Chua, Neural networks for nonlinear programming, IEEE Transactions on Circuits and Systems 35 (5) (1988) 554–562.

",43011,,43011,,10/2/2022 18:12,10/2/2022 18:12,Why some dynamical systems in the form of ODE system called Neural networks in the 90s,,2,4,,,,CC BY-SA 4.0 27572,2,,27564,4/29/2021 14:03,,2,,"

In this example, you have a gray-scale image of size 572x572 with 1 (gray) channel. The first convolution operation consists of 64 filters of size 3x3 with 1 channel per filter. The number of channels in the filters always matches the number of channels of the previous layer (here: the input). In the second convolution step of this architecture, you again use 64 filters of size 3x3. In this case, each of these filters consists of 64 channels, matching the previous output (64 feature maps/channels). The output of the second convolution consists of 64 feature maps, one for each of the 64 filters in the second convolution. This video from Andrew Ng visualizes it perfectly.
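A tiny PyTorch sketch of these first two convolutions shows where the 64 comes from (it is the number of filters, hence the number of output channels):

import torch
import torch.nn as nn

x = torch.randn(1, 1, 572, 572)   # (batch, channels, height, width): one gray-scale image

conv1 = nn.Conv2d(in_channels=1, out_channels=64, kernel_size=3)    # 64 filters, 1 channel each
conv2 = nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3)   # 64 filters, 64 channels each

out1 = conv1(x)
out2 = conv2(out1)
print(out1.shape)   # torch.Size([1, 64, 570, 570])
print(out2.shape)   # torch.Size([1, 64, 568, 568])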

",46500,,,,,4/29/2021 14:03,,,,1,,,,CC BY-SA 4.0 27573,1,,,4/29/2021 14:43,,2,134,"

I was reading Constraint Satisfaction Problem chapter from Artificial Intelligence 3rd ed book by Peter Norvig et al. On page 219, section 6.3 it explains computation of conflict set for conflict directed backjumping as follows:

In our example, $SA$ fails, and its conflict set is (say) $\{WA, NT,Q\}$. We backjump to $Q$,and $Q$ absorbs the conflict set from $SA$ (minus $Q$ itself, of course) into its own direct conflict set, which is $\{NT, NSW\}$; the new conflict set is $\{WA, NT, NSW\}$. That is, there is no solution from $Q$ onward, given the preceding assignment to $\{WA, NT, NSW\}$. Therefore, we backtrack to $NT$, the most recent of these. NT absorbs $\{WA, NT, NSW\}−\{NT\}$ into its own direct conflict set $\{WA\}$,giving $\{WA, NSW\}$ (as stated in the previous paragraph). Now the algorithm backjumps to $NSW$, as we would hope.

To summarize: let $X_j$ be the current variable, and let $conf(X_j)$ be its conflict set. If every possible value for $X_j$ fails, backjump to the most recent variable $X_i$ in $conf(X_j)$, and set
$conf(X_i) ← conf(X_i) ∪ conf(X_j) −\{X_i\}$.

I didn't get the line:

Therefore, we backtrack to $NT$, the most recent of these.

How is $NT$ most recent of $Q$'s conflict set $\{WA,NT,NSW\}$?

PS: Here is the map coloring example the book discusses:

",41169,,2444,,5/4/2021 10:51,5/4/2021 10:51,Understanding conflict set generation for conflict directed backjumping,,0,0,,,,CC BY-SA 4.0 27575,1,,,4/30/2021 5:08,,0,66,"

I have a problem where I need to rearrange a particular user's mobile home screen icon layout. Let's say that the social media app usage of a user is high compared to other app usage. So I need the reinforcement learning algorithm to process this information and send back instructions to the Android operating system as to how the icons need to be arranged. To address this problem, I have chosen three algorithms:

  1. Q-learning.
  2. State-Action-Reward-State-Action.
  3. Deep Deterministic Policy Gradient.

I have decided to first consider only Q-learning, so I am trying to understand the states, rewards, and the actions I need to pass in order to make this algorithm work.

The principles I have considered are:

  • The environment is the mobile device operating system.
  • Moving the apps up in the list depending on their usage can be the action, where an app can be moved left, right, up or down.
  • The reward can be periodic: check whether the user has rearranged an app that was given prominence in the list by the algorithm; receive negative feedback if the user has rearranged the icon position, and a positive reward if it stays in the same position.

The initial challenge I am facing is to understand what inputs/states I need to pass into the algorithm, and whether there is any reinforcement learning library I can use to mimic such an environment.
Are there any resources or papers I can use to solve this problem?

",46672,,46672,,5/2/2021 14:23,5/2/2021 16:17,Reinforcement learning for rearranging the mobile home screen icon layout: what inputs/states do I need to pass into the algorithm?,,0,4,,,,CC BY-SA 4.0 27580,2,,27570,4/30/2021 15:44,,2,,"

Well, the goal of any paper is to allow the reader to understand what the author is trying to describe.

A lot of people have a lot of experience looking at circuit diagrams and figuring out what those circuits will do. For these people, a circuit diagram may be the clearest and easiest way to understand how a particular thing works. So, it makes sense that an author would include a circuit diagram, in order to make it easy for those people to understand the concepts.

There are two particular reasons why a circuit diagram is especially likely to show up in a paper about neural networks:

The first reason is that analog circuits are closely related to ordinary differential equations, and digital circuits are closely related to sequential logic. So, if you have a neural network or something that uses ordinary differential equations or sequential logic, then a circuit diagram might be a simple way to express how it works.

The second reason is that a lot of researchers who are familiar with computers are also familiar with electronic circuits. This was especially true in the early days of computers, when people had to be familiar with electronics, math, or both in order to understand how to program a computer.

",1834,,,,,4/30/2021 15:44,,,,2,,,,CC BY-SA 4.0 27581,2,,27570,4/30/2021 15:58,,3,,"

In the early days of neural networks the theorists and practitioners were educated in mathematics, psychology, neurophysiology, electrical engineering, and neurobiology. Computer science was still in its infancy. The first neural networks were modeled as electrical circuits.

There is evidence of this in the 1943 paper by Warren McCulloch and Walter Pitts [1], and a 1956 paper by Rochester et al. [2].

The latter paper uses terms such as 'circuits' and 'switching'. One idea in the paper is explained in terms of an "Eccles-Jordan flip-flop circuit", although there are no drawings. Nathaniel Rochester had designed the IBM 701 [3] and "led the first effort to simulate a neural network" [4].

Brain structure was discussed in terms of 'neural circuits' as early as 1937 [5].

I am not sure when the first electrical circuit diagram appeared in publication, but it makes sense that early neural network designers would have thought of their implementation as such.

References:

",5763,,,,,4/30/2021 15:58,,,,0,,,,CC BY-SA 4.0 27582,2,,15743,4/30/2021 18:17,,0,,"

As explained here

only if larger function classes contain the smaller ones are we guaranteed that increasing them strictly increases the expressive power of the network. For deep neural networks, if we can train the newly-added layer into an identity function $f(x)=x$, the new model will be as effective as the original model. As the new model may get a better solution to fit the training dataset, the added layer might make it easier to reduce training errors.

",46686,,2444,,5/4/2021 10:12,5/4/2021 10:12,,,,0,,,,CC BY-SA 4.0 27583,1,,,4/30/2021 18:37,,1,23,"

I have been studying HMM implementation approaches on ASR for the last couple of weeks. This probabilistic model is very new to me. I am currently using a Python package called Pomegranate to implement an ASR model of my own for the Librispeech dataset. I plan to use my own small-size dataset once I feel comfortable assessing the results of this one with Librispeech.

My problem: Librispeech (and my own dataset) has word-level transcription labels for the audio. The audio files contain several word utterances each. Upon generating the MFCCs, I am not sure how to initialize the HMM matrices, since the MFCCs, to my knowledge, capture phoneme-level context in 10ms windows, whereas I am trying to use the current word-level labels. There is no mapping of which word each MFCC window belongs to. Are the unique words in my corpus to be considered as the individual states in the transition probability matrix? I'm missing how the extracted MFCCs are fed to the model for initialization and/or training.

I’ve been stumped on this for several days and I can’t seem to understand a clear cut explanation in the literature I have read. Any advice and help is very very much appreciated.

",46687,,,,,4/30/2021 18:37,Looking for help on initializing continuous HMM model for word level ASR,,0,1,,,,CC BY-SA 4.0 27584,1,27611,,4/30/2021 20:55,,2,451,"

Imagine I have images with apples in them. I want to train a neural network which can count the number of apples in each image.

BUT, I don't want to use a detector, then count the number of bounding boxes. I'm interested in knowing if there's a way to bake this sort of logic into a differentiable neural network.

The simplest variation of this problem might be: I have an input vector $x \in \{0, 1\}^N$ and I want to count the number of 1s in it. I can make a single layer neural network, setting all the weights to 1, bias to 0, and linear activation. And my answer pops out. But how would I train the network to do this from scratch? Sure, I could regress to an output in $[0,1]$ and multiply the result by $N$, then the network is differentiable. But did the model really learn how to count? If so, would this behaviour be generalisable to counting multiple types of objects at once? Would it generalise to inputs where there can be any number of said object (like an image can have many apples in it, despite the size of the image)?

I want to know if there's a model which can learn how to count.

Here's another way I'm thinking about it: I can look at an aerial view of pine trees and say "yeah maybe there are 30 trees", but then I can do the task of looking at and identifying each tree individually, incrementing a counter, and making sure not to revisit that tree. The latter is what I consider truly "counting".

",16871,,16871,,5/1/2021 17:23,7/26/2022 18:39,How can we get a differentiable neural network to count things?,,1,7,,,,CC BY-SA 4.0 27585,1,27589,,5/1/2021 5:19,,1,396,"

Let $M$ be an MDP with two states, $A$ and $B$, where $A$ is the starting state, and you always transit to the final state $B$ using two possible actions. $A_1$ gives you rewards that are normally distributed $\mathcal{N}(0, 1)$ and $A_2$ gives you rewards that are normally distributed $\mathcal{N}(0, 3)$.

How many optimal policies do we have? What is the optimal value of the states? Is any policy preferred over the other? Why? If you prefer one over the other, is there some way to detect it using what we have studied?

In my view, there are infinite policies that give the same expected reward.

Let $\pi_\alpha$ be a stochastic policy that maps as follows: $\pi_\alpha(s, A_1) = \alpha$ and $\pi_\alpha(s, A_2) = 1 - \alpha$ for $\alpha \in [0,1]$. Clearly, each $\alpha$ gives a different policy, so we get infinitely many policies, all with the same expected return.

But, according to some google searches (for example, here it says optimal policies are generally deterministic), I found optimal policies are always deterministic. Hence this implies there are only 2 policies, i.e., either take action $A_1$ or $A_2$ but not probabilistic.

So, my doubt is: what are the optimal policies here? Are they deterministic (only 2 policies) or stochastic (infinitely many)? Or is it just an assumption that optimal policies are deterministic?

",44685,,2444,,1/7/2022 18:42,1/7/2022 18:42,"Are optimal policies always deterministic, or can there also be optimal policies that are stochastic?",,1,1,,,,CC BY-SA 4.0 27586,2,,26235,5/1/2021 7:06,,0,,"

As "initial" word embeddings (those without any positional or context information for each word or sub word) are used from the very beginning It seems to me that someone has to provide a trained embedding for each word at the very beginning.

",46540,,46540,,5/1/2021 12:43,5/1/2021 12:43,,,,0,,,,CC BY-SA 4.0 27587,1,,,5/1/2021 7:34,,0,47,"

I want to train a model in Python on images of a metal product. My aim is to detect defects, i.e. to notice whether a product is a failure.

What kind of architecture do you suggest? Should I train over the (single) class, or should I use an autoencoder?

",46542,,,,,10/5/2022 1:53,How to train a model for 1 image class to detect anomaly?,,1,1,,,,CC BY-SA 4.0 27589,2,,27585,5/1/2021 8:39,,5,,"

I think the result you are referring to is the one that says that there always exists a deterministic optimal policy for an MDP. This is true. But note that this does not imply that a stochastic optimal policy can not exist at the same time.

Suppose you have an MDP with one state and two actions $a_1$ and $a_2$, both yielding the reward 0 in expectation (as in your example). Then consider a policy that takes action $a_1$ with probability $\alpha \in [0,1]$ and $a_2$ with probability $1-\alpha$. Either of the two deterministic policies with $\alpha=0$ or $\alpha=1$ are optimal, but so is any stochastic policy with $\alpha \in (0,1)$. All of these policies yield the expected return of 0.

This is all assuming your optimality criterion is the expected cumulative (discounted) reward. If you have a different optimality criterion, such as something that accounts for risk, you might distinguish between rewards that have the same expected value but a different variance - but I think that is beyond the scope of the question.

",45529,,,,,5/1/2021 8:39,,,,0,,,,CC BY-SA 4.0 27590,2,,27587,5/1/2021 9:38,,1,,"

It sounds like you only have "normal" examples with which to train your model, so this makes the problem feel like an application for outlier detection algorithms. There are a variety of approaches here. You could indeed take an autoencoder approach and then use the reconstruction error to determine if a new image is normal or not, on the presumption that normal images will have lower error. You could also take the activations from the bottleneck layer and build an explicit outlier model using an algorithm like isolation forest. And you are not limited to just autoencoders if you take the latter approach -- other models pretrained for other tasks like image net classification could also provide good features for outlier detection.

",7760,,,,,5/1/2021 9:38,,,,0,,,,CC BY-SA 4.0 27591,2,,27562,5/1/2021 12:25,,-1,,"

The alpha-beta pruning algorithm is used to improve performance by rejecting options that cannot affect the final decision - it is also possible to use some probability factors to decide whether a branch is taken into consideration or not.

The algorithm works on a tree structure, and if there are a lot of levels (10-20) it allows you to eliminate paths that will logically never be used, saving memory and computing resources.

In this particular case, for finding the minimum value, it works like this:

First branch:

  • Go to B
  • Go to D - there are 2 and 3, so return the min, 2
  • Go from B to E - and choose 5 - the minimal value at B is actually 3 - so there is no need to check the next nodes, because everything below E will be higher than D (3)

Second branch:

  • Go to C
  • Go to F - and check 0 and 1
  • If C has 1, then it is not necessary to go to G, because 1 is the smaller value.

The given sequence is a simplification; in the case of this algorithm there are also alternating layers - min and max.

Source: https://www.javatpoint.com/ai-alpha-beta-pruning#:~:text=Key%20points%20about%20alpha%2Dbeta,values%20to%20the%20child%20nodes.


Implementation, however, is a more complex matter - unless we do copy and paste :-)

https://gist.github.com/exallium/1446104/5109388cfc21578f555dcac0ba54da680326af7b
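For what it's worth, a minimal plain alpha-beta sketch (min/max nodes only, without the chance nodes of expectiminimax) could look like this:

import math

def alphabeta(node, alpha, beta, maximizing):
    # node is either a number (leaf value) or a list of child nodes
    if not isinstance(node, list):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # cut-off: the min player will never allow this branch
                break
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:          # cut-off: the max player already has a better option
            break
    return value

# an illustrative max node over two min nodes
tree = [[2, 3], [5, [0, 1]]]
print(alphabeta(tree, -math.inf, math.inf, True))   # 2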

",32352,,,,,5/1/2021 12:25,,,,3,,,,CC BY-SA 4.0 27593,1,27598,,5/1/2021 14:00,,1,253,"

I have built a custom RL environment with gym, which simulates the RL vehicle and potential vehicles in front of the RL vehicle, as well as traffic lights (including their state: red, yellow, green). I trained a PPO agent from stable_baselines3 without tuning any hyperparameters, and the agent learned to follow the vehicle in front of it without crashing. However, it does not learn to stop at red lights, even after extensive training.

I tried training it without surrounding vehicles, to get more interactions of the RL vehicle with traffic lights, and this helped the agent learn to stop at red lights. However, when I then continue training the agent in a new environment with surrounding traffic, the agent un-learns stopping at red lights again.

I am still a novice with RL and do not understand why this happens or what I can do here. Should I tune hyperparameters? Or try a different model? Or should I exchange the default policy of the PPO model?

",45843,,,,,5/2/2021 12:12,PPO agent for vehicle control does not learn to stop at traffic lights,,1,0,,,,CC BY-SA 4.0 27594,2,,14101,5/1/2021 14:11,,1,,"

Based on this repository:

https://github.com/arkm97/svm-from-scratch/blob/master/SVM_from_scratch.ipynb

I will try to reverse-engineer that concept:

So, firstly, there is the data cleaning step (removing zero values, serializing, normalizing):

Normalizing the dataset (each value is replaced by its difference from the mean, divided by the standard deviation):

credit_df_norm = (credit_df - credit_df.mean())/(credit_df.std())

Divide dataset into Train & Test:

train_df = credit_df_norm.drop(target, axis=1).loc[:training_points]
train_target = credit_df_norm[target].replace(0, -1).loc[:training_points]

And here is where the core of the algorithm starts (with the maths):

given data points $\vec x_j \in \mathbb{R}^{1 \times N}$ and targets $y_j = \pm 1$, where $j = 1, \dots, M$, find the maximum-margin hyperplane that separates the two classes ($y_j = 1$ and $y_j = -1$).

Let $\vec w$ be the vector normal to the hyperplane. We want to find $\vec w$ that satisfies

$$ y_j (\vec w \cdot \vec x_j + b) \geq 1 $$ The dual formulation of the above is equivalent to maximizing the following over the multipliers $\vec \alpha$:

$$ L(\vec \alpha) = \vec y \cdot \vec \alpha - \frac{1}{2} \vec \alpha K \vec \alpha^T$$ subject to the constraints $\sum_{j=1}^M \alpha_j = 0$ and $y_j \alpha_j \geq 0$. The matrix $K$ defines the kernel of the SVM; I've chosen $K_{jk} = k(\vec x_j, \vec x_k) = \vec x_j \cdot \vec x_k$. The parameters of the plane are recovered from $\vec w = \vec \alpha \cdot \vec x$ and $b = y_j - \vec w \cdot \vec x_j$ for $j$ such that $\alpha_j \neq 0$

Let's break it down into the individual parts:

We need to find the maximum-margin hyperplane that separates the data into two classes - SVM is a classifier, so we need to classify elements into categories:

$y_j = 1$

$y_j = -1$

This condition defines the separating hyperplane:

$y_j(\vec w \cdot \vec x_j + b) \geq 1$

The next formula is the dual objective:

$L(\vec \alpha) = \vec y \cdot \vec \alpha - \frac{1}{2} \vec \alpha K \vec \alpha^T$

For the SVM kernel, the author chose:

$K_{jk} = k(\vec x_j, \vec x_k) = \vec x_j \cdot \vec x_k$

And the parameters of the plane are recovered from:

$\vec w = \vec \alpha \cdot \vec x$ and $b = y_j - \vec w \cdot \vec x_j$ for $j$ such that $\alpha_j \neq 0$

Reference: https://arxiv.org/pdf/1307.0471.pdf


In this line are added the kernel

Popular kernels are:

  • Polynomial Kernel,

  • Gaussian Kernel,

  • Radial Basis Function (RBF),

  • Laplace RBF Kernel,

  • Sigmoid Kernel,

  • Anove RBF Kernel

# linear kernel matrix with a small jitter on the diagonal for numerical stability
k_value = np.array(train_df @ train_df.T + np.identity(len(train_target))*1e-12)

The structure of k_value (kernel matrix) is an array.

Next there is the use of a Cholesky decomposition, which is another fairly involved concept:

np.linalg.cholesky(k_value)

Next, set up the optimization variables and the quadratic problem:

import cvxpy as cp   # convex optimization library used for the dual problem

alpha = cp.Variable(shape=train_target.shape)

beta = cp.multiply(alpha, train_target) # to simplify notation

K = cp.Parameter(shape=k_value.shape, PSD=True, value=k_value)

# objective function
obj = .5 * cp.quad_form(beta, K) - np.ones(alpha.shape).T @ alpha

# constraints
const = [np.array(train_target.T) @ alpha == 0,
        -alpha <= np.zeros(alpha.shape),
        alpha <= 10*np.ones(shape=alpha.shape)]
prob = cp.Problem(cp.Minimize(obj), const)
prob.solve()   # solve the quadratic program so that alpha.value becomes available

The next step is to recover the hyperplane parameters:

w = np.multiply(train_target, alpha.value).T @ train_df   # normal vector of the hyperplane
S = (alpha.value > 1e-4).flatten()                         # support vectors: non-zero multipliers
b = train_target[S] - train_df[S] @ w                      # offset computed from the support vectors
b = b[0]
# b = np.mean(b)

And finally - the classification and evaluation:

def classify(x):
    result = w @ x + b
    return np.sign(result)

correct = 0
incorrect = 0
validation_set = credit_df_norm.drop(target, axis=1)
predictions = []
for i, x in validation_set.iterrows():
    my_svm = classify(x)
    if my_svm==credit_df_norm[target].replace(0, -1)[i]: correct +=1
    else: incorrect +=1
    predictions.append(my_svm)
predictions = np.array(predictions)


print(f"fraction correct: {correct/(correct + incorrect)}")

That was the more detailed implementation - for a shortcut there is an interesting article:

https://towardsdatascience.com/svm-and-kernel-svm-fed02bef1200

There are existing functions in other libraries (here scikit-learn) that can be reused in a simpler manner:

# Fitting SVM to the Training set
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', C = 0.1, gamma = 0.1)
classifier.fit(X_train, y_train)

%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# use seaborn plotting defaults
import seaborn as sns; sns.set()

Visualized Dataset:

Dataset after applying the hyperplanes:

",32352,,,,,5/1/2021 14:11,,,,0,,,,CC BY-SA 4.0 27597,1,,,5/1/2021 16:14,,-1,105,"

Maybe it's silly to ask, but, for random exploration in RL when choosing a discrete action, where softmax is used in the last layer of the neural network, what random samples should we provide? Binary one-hot ones like (0, 0, 1, 0, 0, 0), or continuous softmax-like ones such as (0.1, 0.15, 0.45, 0.25, 0.05, 0.1)?

If the answer is continuous, what algorithm do you suggest? For example, generating random numbers between 0 and 1 and then applying softmax? (This approach mostly produces numbers close to each other, and I think it's not the correct way.)

",46577,,46577,,5/1/2021 16:20,11/10/2021 11:03,Exploration for softmax should be binary or continuous softmax?,,1,3,0,,,CC BY-SA 4.0 27598,2,,27593,5/1/2021 17:50,,2,,"

The way I dealt with it was by giving a (very) strong negative reward when the mistake is committed (here, running a red light), and the agent should then learn not to make this mistake anymore.

It is often better to change actions, rewards or environment rather than acting on hyperparameters in my opinion, but I may be mistaken.

",46681,,40434,,5/2/2021 12:12,5/2/2021 12:12,,,,2,,,,CC BY-SA 4.0 27599,2,,20778,5/1/2021 20:01,,0,,"

NLP is one of the tools you may use for stock prediction. Here are a couple of articles to help you get started:

",46540,,40434,,5/10/2021 0:01,5/10/2021 0:01,,,,1,,,,CC BY-SA 4.0 27600,1,,,5/1/2021 20:57,,4,164,"

Let $\mathcal{S}$ be the training data set, where each input $u^i \in \mathcal{S}$ has $d$ features.

I want to design an ANN so that the cost function below is minimized (the sum of the square of pairwise differences between model outputs) and the given constraint is satisfied, where $w$ is the ANN model parameter vector.

\begin{align} \min _{w}& \sum_{\{i, j\} \in \mathcal{S}}\left(f\left(w, u^{i}\right)-f\left(w, u^{j}\right)\right)^{2} \\ &f\left(w, u^{i}\right) \geq q_{\min }, \quad i \in \mathcal{S} \end{align}

What kind of ANN is suitable for this purpose?

",46703,,2444,,12/18/2021 9:54,1/12/2023 12:05,Which neural network can I use to solve this constrained optimisation problem?,,1,3,,,,CC BY-SA 4.0 27603,1,,,5/2/2021 0:31,,2,303,"

During image preprocessing pipeline, should one rescale each pixel value to [0, 1] by dividing 255 first, and then perform data transformation such as color distortion, gaussian blur? or vice versa?

I believe for correctness, it may depend on the particular image transformation algorithm, or if you use some libraries. Is there any general advice? If anyone has experience trying both before or after, please share, particularly if you use external library or framework that may make implicit assumption to the value range of the pixel.

",46707,,,,,5/2/2021 20:12,Should one rescale (normalize) image before or after data augmentation?,,1,0,,,,CC BY-SA 4.0 27606,2,,27600,5/2/2021 9:10,,0,,"

If I understand your query correctly, you want to create a latent space that groups similar objects. You should then probably look at Siamese networks. However, your loss function will need another term to increase dissimilarity between different labels. Otherwise, as pointed out by Mike NZ, the net would collapse (yes, it is possible). Perhaps this will give some insights.

Note that the above method is not completely unsupervised. There are, in fact, a few claims of unsupervised classification via clustering, although your objective function would look very different. You could go through this paper (called SCAN) for more details.

Hope it helps.

[Edit]

If you want a (lower-dimensional) representation of the objects themselves, browsing through this could help. For a complex problem, linear reductions like PCA, although helpful, probably aren't what you're looking for. Here you can try training autoencoders. Your loss function would still work, along with some regularization term.

",40843,,40843,,5/2/2021 17:26,5/2/2021 17:26,,,,3,,,,CC BY-SA 4.0 27607,2,,27597,5/2/2021 9:53,,0,,"

Firstly, I would suggest you do not use softmax for exploration, because it does not capture the model's uncertainty. Training with softmax and cross-entropy, your model may be very confident, but wrongly so, because of overfitting.

Another reason why you should not use softmax to measure uncertainty is that your estimate of the variance (the optimal estimate of the variance is the average error) may be quite low, which explains the low entropy of your model. However, there may exist regions (holes) where your model is highly uncertain (large error).

Thus, you should try other options for measuring uncertainty.

  1. One way to measure model (or epistemic) uncertainty is to use a Bayesian Network estimating the whole distribution $p(\theta | D)$, where $\theta$ are your parameters and $D$ is the collected dataset of experience. The entropy of $p(\theta | D)$ tells us the model uncertainty. Why a Bayesian Network? Intuitively, if our estimator says all $\theta$s are equally likely given your data, then it means that you have no idea of what the model really is. On the other hand, if it says there is only one $\theta$ that can possibly explain $D$, then you have high confidence in the model.

  2. Another way to measure epistemic uncertainty is to use bootstrap ensembling. Instead of estimating the uncertainty of every single parameter in the Bayesian Network, learn $N$ different networks! Use bootstrapping to generate $N$ datasets (of the same size as $D$) by resampling with replacement. Then train each network on its corresponding generated dataset. At test time, you can either sample a network uniformly ($p(\theta | D) = \frac{1}{N}\sum_i^N{\delta({\theta}_i)}$) or just average all the networks' predictions (see the sketch below).
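A minimal sketch of the bootstrapping step (resampling $D$ with replacement, one dataset per network; the networks themselves and the disagreement measure are left out):

import numpy as np

def bootstrap_datasets(dataset, n_models, seed=0):
    # one resampled-with-replacement copy of the dataset per ensemble member
    rng = np.random.default_rng(seed)
    size = len(dataset)
    return [[dataset[i] for i in rng.integers(0, size, size)] for _ in range(n_models)]

# illustrative usage: train one network per dataset, then use the spread of their
# predictions as a measure of epistemic uncertainty
data = list(range(10))
for d in bootstrap_datasets(data, n_models=3):
    print(d)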

For exploration (in deep RL), there are a lot of different options. I recommend you check information gain, Thompson sampling and pseudo-counts. However, if you are using the standard deep RL agents, like DQN etc., you could begin with the standard ε-greedy behavior policy, keeping in mind that this is not an optimal exploration policy in most cases.

",36055,,,,,5/2/2021 9:53,,,,0,,,,CC BY-SA 4.0 27611,2,,27584,5/2/2021 14:37,,5,,"

Estimating from an observation is a function, but "really counting" is a process. Feed-forward neural networks can learn arbitrary functions from training examples, but they cannot represent (and therefore cannot learn) processes. They can attempt to estimate the results of completing a process as a function, but that is not the same thing as actually performing the process.

To learn arbitrary processes from examples requires a model with some concept of state and evolution over time in addition to any functions that are required. Recurrent neural networks (RNNs) are suitable models for those kinds of learning problems, but so are other AI learning constructs.

This rabbit hole goes deep, and for any model you could build it is possible to ask:

  • Is the system really performing a counting task similar to how a human might attempt the same task?

  • Has the system really learned to count, or has it been constructed by the developer so that some parts of the process are inevitable?

  • Has the system learned to count in general within any sub-component, or are the things that it can count tightly coupled across components?

There are other questions you could ask too, depending on your goals for producing such a model.

It is worth noting that for small quantities and certain patterns, that human "counting" does more closely resemble the estimation process, or perhaps is more akin to an NLP problem. For example, consider how you "count" the pips on a die - although you can use a counting process to confirm what you see, typically you do not do so. Instead, some part of your brain is supplying a very accurate answer quickly and unconsciously:

More generally, counting in humans likely consists of multiple related strategies for converting observations into symbolic and/or sensory representations of quantity. Some may be instinct, some are learned conscious behaviours and some appear to be in-between, perhaps originally learned but turned into subconscious skill through repetition.

Ignoring whether or not you are implementing your AI project for counting items in an image wholly within a neural network for now, I think you need the following components:

  • A state that represents and tracks the start, progress so far and end of the counting process with respect to the input being processed.

  • An accumulator that represents quantity counted so far.

  • A detector that can trigger a counting event when it observes something that needs to be counted.

  • A strategy or planner for processing input so that detection events are separated and only triggered once for each valid event.

The last component can vary a lot, and humans will use a range of different strategies for counting depending on the difficulty of the task. For instance, you might use working memory (a limited resource in humans) to track a few key points in an image to help segment an image into smaller sections as you work across it. Or you might make visual marks on an image to track each object that had already been counted. For a human, counting strategies can be mixed and matched, and switched between sometimes even during the same counting task.

All of these components could be represented in learnable parts of an AI process, but they do not have to be. When you suggest in your question that

I don't want to use a detector, then count the number of bounding boxes.

then you are most likely saying that you don't want to use a fixed hard-coded strategy. You want to somehow create a neural network that can discover at least one strategy for processing the input.

The trouble you will face is that giving a large RNN model (e.g. LSTM) a dataset of examples with input images and output correct counts will likely be too much of a challenge. Discovery of a robust object counting system from scratch will be too hard.

There are a few things that may help you construct something that "really" learns to count. Here are a couple of ideas:

  • Curriculum learning, where you start training the neural network with some hand-holding examples, and slowly ramp up the complexity. This is analogous to how we teach children to count.

  • Designed-in modelling for processing strategy. For instance, you could add a virtual fovea and artificial visual saccades and require that the neural network output a sequence of locations within the image that it picks out to run the detector against. This will be a constraint that allows certain types of human-like counting strategy to work, and could simplify the problem to the point that the network has a chance to learn it.

A paper that uses a "fovea" model for object detection: Object detection through search with a foveated visual system

",1847,,1847,,5/2/2021 17:10,5/2/2021 17:10,,,,3,,,,CC BY-SA 4.0 27612,1,,,5/2/2021 16:32,,1,115,"

I wonder whether creating a dataset only by augmenting base images is a bad practice.

I mean the situation where you have to train a net to predict really simple patterns, for example printed-like digits, and all digits from a specific group look basically the same, for example all ones look the same and so on. The only difference is the rotation/translation etc. in the image.

Is it a bad way to create a dataset by taking a digit image and randomly rotating, translating and maybe eroding/dilating it?

My intuition tells me that something's wrong with that approach, but I cannot find any reason why it should be wrong.

",46718,,2444,,5/3/2021 11:24,1/28/2022 19:00,Is creating dataset only by augmentation a bad practice?,,2,0,,,,CC BY-SA 4.0 27613,2,,12114,5/2/2021 16:38,,0,,"

Today I learned about the Cybersyn project, developed in 1973 in Chile. It went in the "right" direction, but could not achieve its goal because of the coup d'état.

",25362,,,,,5/2/2021 16:38,,,,0,,,,CC BY-SA 4.0 27614,1,27632,,5/2/2021 16:46,,2,111,"

Let's say that I have a neural network with 2 heads. The first consists of X neurons. The second consists of Y neurons. I have these 2 heads because I want to predict 2 different variables. And I can see the loss for each head during training. Now, let's say that I have only one head that consists of X+Y neurons. I can interpret the output because I know that the first X neurons describe some variable and the latter Y neurons describe the second variable. I want to know if there is any difference between these 2 methods (maybe in performance or something). What are the pros and cons? Are there any advantages of one method over another for some particular tasks?

",,user40943,,,,5/3/2021 20:24,What is the difference between multi-head and normal output?,,1,0,,,,CC BY-SA 4.0 27616,2,,26957,5/2/2021 18:12,,2,,"

Thanks for asking the question. I'm the author of the paper.

The key point is that the weights $w$ cannot be updated directly with the new data, as $w$ is not directly related to the output $y$ (see equation (1)). It is first necessary to update $a$, which is directly related to $y$ via the activation function, i.e., $y = f(a)$ with $f(.)$ being the activation function (see equation (2)). Hence, you will find the Bayes rule in equation (16) with all the quantities you are missing (likelihood, prior, etc.). This equation is used to calculate the posterior $p(a|D_i)$. This posterior is then used to update the weights by plugging $p(a|D_i)$ into equation (14). The solution of equation (14), i.e., the posterior mean and covariance matrix of $w$, is provided by means of equation (22). Algorithm 2 provides a summary of all necessary calculations.

I hope this answers your question. BTW: we have extended this algorithm to a full neural network. The scientific paper describing the whole procedure is currently under review.

",46719,,,,,5/2/2021 18:12,,,,0,,,,CC BY-SA 4.0 27617,2,,27603,5/2/2021 20:12,,1,,"

So far, it seems this is more a software "integration" issue. One great tip from http://karpathy.github.io/2019/04/25/recipe/ is to visualize everything as often as you can during development. For data augmentation, try to visualize the image right before it enters your convnet.

What I found is that a bug can happen if your particular image transform library tries to detect whether you have rescaled or not by just checking the data type, i.e. if it sees a float it will assume [0, 1], and if it sees an integer it will assume [0, 255].

This can end badly if you resize (without rescaling at the same time) using tf.image.resize(...). Resizing an image (e.g. to 224x224x3) has the "side effect" of converting the pixel values to floats, while still having an (approximate) range of 0.0-255.0. The result is that a downstream library will see a float, make the wrong assumption of a [0, 1] range, and totally mess up the transform. If you use any color-related transforms, be careful: they usually depend on an assumption about the pixel value range.

Because of this, if you use tf.image, you may find it safer to resize and rescale at the same time, ensuring that float implies [0, 1], so that your downstream color augmentation library will most likely work. Otherwise you have to manage the type carefully and use tf.cast(...) wherever necessary to satisfy each library's assumptions.
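
As a minimal sketch of that idea (the shapes here are just placeholders): converting the dtype before resizing keeps everything as floats in [0, 1] for any downstream augmentation.

import tensorflow as tf

def preprocess(image_uint8):
    # uint8 [0, 255] -> float32 [0, 1]; do this *before* any color augmentation
    image = tf.image.convert_image_dtype(image_uint8, tf.float32)
    # resize returns floats, and the [0, 1] range is preserved
    image = tf.image.resize(image, [224, 224])
    return image

example = tf.zeros([300, 400, 3], dtype=tf.uint8)   # dummy input, just to show shapes
print(preprocess(example).shape, preprocess(example).dtype)   # (224, 224, 3) float32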

In the end, a human eye will still have to verify the results.

",46707,,,,,5/2/2021 20:12,,,,0,,,,CC BY-SA 4.0 27618,2,,27612,5/2/2021 23:07,,1,,"

Data augmentation usually means rotating, cropping, and translating images, and this makes sense only if your network could actually meet these kinds of images.
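
For concreteness, a rough sketch of such an augmentation pipeline in Keras is shown below; the specific ranges are assumptions rather than recommendations, and in older TF versions these layers live under tf.keras.layers.experimental.preprocessing.

import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.03),           # small rotations only (roughly +/- 10 degrees)
    tf.keras.layers.RandomTranslation(0.1, 0.1),    # shift by up to 10% of height/width
])

images = tf.random.uniform((8, 40, 40, 1))          # dummy batch just to show usage
augmented = augment(images, training=True)          # training=True enables the random transforms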

If I take a digit-recognition network like LeNet, it is pointless to complicate its task by forcing it to learn rotated digits, which could require a more complex architecture and training and reduce accuracy on the actual task. Another example I can think of is human pose recognition (the OpenPose project). Since humans usually stand with their feet at the bottom of the image and their head at the top, the OpenPose project didn't use rotations to augment its dataset.

So I would say data augmentation is a great tool (especially when we lack data), but I would only use it when the augmented data could actually be encountered when performing the task. If the digits are always upright and placed in the middle of the image, it doesn't make much sense to augment the dataset with translations and rotations unless we really lack data.

In your example, it does make sense to me to create the dataset using data augmentation.

",46681,,46681,,5/2/2021 23:13,5/2/2021 23:13,,,,0,,,,CC BY-SA 4.0 27621,1,,,5/3/2021 3:59,,3,249,"

I have noticed that DDPG does rather well at solving environments with a static target.

For example, in the default Lunar Lander the flags do not change position, so the DDPG model learns how to get to the center of the screen and land fairly quickly.

As soon as I start moving the landing position around randomly and adding the landing position as an input to the model, the model has an extremely hard time putting this connection together.

A few questions/points about adding this complexity to the environment:

  1. Would more nodes per layer or more layers help in figuring out this connection? I have tested this, but it seems the bigger I go, the harder it is to learn anything.
  2. Is it a common issue in RL that the agent has a hard time making this kind of connection in the data?
  3. I realize that I could change the environment to always have a static target and instead change the position of the lunar lander ship, which in effect accomplishes the same thing, but I want to know whether it can be solved with a moving target.
  4. Is there any good documentation on analyzing Actor/Critic models? I have some results where my critic target is falling while my critic loss is going down nicely. At the same time, my actor target goes up and up and eventually plateaus. It is hard to really understand what is happening, and it would be great to understand actor loss vs. critic loss vs. critic target.

Essentially, I pick a random chunk index (the left side of the flags), add 1 to get the x position of the middle of the landing pad, and add 1 more so that the flags span 3 of the 11 chunks.

rand_chunk = random.randint(0, CHUNKS-3)
self.x_pos_middle_landing = chunk_x[rand_chunk + 1]
self.helipad_x1 = chunk_x[rand_chunk]
self.helipad_x2 = chunk_x[rand_chunk + 2]
height[rand_chunk] = self.helipad_y
height[rand_chunk + 1] = self.helipad_y
height[rand_chunk + 2] = self.helipad_y

Old State:

state = [
        (pos.x - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2),
        (pos.y - (self.helipad_y+LEG_DOWN/SCALE)) / (VIEWPORT_H/SCALE/2),
        vel.x*(VIEWPORT_W/SCALE/2)/FPS,
        vel.y*(VIEWPORT_H/SCALE/2)/FPS,
        self.lander.angle,
        20.0*self.lander.angularVelocity/FPS,
        1.0 if self.legs[0].ground_contact else 0.0,
        1.0 if self.legs[1].ground_contact else 0.0
        ]

Added to the state (the same formula as state[0], but using the middle of the 3 landing chunks):

(self.x_pos_middle_landing - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2)

And the observation space is updated from 8 dimensions to 9:

self.observation_space = spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.float32)

The rewards also need to be updated. Old rewards:

shaping = \
        - 100*np.sqrt(state[0]*state[0] + state[1]*state[1]) \
        - 100*np.sqrt(state[2]*state[2] + state[3]*state[3]) \
        - 100*abs(state[4]) + 10*state[6] + 10*state[7]  # And ten points for legs contact, the idea is if you

New Rewards:

shaping = \
        - 100*np.sqrt((state[0]-state[8])*(state[0]-state[8]) + state[1]*state[1]) \
        - 100*np.sqrt(state[2]*state[2] + state[3]*state[3]) \
        - 100*abs(state[4]) + 10*state[6] + 10*state[7]  # And ten points for legs contact, the idea is if you

DDPG Model

import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import gym
from tensorflow.keras.models import load_model
import os
import envs
import time
import scipy.stats as stats
# from stable_baselines.common.policies import MlpPolicy, MlpLstmPolicy
from stable_baselines.common.vec_env import SubprocVecEnv, DummyVecEnv, VecCheckNan
from stable_baselines.common import set_global_seeds, make_vec_env
from stable_baselines.ddpg import DDPG
from stable_baselines.ddpg.policies import MlpPolicy
# from stable_baselines.sac.policies import MlpPolicy
from stable_baselines import PPO2, SAC
from stable_baselines.common.noise import NormalActionNoise, OrnsteinUhlenbeckActionNoise, AdaptiveParamNoiseSpec

if __name__ == '__main__':
    num_cpu = 1  # Number of processes to use
    env = SubprocVecEnv([make_env(env_id, i) for i in range(num_cpu)])
    env = VecCheckNan(env, raise_exception=True, check_inf=True)
    n_actions = env.action_space.shape[-1]

    #### DDPG
    policy_kwargs = dict(act_fun=tf.nn.sigmoid, layers=[512, 512, 512], layer_norm=False)
    param_noise = None
    # action_noise = OrnsteinUhlenbeckActionNoise(mean=np.zeros(n_actions), sigma=float(0.5) * np.ones(n_actions))
    action_noise = NormalActionNoise(0, 0.1)
    model = DDPG(MlpPolicy, env, policy_kwargs=policy_kwargs, param_noise=param_noise,
                 action_noise=action_noise, verbose=1)

    # # Train Model
    model.learn(total_timesteps=int(3e5))
    model.save('./models/lunar_lander')

Full Code:

    """
Rocket trajectory optimization is a classic topic in Optimal Control.

According to Pontryagin's maximum principle it's optimal to fire engine full throttle or
turn it off. That's the reason this environment is OK to have discrete actions (engine on or off).

The landing pad is always at coordinates (0,0). The coordinates are the first two numbers in the state vector.
Reward for moving from the top of the screen to the landing pad and zero speed is about 100..140 points.
If the lander moves away from the landing pad it loses reward. The episode finishes if the lander crashes or
comes to rest, receiving an additional -100 or +100 points. Each leg with ground contact is +10 points.
Firing the main engine is -0.3 points each frame. Firing the side engine is -0.03 points each frame.
Solved is 200 points.

Landing outside the landing pad is possible. Fuel is infinite, so an agent can learn to fly and then land
on its first attempt. Please see the source code for details.

To see a heuristic landing, run:

python gym/envs/box2d/lunar_lander.py

To play yourself, run:

python examples/agents/keyboard_agent.py LunarLander-v2

Created by Oleg Klimov. Licensed on the same terms as the rest of OpenAI Gym.
"""


import sys, math
import numpy as np
import random

import Box2D
from Box2D.b2 import (edgeShape, circleShape, fixtureDef, polygonShape, revoluteJointDef, contactListener)

import gym
from gym import spaces
from gym.utils import seeding, EzPickle

FPS = 50
SCALE = 30.0   # affects how fast-paced the game is, forces should be adjusted as well

MAIN_ENGINE_POWER = 13.0
SIDE_ENGINE_POWER = 0.6

INITIAL_RANDOM = 1000.0   # Set 1500 to make game harder

LANDER_POLY =[
    (-14, +17), (-17, 0), (-17 ,-10),
    (+17, -10), (+17, 0), (+14, +17)
    ]
LEG_AWAY = 20
LEG_DOWN = 18
LEG_W, LEG_H = 2, 8
LEG_SPRING_TORQUE = 40

SIDE_ENGINE_HEIGHT = 14.0
SIDE_ENGINE_AWAY = 12.0

VIEWPORT_W = 600
VIEWPORT_H = 400


class ContactDetector(contactListener):
    def __init__(self, env):
        contactListener.__init__(self)
        self.env = env

    def BeginContact(self, contact):
        if self.env.lander == contact.fixtureA.body or self.env.lander == contact.fixtureB.body:
            self.env.game_over = True
        for i in range(2):
            if self.env.legs[i] in [contact.fixtureA.body, contact.fixtureB.body]:
                self.env.legs[i].ground_contact = True

    def EndContact(self, contact):
        for i in range(2):
            if self.env.legs[i] in [contact.fixtureA.body, contact.fixtureB.body]:
                self.env.legs[i].ground_contact = False


class LunarLander(gym.Env, EzPickle):
    metadata = {
        'render.modes': ['human', 'rgb_array'],
        'video.frames_per_second' : FPS
    }

    continuous = False

    def __init__(self):
        EzPickle.__init__(self)
        self.seed()
        self.viewer = None

        self.world = Box2D.b2World()
        self.moon = None
        self.lander = None
        self.particles = []

        self.prev_reward = None

        # useful range is -1 .. +1, but spikes can be higher
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(9,), dtype=np.float32)

        if self.continuous:
            # Action is two floats [main engine, left-right engines].
            # Main engine: -1..0 off, 0..+1 throttle from 50% to 100% power. Engine can't work with less than 50% power.
            # Left-right:  -1.0..-0.5 fire left engine, +0.5..+1.0 fire right engine, -0.5..0.5 off
            self.action_space = spaces.Box(-1, +1, (2,), dtype=np.float32)
        else:
            # Nop, fire left engine, main engine, right engine
            self.action_space = spaces.Discrete(4)

        self.reset()

    def seed(self, seed=None):
        self.np_random, seed = seeding.np_random(seed)
        return [seed]

    def _destroy(self):
        if not self.moon: return
        self.world.contactListener = None
        self._clean_particles(True)
        self.world.DestroyBody(self.moon)
        self.moon = None
        self.world.DestroyBody(self.lander)
        self.lander = None
        self.world.DestroyBody(self.legs[0])
        self.world.DestroyBody(self.legs[1])

    def reset(self):
        self._destroy()
        self.world.contactListener_keepref = ContactDetector(self)
        self.world.contactListener = self.world.contactListener_keepref
        self.game_over = False
        self.prev_shaping = None

        W = VIEWPORT_W/SCALE
        H = VIEWPORT_H/SCALE

        # terrain
        CHUNKS = 11
        height = self.np_random.uniform(0, H/2, size=(CHUNKS+1,))
        chunk_x = [W/(CHUNKS-1)*i for i in range(CHUNKS)]
        rand_chunk = random.randint(0, CHUNKS-3)
        self.x_pos_middle_landing = chunk_x[rand_chunk + 1]
        self.helipad_x1 = chunk_x[rand_chunk]
        self.helipad_x2 = chunk_x[rand_chunk + 2]
        self.helipad_y = H/4
        height[rand_chunk] = self.helipad_y
        height[rand_chunk + 1] = self.helipad_y
        height[rand_chunk + 2] = self.helipad_y

        self.moon = self.world.CreateStaticBody(shapes=edgeShape(vertices=[(0, 0), (W, 0)]))
        self.sky_polys = []
        for i in range(CHUNKS-1):
            p1 = (chunk_x[i], height[i])
            p2 = (chunk_x[i+1], height[i+1])
            self.moon.CreateEdgeFixture(
                vertices=[p1,p2],
                density=0,
                friction=0.1)
            self.sky_polys.append([p1, p2, (p2[0], H), (p1[0], H)])

        self.moon.color1 = (0.0, 0.0, 0.0)
        self.moon.color2 = (0.0, 0.0, 0.0)

        initial_y = VIEWPORT_H/SCALE
        self.lander = self.world.CreateDynamicBody(
            position=(VIEWPORT_W/SCALE/2, initial_y),
            angle=0.0,
            fixtures = fixtureDef(
                shape=polygonShape(vertices=[(x/SCALE, y/SCALE) for x, y in LANDER_POLY]),
                density=5.0,
                friction=0.1,
                categoryBits=0x0010,
                maskBits=0x001,   # collide only with ground
                restitution=0.0)  # 0.99 bouncy
                )
        self.lander.color1 = (0.5, 0.4, 0.9)
        self.lander.color2 = (0.3, 0.3, 0.5)
        self.lander.ApplyForceToCenter( (
            self.np_random.uniform(-INITIAL_RANDOM, INITIAL_RANDOM),
            self.np_random.uniform(-INITIAL_RANDOM, INITIAL_RANDOM)
            ), True)

        self.legs = []
        for i in [-1, +1]:
            leg = self.world.CreateDynamicBody(
                position=(VIEWPORT_W/SCALE/2 - i*LEG_AWAY/SCALE, initial_y),
                angle=(i * 0.05),
                fixtures=fixtureDef(
                    shape=polygonShape(box=(LEG_W/SCALE, LEG_H/SCALE)),
                    density=1.0,
                    restitution=0.0,
                    categoryBits=0x0020,
                    maskBits=0x001)
                )
            leg.ground_contact = False
            leg.color1 = (0.5, 0.4, 0.9)
            leg.color2 = (0.3, 0.3, 0.5)
            rjd = revoluteJointDef(
                bodyA=self.lander,
                bodyB=leg,
                localAnchorA=(0, 0),
                localAnchorB=(i * LEG_AWAY/SCALE, LEG_DOWN/SCALE),
                enableMotor=True,
                enableLimit=True,
                maxMotorTorque=LEG_SPRING_TORQUE,
                motorSpeed=+0.3 * i  # low enough not to jump back into the sky
                )
            if i == -1:
                rjd.lowerAngle = +0.9 - 0.5  # The most esoteric numbers here, angled legs have freedom to travel within
                rjd.upperAngle = +0.9
            else:
                rjd.lowerAngle = -0.9
                rjd.upperAngle = -0.9 + 0.5
            leg.joint = self.world.CreateJoint(rjd)
            self.legs.append(leg)

        self.drawlist = [self.lander] + self.legs

        return self.step(np.array([0, 0]) if self.continuous else 0)[0]

    def _create_particle(self, mass, x, y, ttl):
        p = self.world.CreateDynamicBody(
            position = (x, y),
            angle=0.0,
            fixtures = fixtureDef(
                shape=circleShape(radius=2/SCALE, pos=(0, 0)),
                density=mass,
                friction=0.1,
                categoryBits=0x0100,
                maskBits=0x001,  # collide only with ground
                restitution=0.3)
                )
        p.ttl = ttl
        self.particles.append(p)
        self._clean_particles(False)
        return p

    def _clean_particles(self, all):
        while self.particles and (all or self.particles[0].ttl < 0):
            self.world.DestroyBody(self.particles.pop(0))

    def step(self, action):
        if self.continuous:
            action = np.clip(action, -1, +1).astype(np.float32)
        else:
            assert self.action_space.contains(action), "%r (%s) invalid " % (action, type(action))

        # Engines
        tip  = (math.sin(self.lander.angle), math.cos(self.lander.angle))
        side = (-tip[1], tip[0])
        dispersion = [self.np_random.uniform(-1.0, +1.0) / SCALE for _ in range(2)]

        m_power = 0.0
        if (self.continuous and action[0] > 0.0) or (not self.continuous and action == 2):
            # Main engine
            if self.continuous:
                m_power = (np.clip(action[0], 0.0,1.0) + 1.0)*0.5   # 0.5..1.0
                assert m_power >= 0.5 and m_power <= 1.0
            else:
                m_power = 1.0
            ox = (tip[0] * (4/SCALE + 2 * dispersion[0]) +
                side[0] * dispersion[1])  # 4 is move a bit downwards, +-2 for randomness
            oy = -tip[1] * (4/SCALE + 2 * dispersion[0]) - side[1] * dispersion[1]
            impulse_pos = (self.lander.position[0] + ox, self.lander.position[1] + oy)
            p = self._create_particle(3.5,  # 3.5 is here to make particle speed adequate
                                    impulse_pos[0],
                                    impulse_pos[1],
                                    m_power)  # particles are just a decoration
            p.ApplyLinearImpulse((ox * MAIN_ENGINE_POWER * m_power, oy * MAIN_ENGINE_POWER * m_power),
                                impulse_pos,
                                True)
            self.lander.ApplyLinearImpulse((-ox * MAIN_ENGINE_POWER * m_power, -oy * MAIN_ENGINE_POWER * m_power),
                                        impulse_pos,
                                        True)

        s_power = 0.0
        if (self.continuous and np.abs(action[1]) > 0.5) or (not self.continuous and action in [1, 3]):
            # Orientation engines
            if self.continuous:
                direction = np.sign(action[1])
                s_power = np.clip(np.abs(action[1]), 0.5, 1.0)
                assert s_power >= 0.5 and s_power <= 1.0
            else:
                direction = action-2
                s_power = 1.0
            ox = tip[0] * dispersion[0] + side[0] * (3 * dispersion[1] + direction * SIDE_ENGINE_AWAY/SCALE)
            oy = -tip[1] * dispersion[0] - side[1] * (3 * dispersion[1] + direction * SIDE_ENGINE_AWAY/SCALE)
            impulse_pos = (self.lander.position[0] + ox - tip[0] * 17/SCALE,
                        self.lander.position[1] + oy + tip[1] * SIDE_ENGINE_HEIGHT/SCALE)
            p = self._create_particle(0.7, impulse_pos[0], impulse_pos[1], s_power)
            p.ApplyLinearImpulse((ox * SIDE_ENGINE_POWER * s_power, oy * SIDE_ENGINE_POWER * s_power),
                                impulse_pos
                                , True)
            self.lander.ApplyLinearImpulse((-ox * SIDE_ENGINE_POWER * s_power, -oy * SIDE_ENGINE_POWER * s_power),
                                        impulse_pos,
                                        True)

        self.world.Step(1.0/FPS, 6*30, 2*30)

        pos = self.lander.position
        vel = self.lander.linearVelocity
        state = [
            (pos.x - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2),
            (pos.y - (self.helipad_y+LEG_DOWN/SCALE)) / (VIEWPORT_H/SCALE/2),
            vel.x*(VIEWPORT_W/SCALE/2)/FPS,
            vel.y*(VIEWPORT_H/SCALE/2)/FPS,
            self.lander.angle,
            20.0*self.lander.angularVelocity/FPS,
            1.0 if self.legs[0].ground_contact else 0.0,
            1.0 if self.legs[1].ground_contact else 0.0,
            (self.x_pos_middle_landing - VIEWPORT_W/SCALE/2) / (VIEWPORT_W/SCALE/2)
            ]
        assert len(state) == 9

        reward = 0
        shaping = \
            - 100*np.sqrt((state[0]-state[8])*(state[0]-state[8]) + state[1]*state[1]) \
            - 100*np.sqrt(state[2]*state[2] + state[3]*state[3]) \
            - 100*abs(state[4]) + 10*state[6] + 10*state[7]  # And ten points for legs contact, the idea is if you
                                                            # lose contact again after landing, you get negative reward
        if self.prev_shaping is not None:
            reward = shaping - self.prev_shaping
        self.prev_shaping = shaping

        reward -= m_power*0.30  # less fuel spent is better, about -30 for heuristic landing
        reward -= s_power*0.03

        done = False
        if self.game_over or abs(state[0]) >= 1.0:
            done = True
            reward = -100
        if not self.lander.awake:
            done = True
            reward = +100
        return np.array(state, dtype=np.float32), reward, done, {}

    def render(self, mode='human'):
        from gym.envs.classic_control import rendering
        if self.viewer is None:
            self.viewer = rendering.Viewer(VIEWPORT_W, VIEWPORT_H)
            self.viewer.set_bounds(0, VIEWPORT_W/SCALE, 0, VIEWPORT_H/SCALE)

        for obj in self.particles:
            obj.ttl -= 0.15
            obj.color1 = (max(0.2, 0.2+obj.ttl), max(0.2, 0.5*obj.ttl), max(0.2, 0.5*obj.ttl))
            obj.color2 = (max(0.2, 0.2+obj.ttl), max(0.2, 0.5*obj.ttl), max(0.2, 0.5*obj.ttl))

        self._clean_particles(False)

        for p in self.sky_polys:
            self.viewer.draw_polygon(p, color=(0, 0, 0))

        for obj in self.particles + self.drawlist:
            for f in obj.fixtures:
                trans = f.body.transform
                if type(f.shape) is circleShape:
                    t = rendering.Transform(translation=trans*f.shape.pos)
                    self.viewer.draw_circle(f.shape.radius, 20, color=obj.color1).add_attr(t)
                    self.viewer.draw_circle(f.shape.radius, 20, color=obj.color2, filled=False, linewidth=2).add_attr(t)
                else:
                    path = [trans*v for v in f.shape.vertices]
                    self.viewer.draw_polygon(path, color=obj.color1)
                    path.append(path[0])
                    self.viewer.draw_polyline(path, color=obj.color2, linewidth=2)

        for x in [self.helipad_x1, self.helipad_x2]:
            flagy1 = self.helipad_y
            flagy2 = flagy1 + 50/SCALE
            self.viewer.draw_polyline([(x, flagy1), (x, flagy2)], color=(1, 1, 1))
            self.viewer.draw_polygon([(x, flagy2), (x, flagy2-10/SCALE), (x + 25/SCALE, flagy2 - 5/SCALE)],
                                    color=(0.8, 0.8, 0))

        return self.viewer.render(return_rgb_array=mode == 'rgb_array')

    def close(self):
        if self.viewer is not None:
            self.viewer.close()
            self.viewer = None


class RandomTargetLunarLander(LunarLander):
    continuous = True

def heuristic(env, s):
    """
    The heuristic for
    1. Testing
    2. Demonstration rollout.

    Args:
        env: The environment
        s (list): The state. Attributes:
                s[0] is the horizontal coordinate
                s[1] is the vertical coordinate
                s[2] is the horizontal speed
                s[3] is the vertical speed
                s[4] is the angle
                s[5] is the angular speed
                s[6] 1 if first leg has contact, else 0
                s[7] 1 if second leg has contact, else 0
                s[8] is the target coordinate
    returns:
        a: The heuristic to be fed into the step function defined above to determine the next step and reward.
    """

    angle_targ = s[0]*0.5 + s[2]*1.0         # angle should point towards center
    if angle_targ > 0.4: angle_targ = 0.4    # more than 0.4 radians (22 degrees) is bad
    if angle_targ < -0.4: angle_targ = -0.4
    hover_targ = 0.55*np.abs(s[0])           # target y should be proportional to horizontal offset

    angle_todo = (angle_targ - s[4]) * 0.5 - (s[5])*1.0
    hover_todo = (hover_targ - s[1])*0.5 - (s[3])*0.5

    if s[6] or s[7]:  # legs have contact
        angle_todo = 0
        hover_todo = -(s[3])*0.5  # override to reduce fall speed, that's all we need after contact

    if env.continuous:
        a = np.array([hover_todo*20 - 1, -angle_todo*20])
        a = np.clip(a, -1, +1)
    else:
        a = 0
        if hover_todo > np.abs(angle_todo) and hover_todo > 0.05: a = 2
        elif angle_todo < -0.05: a = 3
        elif angle_todo > +0.05: a = 1
    return a

def demo_heuristic_lander(env, seed=None, render=False):
    env.seed(seed)
    total_reward = 0
    steps = 0
    s = env.reset()
    while True:
        a = heuristic(env, s)
        s, r, done, info = env.step(a)
        total_reward += r

        if render:
            still_open = env.render()
            if still_open == False: break

        if steps % 20 == 0 or done:
            print("observations:", " ".join(["{:+0.2f}".format(x) for x in s]))
            print("step {} total_reward {:+0.2f}".format(steps, total_reward))
        steps += 1
        if done: break
    return total_reward


if __name__ == '__main__':
    demo_heuristic_lander(LunarLander(), render=True)

Here are the results: green is the normal Lunar Lander (continuous), pink is the random-target Lunar Lander (continuous).

",36730,,36730,,5/4/2021 0:45,5/4/2021 0:45,How to deal with a moving target in the Lunar Lander environment with DDPG?,,0,3,,,,CC BY-SA 4.0 27622,2,,24749,5/3/2021 6:00,,3,,"

Here is what I discovered empirically, through trial and error. Since tuning the parameters is going to be environment-specific, I'll lay out mine to give a better understanding of what worked for my case. Hopefully someone with a better understanding of the algorithm will weigh in:

Environment: A 2D map where an agent controls a simulated PC mouse pad and must navigate from a random spawn point to a random reward point.

Action Space: Discrete(24) -- This was flattened from the original implementation to allow for use in the Q* algorithms.

  • 0-7: mouse buttons, each with one of two states (pressed or depressed)
public ButtonPressState Left;
public ButtonPressState Middle;
public ButtonPressState Right;
public ButtonPressState Touch;
public ButtonPressState ScrollUp;
public ButtonPressState ScrollDown;
public ButtonPressState IncreaseScroll;
public ButtonPressState DecreaseScroll;
  • 8-15: Angle to move if Touch is in state pressed, incremented by 45°, starting at 0° up to 315°
public enum D8Dir
{
    UP = 0,         //   0°
    UP_RIGHT = 1,   //  45°
    RIGHT = 2,      //  90°
    DOWN_RIGHT = 3, // 135°
    DOWN = 4,       // 180°
    DOWN_LEFT = 5,  // 225°
    LEFT = 6,       // 270°
    UP_LEFT = 7     // 315°
}
  • 16-23: Speed, an exponent of 2^N representing how many cells to move per step.

Distance normalization: Trial and error led me to normalizing the space. This also brings the benefit of allowing the same trained algorithm to be used for different world sizes, e.g. a 2x3 vs. a 3x2 vs. a 3x3, up to large common monitor dimensions. Specifically, distances are normalized between -1 and 1 for the height and the width of the space, so no matter the dimensions, all points map to x and y between -1 and 1. In the case where the height and the width differ, the smaller is scaled to the size of the larger.
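
Roughly, the normalization looks something like this (a simplified sketch, not the exact code):

import numpy as np

def normalize_xy(x, y, width, height):
    # Map both coordinates onto [-1, 1], scaling by the larger dimension so
    # that every world size shares the same input range.
    scale = max(width, height) - 1 if max(width, height) > 1 else 1
    nx = 2.0 * x / scale - 1.0
    ny = 2.0 * y / scale - 1.0
    return np.array([nx, ny], dtype=np.float32)

print(normalize_xy(0, 0, 3, 3))   # [-1., -1.]
print(normalize_xy(2, 2, 3, 3))   # [ 1.,  1.]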

Reward space: After watching agents reward-exploit iteration after iteration of the reward scheme, rewards are now given when an agent moves closer to the target, with "closer" taking into consideration both "bird's eye" and Euclidean distance. At first, the "theoretical" maximum of the total movement rewards summed to 1; rewards were assigned relative to the maximum distance an agent might move in the given 2D space to gain a reward, which works out to be the diagonal of the space. I say "theoretical" because unless the agent randomly spawns exactly in one corner and the reward is placed exactly in the other corner, the actual total reward that can be achieved is lower. So in practice the maximum reward for a given episode is the sum of the normalized distances between the agent and the reward.

To avoid reward exploitation, only 70% of the reward is given if the agent moves closer vertically or horizontally when a diagonal move was available. This encourages the agent to take the shortest path by moving diagonally, for the single sqrt(2) bird's-eye reward. Otherwise an agent might move up, then left, and collect a reward of 2 normalized units instead of the ~1.4142 normalized reward units collected for moving diagonally. The 0.7 factor keeps the reward for a vertical-then-horizontal (or vice versa) move at 1.4, which is less than the ~1.4142 diagonal reward.
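
In sketch form, the movement reward rule is roughly the following (simplified, not the exact code; the function signature is an assumption):

import math

def movement_reward(old_distance, new_distance, axis_move_instead_of_diagonal):
    # Reward normalized progress toward the target, but pay only 70% for an
    # axis-aligned step taken when a diagonal step was available, so two axis
    # moves (2 * 0.7 = 1.4) never beat one diagonal move (~1.4142).
    progress = max(0.0, old_distance - new_distance)
    factor = 0.7 if axis_move_instead_of_diagonal else 1.0
    return factor * progress

print(movement_reward(math.sqrt(2), 0.0, False))   # one diagonal step: ~1.4142
print(2 * movement_reward(1.0, 0.0, True))         # two axis steps: 1.4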

Originally, the agent could collect a total movement reward of up to 1, plus 3 additional reward points for reaching the target. Using the stock TensorFlow C51 DQN agent against this reward scheme, with the default min_q_value=-10 and max_q_value=10 values (I believe they are referred to as VMIN and VMAX in the literature), I found learning achieved the following results on very small world sizes:

[5/2/2021 5:39:19 PM] Training Started
[5/2/2021 5:39:33 PM] Training Environment (1, 2): 5/2/2021 5:39:33 PM
[5/2/2021 5:40:11 PM] Converged (1, 2) in 00:00:38.8244035 - Steps 2565 Episode 948  Average: 3.000000000000002 
[5/2/2021 5:40:11 PM] Training Environment (2, 1): 5/2/2021 5:40:11 PM
[5/2/2021 5:41:01 PM] Converged (2, 1) in 00:00:49.1426355 - Steps 5802 Episode 1242  Average: 3.000000000000002 
[5/2/2021 5:41:01 PM] Training Environment (2, 2): 5/2/2021 5:41:01 PM
[5/2/2021 5:43:42 PM] Converged (2, 2) in 00:02:41.2581146 - Steps 16573 Episode 3248  Average: 3.0551471839999977 
[5/2/2021 5:43:42 PM] Training Environment (2, 3): 5/2/2021 5:43:42 PM
[5/2/2021 5:53:44 PM] Converged (2, 3) in 00:10:01.8163250 - Steps 56732 Episode 11226  Average: 3.0981306360000063 
[5/2/2021 5:53:44 PM] Training Environment (3, 2): 5/2/2021 5:53:44 PM
[5/2/2021 5:59:13 PM] Converged (3, 2) in 00:05:29.5753945 - Steps 78612 Episode 6276  Average: 3.108986006000001 
[5/2/2021 5:59:13 PM] Training Environment (3, 3): 5/2/2021 5:59:13 PM
[5/2/2021 6:22:50 PM] Converged (3, 3) in 00:23:37.2302437 - Steps 173577 Episode 25603  Average: 3.1401708113999938 
[5/2/2021 6:22:50 PM] Training Environment (4, 4): 5/2/2021 6:22:50 PM
[5/2/2021 7:10:04 PM] Converged (4, 4) in 00:47:13.5879491 - Steps 363685 Episode 46441  Average: 3.1596743723999965 
[5/2/2021 7:10:04 PM] Training Environment (5, 5): 5/2/2021 7:10:04 PM
[5/2/2021 7:20:08 PM] Converged (5, 5) in 00:10:04.1529582 - Steps 404256 Episode 8916  Average: 3.182016046800004 
[5/2/2021 7:20:08 PM] Training Environment (6, 6): 5/2/2021 7:20:08 PM
[5/3/2021 12:33:49 AM] Killed 6x6 - No Convergence: Step 1671400: Reward 1.8765 loss = 2.90894175 (00:00:02.9459997) (eps: 2.9459996999999998)

As can be seen, even for a simple 6x6 world, C51 DQN did not converge after over 4 hours of wall time on a GTX-3080, sampling 1,671,400 steps, so I killed it.

So I modified the movement reward so that its total possible sum is 0.5. This was simply a matter of dividing the old reward scheme in half (i.e. multiplying by 0.5).

I then changed the target-reached reward from 3 to 0.5.

The theoretical maximum reward then totaled 1, so I changed to min_q_value=0 and max_q_value=1 to match the sum of the rewards an agent might achieve. This resulted in wall times even larger than before.

In my latest attempt, I used min_q_value=0 and max_q_value=.5 to match the maximum cumulative reward the agent could receive before collecting the 0.5 target-reached reward (which, by the way, is also a terminal state in my environment).
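
For reference, this is roughly where those values plug in when building the agent, following the standard TF-Agents C51 tutorial setup (CartPole here is only a stand-in for my custom environment, and the network/optimizer settings are placeholders):

import tensorflow as tf
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import categorical_q_network
from tf_agents.agents.categorical_dqn import categorical_dqn_agent
from tf_agents.utils import common

train_env = tf_py_environment.TFPyEnvironment(suite_gym.load('CartPole-v0'))

categorical_q_net = categorical_q_network.CategoricalQNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    num_atoms=51,
    fc_layer_params=(100,))

agent = categorical_dqn_agent.CategoricalDqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    categorical_q_network=categorical_q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    min_q_value=0.0,     # lower bound of the return-distribution support
    max_q_value=0.5,     # matched to the maximum cumulative reward of the task
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=tf.Variable(0))
agent.initialize()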

The new rewards, normalized to total 1, run orders of magnitude faster for the larger spaces. The 6x6 world "converged" after 5 minutes 43 seconds, using 88,721 total steps over 4,624 total episodes.

I still find this slow, but clearly having the Q-value range match the range of cumulative rewards is an improvement.

[5/3/2021 12:49:35 AM] Training Environment (1, 2): 5/3/2021 12:49:35 AM
[5/3/2021 12:50:12 AM] Converged (1, 2) in 00:00:36.8244207 - Steps 2424 Episode 981  Average: 0.5000000000000002 
[5/3/2021 12:50:12 AM] Training Environment (2, 1): 5/3/2021 12:50:12 AM
[5/3/2021 12:50:37 AM] Converged (2, 1) in 00:00:24.6645133 - Steps 4075 Episode 700  Average: 0.5000000000000002
[5/3/2021 12:51:52 AM] Converged (2, 2) in 00:01:15.0585713 - Steps 8135 Episode 1419  Average: 0.5314644659999993 
[5/3/2021 12:51:52 AM] Training Environment (2, 3): 5/3/2021 12:51:52 AM
[5/3/2021 12:52:39 AM] Converged (2, 3) in 00:00:47.0126118 - Steps 11266 Episode 961  Average: 0.5455624971999997 
[5/3/2021 12:52:39 AM] Training Environment (3, 2): 5/3/2021 12:52:39 AM
[5/3/2021 12:52:56 AM] Converged (3, 2) in 00:00:17.6139995 - Steps 12447 Episode 395  Average: 0.5650996157999993 
[5/3/2021 12:52:56 AM] Training Environment (3, 3): 5/3/2021 12:52:56 AM
[5/3/2021 12:55:56 AM] Converged (3, 3) in 00:02:59.7056623 - Steps 24472 Episode 3403  Average: 0.5863068707999991 
[5/3/2021 12:55:56 AM] Training Environment (4, 4): 5/3/2021 12:55:56 AM
[5/3/2021 1:00:07 AM] Converged (4, 4) in 00:04:11.2932882 - Steps 41259 Episode 4240  Average: 0.6052226153999957 
[5/3/2021 1:00:07 AM] Training Environment (5, 5): 5/3/2021 1:00:07 AM
[5/3/2021 1:06:11 AM] Converged (5, 5) in 00:06:03.8448782 - Steps 65641 Episode 5502  Average: 0.6209708242799977 
[5/3/2021 1:06:11 AM] Training Environment (6, 6): 5/3/2021 1:06:11 AM
[5/3/2021 1:11:54 AM] Converged (6, 6) in 00:05:43.2033837 - Steps 88721 Episode 4624  Average: 0.6089312343400013 

FYI: the paper states that transitions to a terminal state are handled with $\gamma_t = 0$.


Update:

The 9x9 environment took 294,897 steps, in 37,025 episodes, over 51 minutes and 38 seconds. It may be that max_q_value=1 works better for the larger world sizes, where an agent might collect more movement reward. In any case, these values are nearly quadratically better than the default values from the tutorial. I will experiment with them more.

Additionally, tuning n-steps might help. My implementation may also have an issue with reusing replay buffer memories from smaller world sizes, as I am changing the world size dynamically and currently not clearing the buffer.

[5/3/2021 2:03:33 AM] Converged (9, 9) in 00:51:38.4201434 - Steps 294897 Episode 37025  Average: 0.6425936848799915 
",46725,,46725,,10/30/2022 5:05,10/30/2022 5:05,,,,1,,,,CC BY-SA 4.0 27623,1,27624,,5/3/2021 6:59,,0,82,"

I was reading a paper that did routing based on an MDP, and I was wondering: in routing there is a sender node and a receiver node, so if the receiver node changes (sending a message to someone else), would we have to train the MDP algorithm all over again?

This also got me thinking about what would happen if even one node along the transmission path changes. Does using an MDP for training the agent mean that the obstacles and goals should never change?

",37797,,2444,,5/3/2021 14:00,5/3/2021 14:00,What would happen to an agent trained using Markov Decision Process if the goal node changes?,,1,3,,,,CC BY-SA 4.0 27624,2,,27623,5/3/2021 8:53,,2,,"

It is possible, at design time for a reinforcement learning problem, to allow for changes within an environment. You can make any element a variable property of the state, provided the agent can realistically be told its value at the start or sense it from the environment.
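
As a small illustrative sketch (the names here are made up, not from any particular paper), "making the goal part of the state" can be as simple as concatenating a goal encoding onto the observation, so one policy can be trained across many different goals:

import numpy as np

def build_observation(node_features, goal_features):
    # Observation seen by the agent = its own state plus an encoding of the
    # current goal/receiver, so the policy conditions on the goal.
    return np.concatenate([node_features, goal_features]).astype(np.float32)

obs = build_observation(np.array([0.2, 0.7]), np.array([0.9, 0.1]))
print(obs)   # [0.2 0.7 0.9 0.1]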

If you do add a new variable to model the possibility of change:

  • It allows the agent to learn to solve a more general problem where the chosen property can vary.

  • It increases the size of the state space.

  • It requires training to include variations of the new variable.

Usually this also increases the time taken to train.

It is not always possible to use a state variable for the task - perhaps the goal state is effectively hidden from the agent and the purpose of training is for it to be discovered. In that case, you will require at least some re-training. It may be faster to start from the existing trained agent if the difference is not large.

If you cannot simply extend the state representation, and the environment changes in a small enough way, then it may also be possible to use an agent that continuously explores and re-trains itself over time in response to changes in the environment. The Dyna-Q+ algorithm is an example of a method designed to explore and detect changes in the agent's environment, allowing this kind of online retraining when things change.

",1847,,,,,5/3/2021 8:53,,,,0,,,,CC BY-SA 4.0 27626,1,,,5/3/2021 16:17,,0,56,"

Recognition of optical patterns (as pixel maps) by neural networks is standard. But optical patterns may be only slightly distorted or noisy, and may not be arbitrarily scrambled – e.g. by permutations of the rows and columns of the pixel map – without losing the possibility of recognizing them. This, in turn, is the normal case for abstract graphs in their standard representation as adjacency matrices: only under some permutations of the nodes is a possible pattern visible. In general, for almost all random graphs no permutation makes a pattern visible, while for all graphs almost all permutations leave any pattern invisible.

How can this be handled in the context of either unsupervised or supervised learning? Assume you have a huge set of graphs with 100 nodes and 1,000 edges, given as 100$\times$100 adjacency matrices under arbitrary permutations, but with only two isomorphism classes. How could a neural network find this out and learn from the samples? Is this possibly common knowledge: that it can not? Or are there any tricks?

(One trick might be to draw the graph force-directed and hope that it settles in a recognizable configuration. But this to be detectable would require a much larger pixel map than 100$\times$100. But why not?)

",25362,,2444,,5/6/2021 17:46,10/2/2022 23:05,How can abstract graphs be recognized by neural nets?,,1,0,,,,CC BY-SA 4.0 27627,2,,27612,5/3/2021 16:20,,0,,"

Outside of using a generated dataset to study machine learning, the typical purpose of a trained machine learning model is to process new inputs from some source.

For a model to be effective, the training data set inputs and new inputs should be taken from the same distribution. The loss function used in training, combined with cross-validation to measure and maintain generalisation, will have ensured that the most accurate results occur for a population of inputs that is similar to the training data.

The further inputs stray from the training data, in any aspect, the more likely the outputs of a trained model are to be inaccurate. This can also work through over-generalising: if you set much wider ranges for some variations - e.g. far more dilation and erosion than would be seen in practice - then the neural network weights will be tuned to allow for this data and may score worse on your target data, even though they appear to score well overall on the training data. That is because the loss and accuracy measurements from the more realistic examples are diluted by measurements from training data that has no relevance to the real-world problem you are trying to solve. Maybe the result will be OK, maybe worse, maybe even better - but your loss and accuracy measurements during training will not tell you.

So there is significant danger in relying on a generated-only dataset for training. If any aspect of the simulated inputs does not match how the system will be used in practice, the impact is likely to be felt in terms of reduced accuracy.

For your digits example, you should consider where the "real" digits will come from later, and try to ensure that your data generation takes into account any complications, variations, imperfections that will occur when collecting the data. For instance, if the real digits are scanned from paper, then take a look at some typical scanned images, and check how close your generated data is to them.

If you can obtain a limited number of "real" values - perhaps not enough for training, but enough to get some accuracy statistics from - then consider using them for the test and cross-validation phases. Remember that using them for test should be done sparingly: not to select between models with similar results, but only to establish a rough estimate of accuracy at the end of training. Using some of them for cross-validation may help select a model that generalises best between the generated dataset and reality, but it precludes using those same examples for test.

",1847,,1847,,5/3/2021 16:31,5/3/2021 16:31,,,,0,,,,CC BY-SA 4.0 27628,1,27633,,5/3/2021 19:09,,2,64,"

Context

I'm trying to create a net that will be able to recognize printed-like digits - something like MNIST, but only for a standard printed font.

The images are of size 40x40 and I'd like to feed them into a feedforward net, since a ConvNet seems too powerful for this task.

Question

How should I use the Flatten layer in this task?

Code

My current net:

X, test_X, y, test_y = train_test_split(X, y, test_size=0.25, random_state=42)

self.model = Sequential()
self.model.add(Flatten(input_shape=X.shape[1:]))
self.model.add(Dense(64, activation='relu'))
self.model.add(Dense(no_classes, activation='softmax'))
self.model.compile(loss="categorical_crossentropy",
                   optimizer="rmsprop",
                   metrics=['accuracy'])

self.history = self.model.fit(X, y, batch_size=256, epochs=20, validation_data=(test_X, test_y))
print(self.model.summary())

Example images

Current results

",46718,,,,,5/3/2021 20:54,How to properly use Flatten layer?,,1,2,,,,CC BY-SA 4.0 27630,2,,13657,5/3/2021 20:07,,0,,"

If data collection is expensive, it is better to first try to improve your model.

You say your accuracy is bad, but have you tried using better performance metrics? A confusion matrix could help. Another potential problem is that your data may be imbalanced. What if your model is performing badly because, for example, there aren't enough samples from class 2? Then you know that to improve your model you need more class 2 samples, which can be acquired by collecting more data or by other class-imbalance methods, e.g. SMOTE.

You can also check whether the performance is bad only on the test set (overfitting), or on the training set too (underfitting). 4000 samples should be enough for a neural network - not a very large deep learning model, but maybe a smaller one. Testing other machine learning models on your data is definitely a good idea: not only can you provide your model with a benchmark, but you may also find a better model.

If you think your data is good enough and your model is adequate, you can try hyperparameter optimization. This can be quite useful for getting the most out of neural networks, but it is trial and error, and it can take some time to find the best hyperparameters.

",32265,,,,,5/3/2021 20:07,,,,0,,,,CC BY-SA 4.0 27631,2,,16509,5/3/2021 20:16,,0,,"

This really depends on your data. MSE and its variant, the RMSE, are good for regression problems - in other words, when you want to produce a real number as an output, for example in a time-series forecasting situation. The MAPE is good and can be interpreted as a percentage, but it does not work well if you have zeros in your dataset. As the other answer mentions, BLEU is good too, but only if your model is doing machine translation, or working with some other type of categorical data; however, you would not use BLEU if you are trying to predict house prices, for example.

To find the best evaluation metric, you have to consider your data and the problem you are trying to solve. It's good to go over a list of potential loss functions and metrics and see what fits your problem best.

",32265,,,,,5/3/2021 20:16,,,,0,,,,CC BY-SA 4.0 27632,2,,27614,5/3/2021 20:24,,0,,"

It depends on what your outputs are. For example, if both outputs are similar, then you can use one output branch. However, what if the two outputs are different? With two output branches you can use two different loss functions, and your model will optimize the two branches separately.

Imagine a model that has to output both a class label for the input and a real value describing something in the input. One is a classification task and the other is a regression task; the two require different loss functions, so you would use two branches.
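
For example, a minimal Keras sketch of such a two-head model (the sizes, names, and losses here are only placeholders) could look like this:

from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(32,))
shared = layers.Dense(64, activation="relu")(inputs)

class_out = layers.Dense(10, activation="softmax", name="label")(shared)   # classification head
value_out = layers.Dense(1, name="value")(shared)                          # regression head

model = Model(inputs, [class_out, value_out])
model.compile(
    optimizer="adam",
    loss={"label": "sparse_categorical_crossentropy", "value": "mse"},
    loss_weights={"label": 1.0, "value": 0.5},   # optionally weight the two losses
)
model.summary()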

",32265,,,,,5/3/2021 20:24,,,,0,,,,CC BY-SA 4.0 27633,2,,27628,5/3/2021 20:44,,3,,"

The Flatten layer is used for collapsing an N-dimensional tensor into a 1D tensor. In your case, the inputs are $40\times40$ images, so Flatten will convert each one into a tensor with shape $1\times1600$. Note that no information is lost. Flatten layers are usually used where you have a convolutional layer with dimensions $N\times M \times C$ (where $N$, $M$ are the feature map sizes and $C$ is the number of channels) and want to fully connect it to a Dense layer or another layer that only accepts 1D inputs. Flatten can also be used when the network is meant to output a feature vector from a final convolutional layer, for image classification purposes using a different technique.
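
A minimal sketch for your case (the layer sizes and the number of classes are placeholders):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Flatten(input_shape=(40, 40)),        # (40, 40) -> (1600,)
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()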

",7760,,7760,,5/3/2021 20:54,5/3/2021 20:54,,,,1,,,,CC BY-SA 4.0 27634,1,,,5/4/2021 2:19,,0,104,"

I'm getting started with DRL and have trouble distinguishing TD(0), MC, and GAE, and in which scenarios one is better than the others. Here is what I understand so far:

  • TD(0): incremental learning; it can learn after each step instead of waiting for the episode to end. The update is based on a single reward, so it has low variance but high bias.

  • MC: learns after each episode; the calculation of the return is exact. However, its drawback is high variance, and you have to make decisions for the whole episode without any parameter updates.

  • GAE: combines returns over all steps to get a better trade-off between variance and bias (see the sketch after this list). However, it still has to wait until the end of the episode for an update.
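
For reference, here is a minimal sketch of how I understand GAE combines the per-step TD errors (it assumes a single trajectory and ignores terminal-state handling; gamma and lambda values are just examples):

import numpy as np

def compute_gae(rewards, values, gamma=0.99, lam=0.95):
    # values must contain one extra entry: the value of the state after the last step
    advantages = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # the TD(0) error
        gae = delta + gamma * lam * gae                          # exponentially weighted sum of deltas
        advantages[t] = gae
    return advantages

print(compute_gae(np.array([1.0, 0.0, 1.0]), np.array([0.5, 0.4, 0.6, 0.0])))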

I have some questions as follows:

  1. Are variance and bias here about the return of each episode? What are their effects on the outcomes (convergence speed of the training process, performance of the model)?

  2. Is incremental learning important? The ability to correct behaviors after each step may improve convergence speed. However, it can lead to unstable learning (if I understand correctly, this is why the target model in Double DQN only updates its parameters every k mini-batches). In which scenarios should I use TD(0) or GAE?

  3. Concretely, in my case I run a batch of 12 environments in parallel, each with 1000 steps. If I use GAE, I make 12,000 decisions for each update. All losses of the model are summed up to calculate the gradients, after which I clip the gradients to 2.0. Is that too expensive a way to learn the correct direction? Should I consider using TD(0) here?

",46753,,,,,5/4/2021 2:19,Comparison between TD(0) and MC ( or GAE )?,,0,3,,,,CC BY-SA 4.0 27641,1,,,5/4/2021 16:03,,5,69,"

Convolutional neural networks (CNNs) contain convolutional layers. In modern deep learning libraries such as Tensorflow and PyTorch among others, convolutional layers are implemented by using the cross-correlation operator instead of the convolution operator. The difference is that in convolution, the kernel is flipped before applying it on the input.

For example in the book "Deep Learning", it is explained as follows.

Many machine learning libraries implement cross-correlation but call it convolution. --- In the context of machine learning, the learning algorithm will learn the appropriate values of the kernel in the appropriate place, so an algorithm based on convolution with kernel flipping will learn a kernel that is flipped relative to the kernel learned by an algorithm without the flipping. It is also rare for convolution to be used alone in machine learning; instead convolution is used simultaneously with other functions, and the combination of these functions does not commute regardless of whether the convolution operation flips its kernel or not.

This makes perfect sense, and convincingly argues why implementing the flipping of the kernel would be unnecessary.

But how come CNNs are not commonly called "cross-correlational neural networks" instead of "convolutional neural networks"? To the best of my knowledge, the first concrete implementations of CNNs predate any of the above mentioned libraries. Did these early implementations of CNNs indeed use the convolution operator, leading to the name? Or is there another reason?

",45529,,2444,,5/6/2021 10:46,5/6/2021 10:46,Origins of the name of convolutional neural networks,,0,1,,,,CC BY-SA 4.0 27642,1,27659,,5/4/2021 19:06,,0,113,"

I have $N$ teachers, each of which has an input feature vector ($25$-dimensional) consisting of positive numerical values for different quality aspects (for example: lecturing ability, knowledge capacity, communication skills, etc.). I want to design an ANN that outputs a single quality index based on these quality features.

What type of ANN architecture is appropriate for this problem?

",46703,,1641,,5/5/2021 17:35,5/5/2021 18:37,What type of ANN architecture to choose?,,1,2,,,,CC BY-SA 4.0 27649,1,,,5/5/2021 1:10,,1,33,"

So basically I'm training a sequence to sequence model that translates English sentences to Arabic sentences. I'm using the data provided by Anki @ manythings. I realized that some of the sentences in English (source) have multiple sentences in Arabic (target), for example:

This is one case where the Arabic harakat are not shown, but the idea is that the same word has different translations (yes, in Arabic the first, fourth and fifth are not the same translation).

A better example is the following one:

I'm not sure how to deal with these cases: should I reduce the data and keep one translation, or should I keep, for each source key, a list of target values? Any advice or "tips & tricks" on preparing the data before training translation models?

",33531,,40434,,5/5/2021 14:09,5/5/2021 14:09,Training seq2seq translation model with one source and multiple target,,0,0,,,,CC BY-SA 4.0 27650,1,,,5/5/2021 1:10,,1,53,"

I am tasked with making a machine learning model that predicts personality traits and behaviours of children based on simple and interactive quizzes. Currently I am lost and have no idea where to start!

I am looking for guidance on where I can start my research and the actual coding, and on whether NLP is a good place to start from.

",3894,,40434,,5/5/2021 9:33,5/5/2021 9:33,Machine Learning in relation to personality and behaviors predictions,,1,1,,,,CC BY-SA 4.0 27652,2,,27650,5/5/2021 8:19,,2,,"

I would start by reviewing any available NLP tools that can help you. I know two that provide APIs for developing this kind of solution: Watson Personality Insights and Symanto.

The first one (Watson Personality Insights) was available in the past, but has unfortunately been discontinued.

The IBM Watson Personality Insights service enables applications to derive insights from social media, enterprise data, or other digital communications. The service uses linguistic analytics to infer individuals' intrinsic personality characteristics, including Big Five, Needs, and Values, from digital communications such as email, text messages, tweets, and forum posts.

The service can automatically infer, from potentially noisy social media, portraits of individuals that reflect their personality characteristics. The service can infer consumption preferences based on the results of its analysis and, for JSON content that is timestamped, can report temporal behavior.

Another commercial tool is Symanto:

Symanto Insights Platform is an AI-powered analysis tool that gives you qualitative insights about your textual data. Just upload your data sets or crawl information from different sources such as online review sites or online surveys, and Symanto’s AI immediately analyses your data and provides you qualitative insights about the author. Symanto Insights Platform reveals the opinions of your customers and employees by analysing text data, giving you insights about current topics, general performances or the psychographic structure of your audience. Whether it’s psychographic marketing, customer service, product development or employee satisfaction, Symanto Insights Platform helps you answer a wide range of business questions.

I'm working in this field for my Master's thesis, so if you need any further info just let me know.

",46540,,,,,5/5/2021 8:19,,,,0,,,,CC BY-SA 4.0 27655,1,,,5/5/2021 9:39,,1,358,"

In some learning approaches, we don't directly train models on labelled datasets; rather, we create 2 competing models and let them fight/compete against each other. As millions of epochs pass, the models fight each other, and each time every model improves itself (further optimises its weights) in order to win. After many epochs of the models smashing each other, they eventually become really strong models that can blow any human out of the water. This approach seems to be used often with machine learning models tasked with playing multiplayer games: instead of letting them play against slow humans, they fight each other for many, many epochs to become far stronger than any human can naturally be.

What is the name of this kind of machine learning approach?

",2361,,,,,5/5/2021 9:39,What is the name of algorithms that train by competing each other?,,0,7,,,,CC BY-SA 4.0 27656,1,,,5/5/2021 10:46,,0,98,"

I have a linear tabular dataset made of floats. The dataset follows a simple rule like:

if features A and B are in a certain range then target class is 1, otherwise target class is 0.

Since I want to get some interpretability from my ANN model, I opted for using the integrated gradients method implemented by alibi.

Unfortunately, most individual samples don't show A and B as the leading features, as expected. Even weirder is the fact that, when I average the attributions over all the individual samples, A and B get the highest scores. In other words, local explanations fail, but, on average, the global explanation is correct.

Can anyone help me understand why this happens? Isn't the integrated gradients method suitable for tabular datasets?

By the way, my baseline is based on a uniform distribution of random floats ranging from 0 to the maximum of each column.

",46784,,46817,,5/10/2021 0:02,10/5/2022 16:06,Why don't integrated gradients explain samples correctly?,,1,0,,,,CC BY-SA 4.0 27657,1,,,5/5/2021 13:53,,1,46,"

I want to train a model over variable-length sequential data (e.g. the temperature at different times of day) where the output depends on what the temperature is at a time T.

Ideally, I want to represent the input using a variable-length compacted format of [temperature, duration]. Alternatively, I can divide a matrix into time slices where each cell contains the current temperature.

I prefer the compacted format as it is more space-efficient and allows me to represent arbitrary-length durations, but I am afraid that a Transformer architecture won't be able to figure out what the temperature is at a time T using the compact format.

Is it safe to compact sequential inputs?

",46787,,18758,,12/17/2021 0:13,12/17/2021 0:13,Representing variable-length sequences,,0,0,,,,CC BY-SA 4.0 27659,2,,27642,5/5/2021 18:37,,1,,"

It sounds like you have structured/tabular data. So, a fully-connected feedforward network should do the job.
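
For example, a minimal sketch of such a network (the layer sizes, the loss, and the assumption that you have target quality indices to train against are all placeholders):

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Dense(32, activation="relu", input_shape=(25,)),   # 25 quality features per teacher
    layers.Dense(16, activation="relu"),
    layers.Dense(1),                                          # single quality index
])
model.compile(optimizer="adam", loss="mse")
model.summary()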

",32621,,,,,5/5/2021 18:37,,,,2,,,,CC BY-SA 4.0 27662,1,27700,,5/6/2021 4:45,,3,73,"

Artificial Intelligence (AI) is often defined as a machine that is intelligent, or one that can think rationally.

From a high-level perspective, things like a self-driving car or AlphaGo can easily be classified as AI systems, while something like a washing machine that follows a strict sequential program is not considered AI.

However, what confuses me is that, when looking at the definition from a low-level perspective, there does not seem to be a clear distinction between AI and non-AI.

For example, consider an Artificial Neural Network from Deep Learning. Fundamentally, it is just a complex non-linear function. Why is this considered AI while a washing machine is not?

Is it because of the learning involved? But then path-finding would not be considered AI either.

Is it because of the calculations? But then traditional calculators would be considered AI.

Is there even a clear distinction between AI and a sequential program? Or is it just a vague term that is only valid when viewed from a high-level perspective?

",35852,,,,,5/9/2021 11:43,Is there a clear distinction between Artificial Intelligence and running a sequential program?,,1,3,,,,CC BY-SA 4.0 27667,1,27674,,5/6/2021 10:20,,1,143,"

In the book Deep Learning with Python, François Chollet writes (section 1.2.6, page 18)

In practice, there are fast-diminishing returns to successive applications of shallow-learning methods, because the optimal first representation layer in a three-layer model isn't the optimal first layer in a one-layer or two-layer model. What is transformative about deep learning is that it allows a model to learn all layers of representation jointly, at the same time, rather than in succession (greedily, as it's called).

By shallow learning, we mean traditional machine learning models that aren't deep learning, such as support vector machines.

I understood the above as follows.

Using a model with three layers of shallow-learning methods has the same output (predicted) value as using a one-layer shallow-learning method. The effect of using multiple layers of shallow-learning methods is to 'increase running time or repetition'.

Did I understand properly?

",46808,,2444,,5/11/2021 10:34,5/11/2021 10:34,What is the difference between applying shallow-learning methods repeatedly and deep learning?,,2,0,,,,CC BY-SA 4.0 27668,1,,,5/6/2021 11:12,,1,113,"

I've been working on Neural Collaborative Filtering (NCF) recently to build a recommender system using Tensorflow Recommenders. Doing some hyperparameter tuning with different optimizers available in the module tf.keras.optimizers, I found out that Adam and its other variants, such as Adamax and Nadam, work much slower than seemingly less advanced optimizers, like Adagrad, Adadelta, and SGD. With Adam and its variants, training each epoch takes about 30x longer.

It came as a surprise to me, knowing that one of the most cherished properties of the Adam optimizer is its convergence speed, especially compared to SGD. What could be the reason for such a significant difference in computation speed?

",45211,,,,,5/6/2021 11:12,"Why does Adam optimizer work slower than Adagrad, Adadelta, and SGD for Neural Collaborative Filtering (NCF)?",,0,2,,,,CC BY-SA 4.0 27670,2,,22450,5/6/2021 11:22,,1,,"

A heatmap, in the sense of CornerNet, is the heatmap of the pooled corner values. As discussed in the paper, there is a corner pooling operation that gives you the vector values for a pixel point being a corner (it may or may not be one). The output from the corner pooling is CxHxW. Then, to generate a heat map, you train the network similarly to the Grad-CAM method. Training for heatmap generation means that we tell the network how much weight a given pixel point has to be a corner point, i.e. if it has a high weight, it will produce stronger values in that pixel area.

The heatmap is used for the prediction of corner points. That means that, if you know which pixel has a higher weight, then that pixel has a higher probability of holding the top-left or bottom-right corner of the given bounding box.

",46811,,,,,5/6/2021 11:22,,,,0,,,,CC BY-SA 4.0 27673,1,,,5/6/2021 13:15,,0,413,"

I'm creating a neural network with 3 layers and no bias.

On the internet, I saw that the expression for the update of the weights between the hidden layer and the output layer was:

$$\Delta W_{j,k} = (o_k - t_k) \cdot f'\left[\sum_j (W_{j,k} \ \cdot o_j)\right] \cdot o_j,$$

where $t$ is the target output, $o$ is the activated output layer and $f'$ the derivative of the activation function.

But the shape of these weights is $\text{output nodes}\times\text{hidden nodes}$, and $\text{hidden nodes}$ can be bigger than $\text{output nodes}$, so the formula seems wrong, because I'm taking $o_k$ while $o$ has length $\text{output nodes}$.

  1. In simple terms, what is the right formula for updating these weights?

  2. Also, what is the right formula for updating the weights between the input layer and the hidden layer?

",46816,,2444,,5/6/2021 17:52,1/27/2023 3:07,What is the correct formula for updating the weights in a 1-single hidden layer neural network?,,1,0,,,,CC BY-SA 4.0 27674,2,,27667,5/6/2021 13:26,,2,,"

Quite surprising to find someone reading this same book. I read this part a week ago, and the explanation is quite clear in the book:

  • If you use successive shallow-learning methods, you first train one model, then you train another model on the outputs of your first model, and then a third on the outputs of your 2nd model. The problem with that is that each model is trained to get good results at its own task, not to pass useful information forward, so there can be an improvement when adding successive models, but it is a very weak one.
  • If you use deep learning, all the layers are trained at the same time, so each layer learns how to efficiently transfer important information to the next layer of the model. This is why it is much more efficient.

Hope I made it clearer

",46681,,,,,5/6/2021 13:26,,,,0,,,,CC BY-SA 4.0 27675,1,,,5/6/2021 13:45,,0,151,"

Is binary classification using CNN possible if the training data only consists of one class?

I am working on landslide risk assessment using Convolutional Neural Networks and I want to train a network that can recognize high-risk areas using multi-spectral imagery. The bands will contain numeric and categorical data that I have found to be related to my field of work.

The problem is that I only have historical data indicating where a landslide has happened before. Defining zones as low-risk is not reliable in this field (since we are not yet sure how these variables affect the risk or susceptibility, and I don't want to bias my categorization), so my training data will be made up of only one class.

Can this be done? Is training a network from scratch using only one class of training data possible?

If so, after building this network, can I use it to classify any zone and get any meaningful data from its output for risk assessment (for example, output value "1" being "similar to past landslides" and "0" being "not similar at all")?

",46818,,2444,,5/6/2021 16:56,7/1/2022 2:00,Is binary classification using CNN possible if the training data only consists of one class?,,3,1,,,,CC BY-SA 4.0 27676,2,,27675,5/6/2021 14:42,,0,,"

It probably won't work, because, during training, the network will set its weights to always answer class 1, your data will say it is right, and it will continue like that forever.

",46816,,,,,5/6/2021 14:42,,,,0,,,,CC BY-SA 4.0 27677,1,,,5/6/2021 14:45,,1,64,"

I am trying to learn about Federated Learning (FL), but I have a question.

What is dynamic data sampling in FL?

Cai, Lingshuang, et al. "Dynamic Sample Selection for Federated Learning with Heterogeneous Data in Fog Computing." ICC 2020-2020 IEEE International Conference on Communications (ICC). IEEE, 2020.

",45605,,2444,,5/7/2021 18:58,5/7/2021 18:58,What is dynamic data sampling in federated learning?,,0,3,,,,CC BY-SA 4.0 27678,1,,,5/6/2021 15:35,,0,34,"

I am trying to train a classification RNN model on a sequence of tabular medical data, but I am stuck on the normalization problem. I realized that I cannot simply use MinMaxScaler, because of 3 problems:

  1. Outliers, but I could fight them or use RobustScaler instead.
  2. I am not sure that some features in my dataset cover all possible ranges. For example, I have max(feature_A) == 10 now, but after a data update it could become 20, and if I preprocess the data the same way, I will get bad prediction results.
  3. Some features do not have an upper limit at all and will only increase with time, like how many years a patient has been treated. I could assume that this value is not greater than 100 years, for example, but if my mean value is 10 years, that will squeeze the feature values a lot.

My dataset is pretty large (millions of observations), so there is a pretty good chance that it is representative. But I am concerned about the small time range: all those observations cover only 2 years, so some feature values (like how many years a patient has been treated) could still grow beyond their current bounds.

How should I handle this?

My concerns example:

import pandas as pd
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()

#### like, initial state
df1 = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [10, 40, 60, 80, 100]})
""" output:
   A    B
0  1   10
1  2   40
2  3   60
3  4   80
4  5  100
"""

scaler.fit_transform(df1)
""" output:
array([[0.        , 0.        ],
       [0.25      , 0.33333333],
       [0.5       , 0.55555556],
       [0.75      , 0.77777778],
       [1.        , 1.        ]])
"""

#### new data arrived, while preprocessing is the same
df2 = pd.DataFrame({'A': [1, 2, 3, 4, 5, 10, 10], 'B': [10, 40, 60, 80, 100, 120, 140]})
""" output:
    A    B
0   1   10
1   2   40
2   3   60
3   4   80
4   5  100
5  10  120
6  10  140
"""

# now 5 in "A" scaled to 0.4 instead of 1, same in "B"
scaler.fit_transform(df2)
""" output:
array([[0.        , 0.        ],
       [0.11111111, 0.23076923],
       [0.22222222, 0.38461538],
       [0.33333333, 0.53846154],
       [0.44444444, 0.69230769],
       [1.        , 0.84615385],
       [1.        , 1.        ]])
"""

PS: I've posted this question in other communities as well (the question on AI got the most views):

",46666,,46666,,5/8/2021 12:38,2/2/2022 15:04,Normalization of possibly not fully representative data,,1,0,,,,CC BY-SA 4.0 27679,1,,,5/6/2021 15:40,,2,84,"

I recently heard of GPT-3 and I don't understand how the attention models and transformer encoders and decoders work. I heard that GPT-3 can make a website from a description and write perfectly factual essays. How can it understand our world using algorithms and then recreate human-like content? How can it learn to understand a description and program in HTML?

",44611,,2444,,5/8/2021 1:08,6/2/2022 4:02,How do transformers understand data and answer custom questions?,,1,1,,,,CC BY-SA 4.0 27681,1,,,5/6/2021 17:23,,2,110,"

If $h_1(s)$ is a consistent heuristic and $h_2(s)$ is an admissible heuristic, is $\min(h_1(s),\ h_2(s))$ consistent?

",46832,,44413,,5/9/2021 11:46,5/9/2021 13:54,"Is $\min(h_1(s),\ h_2(s))$ consistent?",,1,0,,,,CC BY-SA 4.0 27682,2,,27675,5/6/2021 19:40,,0,,"

The first answer is correct in that you can't use discriminative learning for binary classification here, since you only have one class. There are a few things you can try, however. If you can convert your images to feature vectors, kernel density estimation can be used to assign a probability density over the space of images, and then, for any new image, you can get a probability of it being similar to the training data. Generalising, you could use outlier detection methods, such as isolation forest, to determine if new images are "inliers" (i.e. landslide images) or "outliers".
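As a rough sketch of that second idea (X_landslide here is a made-up placeholder for whatever feature vectors you extract from the landslide imagery):

from sklearn.ensemble import IsolationForest
import numpy as np

# Fit an isolation forest on the single "landslide" class, then score new samples
# by how inlier-like they are.
rng = np.random.default_rng(0)
X_landslide = rng.normal(size=(500, 32))     # placeholder (n_samples, n_features) features

clf = IsolationForest(random_state=0).fit(X_landslide)

X_new = rng.normal(size=(10, 32))
labels = clf.predict(X_new)                  # +1 = similar to training data, -1 = outlier
scores = clf.score_samples(X_new)            # higher = more "normal"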

",7760,,,,,5/6/2021 19:40,,,,0,,,,CC BY-SA 4.0 27683,1,,,5/6/2021 20:07,,1,36,"

I wonder why weights are initialized with zero mean. It is one of the reasons why deep architectures cannot be trained without skip connections. Without the skip connections, the zero initialization becomes problematic, because the identity function cannot be learned in earlier layers (this is a simplified explanation, I know). But why can we not initialize weights around one? This would enhance the intrinsic learning of the identity function. Of course, the skip connections also allow a better backpropagation of the gradients, but couldn't this be helpful anyway? Can anyone tell me why this is not done?

",46837,,,,,5/6/2021 20:40,Why are weights not initialized with mean=1?,,1,0,,,,CC BY-SA 4.0 27684,2,,27683,5/6/2021 20:40,,2,,"

Interesting question,

I can come up with 2 explanations for why we don't initialize weights with a mean value of 1:

  1. It may be easier for the network to learn the identity function, but we may have a similar issue with not being able to learn comparisons. Comparison is quite an important kind of reasoning in my opinion; this is why having negative weight values is important, and initializing all weights around 1 may make it difficult for the network to obtain negative weights.
  2. Maybe it causes an issue with the non-linearity of the activation functions. If we use the ReLU function, for example, it is only non-linear if there are negative values in the network. It is not uncommon to have positive inputs for a network, so initializing all weights around 1 may lead to having only positive values in the network, which makes the ReLU function act linearly (see the small sketch below).
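Here is a tiny numeric illustration of that second point (just a toy sketch, not a proof):

import numpy as np

# With positive inputs and weights centred around 1, every pre-activation is positive,
# so ReLU never clips anything and the layer behaves like a purely linear map.
rng = np.random.default_rng(0)
x = rng.random(5)                                   # positive inputs
W = 1.0 + 0.01 * rng.standard_normal((4, 5))        # weights initialized around 1

pre_activation = W @ x
relu_out = np.maximum(pre_activation, 0.0)
print(np.allclose(relu_out, pre_activation))        # True: ReLU did nothing non-linear here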

Anyway, weight initialization has always been a complicated topic, and it seems there is no definitive answer to what should be done about initializing weights. I am not an expert on the topic, just giving you my thoughts about it.

",46681,,,,,5/6/2021 20:40,,,,0,,,,CC BY-SA 4.0 27685,2,,27673,5/6/2021 21:24,,0,,"
  1. The correct formula for updating the weights between the hidden layer and the output layer is: $$\Delta W_{j,k} = h_k \ \cdot \ o'_{j} \ \cdot \ (o_j - t_j),$$ where $h$ is the activated hidden layer and $o'$ is the derivative of the output layer.
    I found this formula in the book Artificial Intelligence: A Guide to Intelligent Systems by Michael Negnevitsky.
",46816,,,,,5/6/2021 21:24,,,,0,,,,CC BY-SA 4.0 27686,1,,,5/6/2021 22:01,,0,648,"

I'm making a simple deep Q learning algorithm, with cartpole-v1 env.

As you can see in the chart, after many episodes the reward decreases. What are some possible reasons?

The exploration vs exploitation strategy is epsilon-decay. I used a target network (used at every mini-batch gradient descent update, to calculate the current Q values and the next Q values; is that right?)

The neural network is made from scratch; the complete code is here: https://github.com/LorenzoTinfena/deep-q-learning-itt-final-project

# %%
from core.dqn_agent import DQNAgent
from cartpole.cartpole_neural_network import CartPoleNeuralNetwork
from cartpole.cartpole_wrapper import CartPoleWrapper
import gym
import numpy as np
import torch
from tqdm import tqdm
import glob
import os
from IPython.display import Video
import matplotlib
import matplotlib.pyplot as plt
from itertools import cycle
import sys
import shutil
from pathlib import Path
import pyvirtualdisplay
_display = pyvirtualdisplay.Display(visible=False, size=(1400, 900))
_ = _display.start()



# %% [markdown]
# Initialize deep Q-learning agent, neural network, and parameters
# %%
np.random.seed(20)
agent = DQNAgent(env=CartPoleWrapper(gym.make("CartPole-v1")),
                nn=CartPoleNeuralNetwork(), replay_memory_max_size=5000, batch_size=30)

DISCOUNT_FACTOR = 0.995
LEARNING_RATE = 0.0001

n_episodes = []
total_rewards = []
number_steps = []
total_episodes = 0


# %% [markdown]
# Training
# %%
while total_episodes <= 10000:
    total_reward, steps = agent.start_episode_and_evaluate(DISCOUNT_FACTOR, LEARNING_RATE, epsilon=0, min_epsilon=0, render=False, optimize=False)
    print(f'\ntotal_episodes_training: {total_episodes}\tsteps: {steps}\ttotal_reward: {total_reward}', flush = True)
    n_episodes.append(total_episodes)
    total_rewards.append(total_reward)
    number_steps.append(steps)

    for i in tqdm(range(50), 'learning...'):
        agent.start_episode_and_evaluate(DISCOUNT_FACTOR, LEARNING_RATE, epsilon=1, epsilon_decay=0.99, min_epsilon=0.01, render=False, optimize=True)
    total_episodes += i+1

",46743,,,,,5/6/2021 22:01,"Reward firstly increase, but after more episodes, start decrease, and weights diverges",,0,4,,,,CC BY-SA 4.0 27689,1,,,5/7/2021 6:22,,1,39,"

There are a few ways to regularise a neural network, for example, dropout or L1 regularisation. Both of these methods, and possibly most other regularisation methods, tend to remove parts of, or simplify, the neural network: dropout deactivates nodes, L1 shrinks the weights of the model, and so on.

The main argument in favour of regularising a neural network is that by simplifying the model you are forcing it to learn more general functions and thus making the neural network more robust to overfitting or noisy input.

Once you have a model trained with, and without, regularisation, it is possible to compare their performance by calculating the error metrics on their outputs. This will prove whether the regularised model performs better than the standard model or not.

However, assuming that the regularised model achieved better performance on its error metrics, how can I prove that the weights of the regularised model have less variance (i.e. are simpler) than those of the standard neural network?

",32265,,32265,,5/7/2021 6:35,5/7/2021 6:35,How to prove that a regularisation method simplified a neural network?,,0,5,,,,CC BY-SA 4.0 27691,2,,27667,5/7/2021 9:24,,0,,"

No, you did not correctly understand the meaning of the passage.

Using a model with three-layer shallow-learning methods has the same output (predicted) value as using one-layer shallow learning method, The effect of using multiple layers of shallow learning methods is to 'increase running time or repetition'.

A three-layer shallow learning method (i.e. three one-layer methods stacked one after the other in a sequential way) does not have the same predicted output value as a one-layer shallow learning method. The output will be different (and a 3-layer model should show some improvement), and it will clearly need more time to get the result.

Deep learning methods, as mentioned by @Ubikuity, are (generally speaking) more efficient at finding the important features and (usually) get better results than a model built from three layers of shallow-learning methods.

",46540,,,,,5/7/2021 9:24,,,,0,,,,CC BY-SA 4.0 27692,1,,,5/7/2021 11:10,,1,19,"

Idea

Let's say we have a simple picture dataset containing 40x40 images of digits. We have only one image of each digit. We want to use that as a training set, but we need more data, so we use data augmentation. We use only simple operations, like translating and rotating, and generate 1000 more images of each digit.

Question

A natural way to do data augmentation would be to randomly generate parameters like translate_x, translate_y and rotate and apply them to our base image.

Does the distribution of these parameters matter? On one hand, we would like to have a net that can recognize a digit placed at the side of the image and rotated, as well as a digit placed in the center and not rotated at all, but maybe we don't need such accuracy on those borderline images. Maybe we know that our prediction data will be close to the centered ones, so we want high accuracy in those cases, and the more a digit is translated and rotated, the lower our net's accuracy may be.

What I mean is: can we augment data with parameters drawn from e.g. a Gaussian distribution, to make our net more sensitive to cases closest to the ideal one and less sensitive to these borderline cases? The advantage of that would be less training data that we don't need, and more control over the characteristics of our neural net.
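For concreteness, here is a minimal sketch of that idea (the image and the Gaussian scales are just placeholders I made up):

import numpy as np
from scipy.ndimage import rotate, shift

# Sample augmentation parameters from a Gaussian so that most generated images stay
# close to the centred/upright original, and extreme shifts/rotations become rare.
rng = np.random.default_rng(0)
image = rng.random((40, 40))                          # placeholder 40x40 digit

augmented = []
for _ in range(1000):
    tx, ty = rng.normal(loc=0.0, scale=3.0, size=2)   # pixels, mostly near the centre
    angle = rng.normal(loc=0.0, scale=10.0)           # degrees, mostly small rotations
    img = shift(image, (ty, tx), mode="constant", cval=0.0)
    img = rotate(img, angle, reshape=False, mode="constant", cval=0.0)
    augmented.append(img)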

Disclaimer

This digits case is just a simple example to show you what I mean.

",46718,,,,,5/7/2021 11:10,Does distribution of data augmentation parameters matter?,,0,0,,,,CC BY-SA 4.0 27694,1,27697,,5/7/2021 15:26,,4,2574,"

In the past, I have studied different algorithms, e.g. DQN, DDQN, REINFORCE, A3C, PPO, TRPO, and so on. I am doing an internship this summer where I have to use a multi-armed bandit (MAB). I am a bit confused between MAB and the algorithms above.

What are the major differences between MAB and REINFORCE, for instance? What are the major differences between MAB and the other well-known algorithms (DQN, A3C, PPO, etc)?

EDIT

@Kostya's answer is fine, but it would be interesting for me to have a deeper answer to that question. I am still a bit confused.

Question: Do we use the Markov Reward formula $$G_t = R_{t+1} + \gamma R_{t+2} + ... = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$$ the same way in a Multi-Armed bandit problem versus a DQN problem?

",46856,,2444,,5/10/2021 22:23,5/10/2021 22:23,"What are the major differences between multi-armed bandits and the other well-known algorithms (DQN, A3C, PPO, etc)?",,1,1,,,,CC BY-SA 4.0 27695,2,,27679,5/7/2021 16:04,,1,,"

GPT-3 (and the like) doesn't really have any understanding of the semantics or pragmatics involved in the language. However, these models are good at constructing text content similar to the content created by a person (when the texts and the concepts are not too "complicated").

",46540,,2444,,5/8/2021 1:08,5/8/2021 1:08,,,,5,,,,CC BY-SA 4.0 27696,1,27698,,5/7/2021 16:04,,2,1297,"

Why do we use the softmax activation function on the last layer?

Suppose $i$ is the index that has the highest value (in the case when we don't use softmax at all). If we use softmax and take the $i$th value, it would still be the highest value, because $e^x$ is an increasing function, so that's why I am asking this question. Taking argmax(vec) and argmax(softmax(vec)) would give us the same value.

",36107,,2444,,5/7/2021 19:02,5/7/2021 19:02,Why do we use the softmax instead of no activation function?,,1,0,,,,CC BY-SA 4.0 27697,2,,27694,5/7/2021 16:38,,9,,"

You should start with the general definition of Reinforcement Learning problem. And what Markov Decision Process is.

DQN, A3C, PPO and REINFORCE are algorithms for solving reinforcement learning problems. These algorithms have their strengths and weaknesses depending on the details of the underlying problem.

Multi-Armed Bandit is not even an algorithm - it is a subclass of reinforcement learning problems, where your environment (usually) doesn't have any state transitions and your actions are just a single choice from a (usually) fixed and finite set of choices.

Multi-Armed Bandit is used as an introductory problem to reinforcement learning, because it illustrates some basic concepts in the field: the exploration-exploitation tradeoff, policy, target and estimate, learning rate and gradient optimization. All these concepts are basic vocabulary in RL. I recommend reading (and, very importantly, doing all the exercises in) chapter two of the Sutton and Barto book to get familiarized with it.

Edit: since the answer got popular, I'll address the comments and the question edit.

Being a special simplified subset of Markov Decision Processes, Multi Armed Bandit problems allow deeper theoretical understanding. For example, (as per @NeilSlater comment) the optimal policy would be to always go for the best arm. So it makes sense to introduce "regret" $\rho$ - the difference between a potential optimal reward and the actual collected reward by agent following your strategy:

$$\rho(T) = \mathbb{E}\left[T\mu^* -\sum_{t=1}^T\mu(a_t)\right]$$

One can then study asymptotic behavior of this regret as a function of $T$ and devise strategies with different asymptotic properties. As you can see, the reward here is not discounted ($\gamma=1$) - we usually can study the behavior of it as a function of $T$ without this regularization.
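As a small illustration of these concepts (not from the book, just a toy sketch), here is an epsilon-greedy agent on a 10-armed Gaussian bandit, tracking the regret defined above:

import numpy as np

# Toy 10-armed Gaussian bandit with an epsilon-greedy agent, tracking regret.
rng = np.random.default_rng(0)
true_means = rng.normal(size=10)          # mu(a) for each arm
best_mean = true_means.max()              # mu*

eps, T = 0.1, 10_000
counts = np.zeros(10)
estimates = np.zeros(10)                  # sample-average action-value estimates
regret = 0.0

for t in range(T):
    if rng.random() < eps:
        a = rng.integers(10)              # explore
    else:
        a = int(np.argmax(estimates))     # exploit
    reward = rng.normal(true_means[a], 1.0)
    counts[a] += 1
    estimates[a] += (reward - estimates[a]) / counts[a]   # incremental mean update
    regret += best_mean - true_means[a]

print(f"regret after {T} steps: {regret:.1f}")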

Although, there is one famous result that uses discounted rewards - the Gittins index policy (note, though, that they use $\beta$ instead of $\gamma$ to denote the factor).

",20538,,20538,,5/10/2021 21:03,5/10/2021 21:03,,,,5,,,,CC BY-SA 4.0 27698,2,,27696,5/7/2021 17:11,,4,,"

Short answer: Generally, you don't need to do softmax if you don't need probabilities. And using raw logits leads to more numerically stable code.

Long answer: First of all, the inputs of the softmax layer are called logits.

During evaluation, if you are only interested in the highest-probability class, then you can do argmax(vec) on the logits. If you want probability distribution over classes, then you'll need to exponentiate and normalize to 1 - that's what softmax does.

During training, you'd need to have a loss function to optimize. Your training data contains true classes, so you have your target probability distribution $p_i$, which is 1 at your true class and 0 at all other classes. You train the network to produce a probability distribution $q_i$ as an output. It should be as close to the target distribution $p_i$ as possible. The "distance" measure between two probability distributions is called cross-entropy:

$$ H = - \sum p_i \log q_i $$ As you can see, you only need logs of the output probabilities - so the logits will suffice to compute the loss. For example, the keras standard CategoricalCrossentropy loss can be configured to compute it from_logits and it mentions that:

Using from_logits=True is more numerically stable.
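For example (a small sketch, assuming TensorFlow/Keras), the two equivalent ways of computing the loss:

import tensorflow as tf

# Computing the loss directly from logits (no softmax layer in the model).
logits = tf.constant([[2.0, 1.0, 0.1]])         # raw network outputs
targets = tf.constant([[1.0, 0.0, 0.0]])        # one-hot true class

loss_from_logits = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
print(loss_from_logits(targets, logits).numpy())

# Equivalent, but less numerically stable: apply softmax first.
probs = tf.nn.softmax(logits)
loss_from_probs = tf.keras.losses.CategoricalCrossentropy(from_logits=False)
print(loss_from_probs(targets, probs).numpy())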

",20538,,20538,,5/7/2021 17:22,5/7/2021 17:22,,,,4,,,,CC BY-SA 4.0 27699,1,,,5/7/2021 17:55,,0,249,"

I am working on a DIY project in which I want to be able to train a neural network to play Snake.

  • Is a genetic algorithm an efficient way of training a network for this application?

  • For a GA, what should the inputs of the network be? (distance to walls and fruit or the squares in the proximity of the snake head as a vector)

  • What would the difference in efficiency be depending on the algorithm and what limitation does each one have? Are there any other alternatives I should consider?

",46861,,46861,,5/8/2021 15:22,5/8/2021 15:22,Is a genetic algorithm efficient for a snake game?,,1,0,,,,CC BY-SA 4.0 27700,2,,27662,5/7/2021 19:37,,4,,"

No. As of now, there is no clear distinction between AI and a sequential program. The biggest reason is that there is no officially agreed definition of AI. Since AI researchers themselves hold different opinions of AI, and the field is regularly redefined, there is no clear-cut distinction between AI and non-AI.

",46864,,2444,,5/9/2021 11:43,5/9/2021 11:43,,,,2,,,,CC BY-SA 4.0 27701,2,,27681,5/8/2021 1:44,,1,,"

You can easily find a counterexample. Suppose that there are three nodes $s$, $p$, and $goal$ such that $s \rightarrow p \rightarrow goal$, plus a direct edge $s \rightarrow goal$. The real costs are $c(s,p) = 10$, $c(p, goal) = 10$, and $c(s, goal) = 19$. Also, $h_1(s) = 18$, $h_1(p) = 9$, $h_1(goal) = 0$, $h_2(s) = 17$, $h_2(p) = 1$. On the other hand, $h^*(s) = 19$ and $h^*(p) = 10$.

Now, $h_1(s) \leqslant c(s,p) + h_1(p)$ and $h_1(s) \leqslant c(s, goal) + h_1(goal)$, which satisfies the consistency constraint for $h_1$. Also, $h_2(s) \leqslant h^*(s)$ and $h_2(p) \leqslant h^*(p)$, which shows that $h_2$ is admissible.

However, for any node $n$, if we define $h_3(n) = \min(h_1(n), h_2(n))$:

$$h_3(s) = \min(h_1(s), h_2(s)) = 17 \not\leqslant c(s,p) + \min(h_1(p), h_2(p)) = c(s,p) + h_3(p) = 10 + 1 = 11.$$ It means that $h_3$ as a heuristic function is not consistent.

",4446,,2444,,5/9/2021 13:54,5/9/2021 13:54,,,,0,,,,CC BY-SA 4.0 27703,1,27711,,5/8/2021 7:33,,4,769,"

Recently, I was reading Pytorch's official tutorial about Mask R-CNN. When I run the code on colab, it turned out that it automatically outputs a different number of channels during prediction. If the image has 2 people on it, it would output a mask with the shape 2xHxW. If the image has 3 people on it, it would output the mask with the shape 3xHxW.

How does Mask R-CNN change the channels? Does it have a for loop inside it?

My guess is that it has region proposals and it outputs masks based on those regions, and then it thresholds them (it removes masks that have a low predicted probability). Is this right?

",36107,,2444,,5/10/2021 11:21,5/10/2021 11:21,How does Mask R-CNN automatically output a different number of objects on the image?,,1,0,,,,CC BY-SA 4.0 27706,1,,,5/8/2021 10:51,,3,49,"

When reading about convolutional neural networks, I encountered the following sentence from the textbook (page 341), which describes a limitation of the usage of convolution in CNNs.

When a task involves incorporating information from very distant locations in the input, then the prior imposed by convolution may be inappropriate

My interpretations are

  1. when the object is very small, then the convolution may not work well.

  2. when the object is very large, then the components of the object are far away from each other and hence the convolution may not work well.

Which of the interpretations is correct? If both are wrong, then what is the correct interpretation?

If possible, please provide an example to understand it.

",21964,,2444,,5/10/2021 22:40,5/10/2021 22:40,Why might the convolution be inappropriate when the task involves incorporating information from very distant locations in the input?,,0,1,,,,CC BY-SA 4.0 27707,2,,24972,5/8/2021 11:19,,1,,"

No, the images do not need to be the same. You can use different images for the downstream task; however, you need to make some changes in the model definition while loading the model state_dict, since the CNN architecture used for the pretext task 'relative patch location' (assume AlexNet) will expect 2 patches as input.

",46876,,,,,5/8/2021 11:19,,,,0,,,,CC BY-SA 4.0 27708,2,,27678,5/8/2021 12:34,,1,,"

A friend of mine answered this question on another site in a different language; I'll post his answer here:

1. The scaler should be saved in this case.

You do fit_transform in the example the second time you run it, but you should just transform. The scaler should be "fit" once on the training data and not changed afterwards. Then 5 will map to 1 in both cases, and when 10 appears, the scaler will convert it to a value greater than 1 (2.25 for column A).
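For completeness, here is a minimal sketch of this pattern with scikit-learn, using the same toy frames as in the question:

from sklearn.preprocessing import MinMaxScaler
import pandas as pd

df1 = pd.DataFrame({'A': [1, 2, 3, 4, 5], 'B': [10, 40, 60, 80, 100]})
df2 = pd.DataFrame({'A': [1, 2, 3, 4, 5, 10, 10], 'B': [10, 40, 60, 80, 100, 120, 140]})

scaler = MinMaxScaler()
scaler.fit(df1)                 # fit once, on the training data only

print(scaler.transform(df1))    # 5 in column A maps to 1.0
print(scaler.transform(df2))    # 5 still maps to 1.0; the new value 10 maps to 2.25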

2. It's not a fact that this will break the model.

I think everything is clear here, but just in case, let us consider the simplest case. Suppose there is a linear dependence y = 3 * x; we don't know about it, but there is a dataset:

X Y
1 3
2 6
3 9

If we assume that the model has the form y = a * x, then during learning it turns out that a = 3 gives the best score, and when suddenly x = 100 appears, the model has no problem mapping it to 300, even though 100 is far outside the dataset (1, 2, 3).

3. You can check if this breaks the model now.

Suppose there is a feature that takes values 0, 1, 2, 3, 4, 5, 6, 7, 8 ... You are afraid that, after some time, samples with values greater than the current maximum (9, 10, 11, ...) will appear and the model will behave strangely on them. You can try to "simulate" this situation. Split the dataset into train and test so that samples with values from 0 to 6 go into train, and samples with values from 7 to 8 go into test. If accuracy doesn't fail at the test stage, then we can assume that, when all the 0-8 values get into train, the appearance of 9-11 won't break anything.

Also, there is an answer to the second question: https://stackoverflow.com/questions/46744076/machine-learning-normalizing-features-with-no-theoretical-maximum-value

",46666,,,,,5/8/2021 12:34,,,,0,,,,CC BY-SA 4.0 27709,2,,27699,5/8/2021 12:46,,1,,"

Q-learning and genetic algorithms are both good approaches to create an AI that plays Snake.

The one you use depends mostly on how you understand and model your AI's environment.

  • The Q-learning algorithm needs a State (given by the environment), Actions it can take, and Rewards given to it according to how it performs.
  • A genetic algorithm needs intrinsic parameters (which could be the characteristics of the network taking the decisions, or simpler things like leg size / muscle strength if you want to make an AI that runs), a way to merge 2 parents to create children, and a Metric to evaluate how the network performed.

I assume your snake moves on a grid (like the old Snake game, where you only have 4 directions and can't move diagonally).

For the Snake example, here is how I would define it in both cases (careful: this is how I would model the problem; there might be more efficient models):

Q-Learning

  • State : what is in each block of the grid (snake tail, snake head, food)
  • Actions : Up, Down, Right, Left
  • Rewards : Not dying : +0.01; dying : -10; eating piece of food: +1.

Genetic Algorithm

  • Intrinsic Parameters : Parameters of the network that takes decisions (inputs of the network could be whole state of the environment as described in Q-Table).
  • How to merge : ?
  • Metric : Length of the snake or time lived.

I can't say exactly how to make the genetic algorithm work (in particular the merging step), but it should be possible.

Hope it helps. Q-Learning is usually easier so this is what I would use, but feel free to use whatever you want.
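For reference, here is a bare-bones sketch of the tabular Q-learning update described above; env here is a hypothetical Snake environment with reset()/step() methods, not a real library:

import random
from collections import defaultdict

# Minimal tabular Q-learning loop, following the state/action/reward modelling above.
# States must be hashable (e.g. the grid flattened into a tuple).
ACTIONS = ["up", "down", "left", "right"]
alpha, gamma, eps = 0.1, 0.99, 0.1
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    if random.random() < eps:
        return random.choice(ACTIONS)              # explore
    return max(Q[state], key=Q[state].get)         # exploit

def train_episode(env):
    state, done = env.reset(), False
    while not done:
        action = choose_action(state)
        next_state, reward, done = env.step(action)
        best_next = max(Q[next_state].values())
        # Standard Q-learning update
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
        state = next_state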

",46681,,,,,5/8/2021 12:46,,,,0,,,,CC BY-SA 4.0 27710,1,,,5/8/2021 12:58,,0,74,"

According to the triplet loss Wikipedia page:

t-SNE (t-distributed Stochastic Neighbor Embedding) preserves embedding orders via probability distributions, whereas triplet loss works directly on embedded distances.

I don't understand how t-SNE preserves embedding order from the description given on its Wikipedia page.

I am trying to understand this claim in order to translate the page into other languages. I don't have a very quick understanding of maths, so don't be afraid to explain it as if I were a teenager.

",4738,,4446,,5/8/2021 22:53,1/29/2023 2:00,How does t-SNE preserves embedding orders?,,1,0,,,,CC BY-SA 4.0 27711,2,,27703,5/8/2021 13:57,,2,,"

Object detection models usually generate multiple detections per object. Duplicates are removed in a post-processing step called Non-Maximum Suppression (NMS).

The Pytorch code that performs this post-processing is called here in the RegionProposalNetwork class. The filtering loop you've mentioned performs the NMS and applies the score_thresh threshold (although it seems to be zero by default).
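For intuition, here is a minimal numpy sketch of greedy NMS (torchvision's torchvision.ops.nms does the same thing much more efficiently):

import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Illustrative greedy NMS: boxes is (N, 4) in (x1, y1, x2, y2) format."""
    order = scores.argsort()[::-1]          # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou <= iou_threshold]   # drop boxes that overlap too much
    return keep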

",20538,,,,,5/8/2021 13:57,,,,0,,,,CC BY-SA 4.0 27713,2,,27710,5/8/2021 22:52,,0,,"

You just need to look at the method of t-SNE. It "computes probabilities $p_{ij}$ that are proportional to the similarity of objects ${x} _{i}$ and ${x} _{j}$". This means the same as the sentence that you have already quoted:

t-SNE (t-distributed Stochastic Neighbor Embedding) preserves embedding orders via probability distributions, whereas triplet loss works directly on embedded distances.

",4446,,,,,5/8/2021 22:52,,,,0,,,,CC BY-SA 4.0 27714,1,,,5/8/2021 23:08,,0,33,"

I am trying to write a program in which an AI can detect whether a conversation is occurring or not. The AI does not need to transcribe words or extract any meaning from the conversation, simply detect whether one is occurring. A conversation can then simply be defined as having more than one speaker.

Anyway, while searching for past research on the subject, I came across the field of speech diarization, where an AI is trained to distinguish the number of speakers in a conversation. This seems perfect to me; however, while implementing it, I ran into a few troubles. First of all, it wasn't good. I used this tutorial: https://medium.com/saarthi-ai/who-spoke-when-build-your-own-speaker-diarization-module-from-scratch-e7d725ee279 to write a simple program for this task, but I found it wasn't good at determining whether there was a single speaker or two speakers. Also, the times at which it distinguished speakers were all off.

It occurred to me that perhaps speech diarization may not be the best approach for this problem, so I decided to ask here whether this is the best solution, or if there are better ones out there. If it is the best solution, I would love some insight into why it wasn't working for me. Is the tutorial simply not good enough? I used 45-second to 1-minute long clips of just myself speaking, or of other people speaking with me, and it did not work well at all, as I said.

",46889,,,,,5/8/2021 23:08,Speech diarization for a conversation detector: A good idea or not?,,0,2,,,,CC BY-SA 4.0 27716,1,27742,,5/9/2021 12:36,,8,1625,"

I'm not really that experienced with deep learning, and I've been looking at research code (mostly PyTorch) for deep neural networks, specifically GANs, and, in many cases, I see the authors setting bias=False in some layers without much justification. This isn't usually done in a long stack of layers that have a similar purpose, but mostly in unique layers like the initial linear layer after a conditioning vector, or certain layers in an attention architecture.

I imagined there must be a strategy to this, but most articles online seem to confirm my initial perception that bias is a good thing to have available for training in pretty much every layer.

Is there a specific optimization / theoretical reason to turn off biases in specific layers in a network? How can I choose when to do it when designing my own architecture?

",46906,,2444,,5/10/2021 11:44,5/10/2021 23:06,When should you not use the bias in a layer?,,1,1,,,,CC BY-SA 4.0 27718,2,,18290,5/9/2021 12:59,,1,,"

My thoughts.

The short answer is: you can't.

The long answer is that since we're searching for a new definition of a term when removing a necessary (in my opinion) precondition for it to exist, the question becomes "can you make up a new definition for what you intuitively and empirically understand as intention, while removing free will from the picture?". I'm going to give it a shot.

First of all, there's a lot to be said about whether or not the idea or intuition of intention even exists in our collective discourse based on the latent assumption that free will is a thing. As in, before any rigorous definition, even the intuitions encoded in discourse, philosophy, art, and other languages as "intention" could very well be as invalid as the assumption of free will itself.

That being said, I'm a fan of Deleuze's model for people and other entities as machines of input, output and internal state (not his wording, but I paraphrase and in consequence, interpret and alter for the purposes of my point). It's not perfect, but I run to it a lot to answer these questions as I find it very refreshing, often lacking in bias and having good explanatory power compared to the usual romance-foo that dominates these conversations. If that's the case, you could pretty much define intention not as a self-started force but as a product of a much blurrier mechanism, namely the non-deterministic characteristic of this rhizomatic soup. Whether or not an input or output will exist, what kind will it be, what internal state will it find or cause in the machine and the long term dependencies between these interactions seem to me like a convincing enough candidate for the cause of any intuition (or illusion if you like a more cynical vocabulary) of "intention". It's pretty much an emergent symbol we use, assuming the form of a force for the setup and function of new connections, that will in their complication or pure non-determinism spawn even more intention in the network.

tl;dr: Intent could be the most basic expression of the RNG of the universe.

",46906,,2444,,5/9/2021 13:53,5/9/2021 13:53,,,,3,,,,CC BY-SA 4.0 27720,1,,,5/9/2021 16:39,,5,52,"

I just finished reading this paper MoFlow: An Invertible Flow Model for Generating Molecular Graphs.

The paper, which is about generating molecular graphs with certain chemical properties improved the SOTA at the time of writing by a bit and used a new method to discover novel molecules. The premise of this research is that this can be used in drug design.

In the paper, they beat certain benchmarks and even create a new metric to compare themselves against existing methods. However, I kept wondering if such methods were actually used in practice. The same question is valid for any comparable generative models, such as GANs, VAEs or autoregressive generative models.

So, basically, are these models used in production already? If so, do they speed up existing molecule discovery and/or discover new molecules? If not, why not? And are there any remaining bottlenecks to be solved before this can be used?

Any further information would be great!

",46914,,2444,,5/10/2021 11:32,5/10/2021 11:32,Are generative models actually used in practice for industrial drug design?,,0,0,0,,,CC BY-SA 4.0 27723,2,,18290,5/10/2021 0:20,,0,,"

I like both answers but am going to propose something simpler:

  • Intention can be understood as an expression of pursuit of a goal, and thus does not require free will.

Deterministic algorithms can have and pursue goals, even without being consciously "aware" of the goal. Thus, a simple NIM-solving algorithm can be said to have the intention of winning at NIM, even though the goal is not explicitly defined anywhere, but is instead embedded in the simple algorithm itself.

This becomes even more true with neural networks, where, unlike the NIMATRON, goals are typically defined.

",1671,,,,,5/10/2021 0:20,,,,0,,,,CC BY-SA 4.0 27724,2,,20405,5/10/2021 0:52,,0,,"

It's useful to understand HBO's Westworld as an extension of Phillip Dick's Do Androids Dream of Electric Sheep.

Most of Dick's novels involve the nature of reality in relation to perception, and how that informs identity.

A major feature of Dick's book, and BladeRunner, is a form of Turing Test (Voight-Kampff) to which humans are subjected to determine if they are replicants.

At a certain point, Deckard, the hero, begins to question whether he is human or an android. (This is never fully made clear in the book, and Deckard's wife's alienation from him may indicate his non-human status. The film adaptation similarly raises this question over potentially implanted memories, which would mark Deckard as a replicant, even though the director reversed course and later stated otherwise.)

Westworld continues this idea where there are characters who turn out to be definitively androids, such as the Man in Black, who, presumably, has had artificial memories created, and believes he is "real". BladeRunner 2049 also involves this theme, which could be said to be the "unreliability of memory" in relation to experience and identity. Even in mundane circumstances, two humans can remember the same event differently!

  • The point of the Electric Sheep hypothesis is ambiguity—we can only validate our own qualia, and even that is not entirely reliable due to the nature of perception and subjectivity.

The novel ends with Deckard finding a frog in the ashes, and initially thinking it is real. It turns out to be robotic, but Deckard ultimately decides it doesn't really matter.

  • This is important because empathy is the main theme, and altruistic behavior in nature is supported by evolutionary game theory.

The central plot device is that replicants don't have empathy, a design flaw that becomes a "feature not a bug" in that it keeps replicants from banding together to overthrow their oppressors. But the new generation of Nexus androids are intelligent enough to develop empathy naturally.

Dick was a Christian philosopher who worked mainly in popular narrative and believed empathy is a natural function of intelligence sufficiently advanced.

  • If the suffering witnessed by an entity appears real, but we cannot validate that entity's qualia, is it a moral imperative to make a Pascal's Wager?

i.e. err on the side of caution and compassion, just in case the entity is conscious.

  • If altruistic behavior is expressed by an algorithm, is that altruism invalid?
",1671,,,,,5/10/2021 0:52,,,,0,,,,CC BY-SA 4.0 27725,1,27727,,5/10/2021 3:17,,4,243,"

The state-value function for a given policy $\pi$ is given by

$$\begin{align} V^{\pi}(s) &=E_{\pi}\left\{r_{t+1}+\gamma r_{t+2}+\gamma^{2} r_{t+3}+\cdots \mid s_{t}=s\right\} \\ &=E_{\pi}\left\{r_{t+1}+\gamma V^{\pi}\left(s_{t+1}\right) \mid s_{t}=s\right\} \tag{4.3}\label{4.3} \\ &=\sum_{a} \pi(s, a) \sum_{s^{\prime}} \mathcal{P}_{s s^{\prime}}^{a}\left[\mathcal{R}_{s s^{\prime}}^{a}+\gamma V^{\pi}\left(s^{\prime}\right)\right] \tag{4.4}\label{4.4} \end{align}$$

It is given in section 4.1 of the first edition of Sutton and Barto's book (equations 4.3 and 4.4).

I don't understand how equation \ref{4.4} is derived from equation \ref{4.3}. How can I get the product of sums in equation \ref{4.4} from the expectation in equation \ref{4.3}?

",18758,,2444,,5/10/2021 13:45,5/10/2021 13:45,How is the state-value function expressed as a product of sums?,,1,0,,,,CC BY-SA 4.0 27726,1,,,5/10/2021 5:30,,0,176,"

We know about the lineage of datasets. Is there anything called "(ML) model lineage"? What notable work has been done regarding "model lineage"?

There are a few links available on the internet that talk about model lineage. According to one of the articles [1], the definition of model lineage is as follows:

Model Lineage keeps the history of a model: when it was trained, using which data, algorithms, and parameters. This should be automatically generated each time a new version of a model is generated.

The next reference that I could find on the internet is the session by Databricks [2].

Apart from these two links, I could not find many resources or standards regarding model lineage. It would be helpful if anyone could provide more resources or pointers on this topic.

  1. https://blog.tail.digital/en/we-need-to-talk-about-model-lineage/#:~:text=Model%20Lineage%20keeps%20the%20history,of%20a%20model%20is%20generated

  2. https://databricks.com/session_na20/machine-learning-data-lineage-with-mlflow-and-delta-lake

",46930,,46930,,5/12/2021 11:06,5/12/2021 12:53,What is a model lineage?,,0,3,,,,CC BY-SA 4.0 27727,2,,27725,5/10/2021 9:00,,3,,"

A quick review of resolving expectations: If you know that a discrete random variable $X$, drawn from set $\mathcal{X}$ has probability distribution $p(x) = \mathbf{Pr}\{X=x \}$, then

$$\mathbb{E}[X] = \sum_{x \in \mathcal{X}} xp(x)$$

This equation is the core of what is going on when resolving the expectation in your quoted equation.

Resolving the expectation to show how the value function of a state relates to the possible next rewards and future states means summing up all possible rewards and next states. There are two components to the distribution over the single step involved - the policy $\pi(a|s)$, and the state progression $P^a_{ss'}$. As they are both independent probabilities, they need to be multiplied to establish the combined probability of any specific trajectory.

So, looking at only a single trajectory starting from state $s$, the trajectory of selecting action $a$ and ending up in state $s'$ has a probability of:

$$p_{\pi}(a,s'|s) = \pi(a|s) P^a_{ss'}$$

Iterating over all possible trajectories to get the expected value of some function of the end of the trajectory $f(s,a,s')$ looks like this:

$$\mathbb{E}[f(S_t, A_t, S_{t+1})|S_t=s] = \sum_a \pi(a|s)\sum_{s'}P_{ss'}^a f(s,a,s')$$

It is important to note that the sums are nested here, not separately resolved then multiplied. This is standard notation, but you could add some brackets to show it:

$$\mathbb{E}[f(S_t, A_t, S_{t+1})|S_t=s] = \sum_a \pi(a|s)\left(\sum_{s'}\left(P_{ss'}^a f(s,a,s')\right)\right)$$

In the equation from the book, $f(s,a,s') = R_{ss'}^a + \gamma v_{\pi}(s')$
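As a quick numeric sanity check of the nested sums (toy numbers I made up, with 2 actions and 3 successor states):

import numpy as np

gamma = 0.9
pi = np.array([0.6, 0.4])                  # pi(a|s) for the state s we back up
P = np.array([[0.7, 0.2, 0.1],             # P[a, s'] = probability of landing in s'
              [0.1, 0.1, 0.8]])
R = np.array([[1.0, 0.0, 5.0],             # R[a, s'] = expected reward for that transition
              [0.0, 2.0, 1.0]])
V = np.array([0.5, 1.5, 3.0])              # current estimate of V_pi(s')

# V_pi(s) = sum_a pi(a|s) * sum_{s'} P[a,s'] * (R[a,s'] + gamma * V[s'])
v_s = sum(pi[a] * sum(P[a, sp] * (R[a, sp] + gamma * V[sp]) for sp in range(3))
          for a in range(2))
print(v_s)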

",1847,,36821,,5/10/2021 9:54,5/10/2021 9:54,,,,0,,,,CC BY-SA 4.0 27733,2,,20778,5/10/2021 14:33,,1,,"

Use cases I came across:

  1. As mentioned by Saurav, the use of NLP is definitely a use case. Adding to his source, you can check out Two Sigma's Kaggle competition. Going through their careers page, it is evident that the use of NLP is prominent. The cofounder of Kavout, Alex Lu, reconfirms that in https://emerj.com/ai-podcast-interviews/artificial-intelligence-in-stock-trading-future-trends-and-applications/

Two Sigma Challenge Kaggle: https://www.kaggle.com/c/two-sigma-financial-news

  2. ML to determine whether some large market player is rebalancing or liquidating his or her portfolio. Source: https://qr.ae/pGnoGx
  3. I have heard about ML being used in statistical arbitrage (pairs trading) as well. Example case: http://cs229.stanford.edu/proj2019spr/report/26.pdf
  4. Further, I am aware of the use of ML for hedging purposes in a couple of banks. It's called deep hedging. Source: https://www.maths.ox.ac.uk/system/files/attachments/2019%2004%2024%20Deep%20Hedging%20Frontiers%20Imperial%202.1.pdf
",42119,,40434,,5/13/2021 2:53,5/13/2021 2:53,,,,0,,,,CC BY-SA 4.0 27734,1,,,5/10/2021 15:28,,1,35,"

I am reading this paper on image retrieval where the goal is to train a network that produces highly discriminative descriptors (aka embeddings) for input images. If you are familiar with facial recognition architectures, it is similar in that the network is trained with matching / non-matching pairs and triplet loss.

The paper discusses the use of PCA and whitening on the training set of descriptors as a means of further improving the discriminability (second to last block in image below, fig 1a of paper). This all make sense to me.

Where I'm confused is where they replace PCA/whitening with a trainable fully connected layer with bias. I do understand that PCA+whitening is just the composition of two linear transformations (i.e. rotation + (un)squishing in each dimension) and that these are the same as having one linear transformation, but:

  • How is PCA+whitening equivalent to a learnable fully connected layer? Is there some theorem or paper explaining that training a fully connected layer with triplet loss is somehow statistically equivalent to PCA and whitening?
  • Why is there a bias?
",16871,,,,,5/10/2021 15:28,Under what circumstances is a fully connected layer similar to PCA?,,0,0,,,,CC BY-SA 4.0 27737,2,,27626,5/10/2021 20:01,,0,,"

The OP's (i.e. my) assumption is wrong. It is not true that, for almost all random graphs, no permutation makes a pattern visible, as can be seen in Tiago P. Peixoto's paper Bayesian stochastic blockmodeling, p. 4:

The opposite seems to be correct: For almost all random graphs there are permutations such that patterns are visible.

",25362,,,,,5/10/2021 20:01,,,,0,,,,CC BY-SA 4.0 27738,2,,27136,5/10/2021 20:11,,1,,"

After a lot of searching, I think self-taught learning is a category of transfer learning. I think that, when the self-taught learning paper was published (2007), there wasn't any good survey on transfer learning, and, as can be seen from the high citation count of Pan's paper (published in 2009), his paper describes transfer learning in a clear way that did not exist before it.

Also, it is reasonable to consider self-taught learning a transfer learning category, because it actually transfers the knowledge learned from unlabelled data to the supervised task that we want to solve (there is no need for the unlabelled data used for training to follow the same class labels or generative distribution as the labelled data that will be used for the supervised task).

If someone finds something wrong with my answer, or something missing, please tell me.

",36578,,36578,,5/11/2021 10:58,5/11/2021 10:58,,,,0,,,,CC BY-SA 4.0 27740,1,27781,,5/10/2021 20:45,,2,53,"

I'm fairly new to ANNs. I know the general structure, the math, and the algorithms behind them. I figured the logical next step on my journey to fully understanding them would be to implement one myself from scratch, even if it's a fairly small one.

So I'm curious, coming from those who actually work with and deploy these things, are perceptrons/neurons typically implemented as objects with class variables, methods, etc. (kind of like nodes in a Linked List)? Or is there a more practical/memory-conservative way to build them?

",46968,,2444,,5/10/2021 22:31,5/13/2021 11:34,"In practice, are perceptrons typically implemented as objects?",,1,3,,,,CC BY-SA 4.0 27742,2,,27716,5/10/2021 23:06,,4,,"

The most usual case of bias=False is in layers before/after Batch Normalization with no activation functions in between. The BatchNorm layer will re-center the data anyway, removing the bias and making it a useless trainable parameter. Quoting the original BatchNorm paper:

Note that, since we normalize $Wu+b$, the bias $b$ can be ignored since its effect will be canceled by the subsequent mean subtraction
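A typical example of this pattern in PyTorch (just a sketch):

import torch.nn as nn

# Conv -> batch-norm block: the conv bias is dropped because BatchNorm2d immediately
# re-centres the activations (and has its own learnable shift, beta).
block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False),
    nn.BatchNorm2d(128),
    nn.ReLU(inplace=True),
)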

A similar thing happens in the transformers' LayerNormalization and (as far as I understand how conditioning works) in the GANs' conditioning layer - the data gets re-centered, effectively cancelling the bias.

In my experience, that's the most frequent reason to see bias=False, but one can imagine other reasons to remove the bias. As a rule of thumb, I'd say that you don't include bias if you want to "transform zeros to zeros" - things like learned rotations can be an example of such (rather exotic) application.

",20538,,,,,5/10/2021 23:06,,,,0,,,,CC BY-SA 4.0 27743,1,,,5/11/2021 4:08,,1,54,"

Value functions for a given MDP can be learned from experience in at least two ways.

  1. The first way (tabular calculation) is generally used in the case of state spaces that are small enough.

  2. The second way (using parameterized functions) is generally used in the case of large state spaces.

It can be understood from the following statement from section 3.7 of the first edition of Sutton and Barto's book.

The value functions $V^{\pi}$ and $Q^{\pi}$ can be estimated from experience. For example, if an agent follows policy $\pi$ and maintains an average, for each state encountered, of the actual returns that have followed that state, then the average will converge to the state's value, $V^{\pi}$(s), as the number of times that state is encountered approaches infinity. If separate averages are kept for each action taken in a state, then these averages will similarly converge to the action values, $Q^{\pi}(s,a)$ . We call estimation methods of this kind Monte Carlo methods because they involve averaging over many random samples of actual returns. Of course, if there are very many states, then it may not be practical to keep separate averages for each state individually. Instead, the agent would have to maintain $V^{\pi}$ and $Q^{\pi}$ as parameterized functions and adjust the parameters to better match the observed returns. This can also produce accurate estimates, although much depends on the nature of the parameterized function approximator.

Is there any rule of thumb or strict threshold, in the literature, on the cardinality of the state space above which one should use parameterized functions to estimate value functions?

",18758,,18758,,5/11/2021 9:29,5/11/2021 9:29,Is there any thumb rule on the cardinality of state space in order to use the parameterized function to estimate value functions?,,0,0,,,,CC BY-SA 4.0 27746,1,,,5/11/2021 6:38,,1,39,"

I'm taking a course, 'Introduction to AI', and, in one of the tutorials, it was written that, when pruning the game tree using $\alpha$-$\beta$ boundaries, the number of nodes that will be expanded, when using a random ordering of the children (i.e., in the average case), will be $$ O(b^{3d/4}).$$

Since the proof is not in the scope of the course, we weren't given one in the tutorial, so I tried looking for the proof online, but couldn't find anything, and I couldn't come up with one myself. Could someone give me a lead or refer me to some reading material that shows the full proof?

",29989,,2444,,5/13/2021 18:36,5/13/2021 18:36,Why is the number of examined nodes $ O(b^{3d/4})$ in $\alpha$-$\beta$ pruning?,,0,1,,,,CC BY-SA 4.0 27748,1,,,5/11/2021 10:08,,1,36,"

I have two time-series datasets (temperature and speed of the vehicle). I will use Agglomerative Hierarchical Clustering and DTW to cluster both datasets.

I am looking for a technique (like a regression model) to find the relationship between the two clustered time series. I am curious to find the relationship between changes in temperature and vehicle speed. Does anyone have an idea?

",46981,,5763,,5/16/2021 14:51,5/16/2021 14:51,Is there a technique for analyzing the relationship between time-series clusters?,,0,1,,,,CC BY-SA 4.0 27749,1,,,5/11/2021 10:19,,1,169,"

I'm working on a machine learning problem where I need to guess which customers will churn and which of them will continue to be customers.

I have $X_0, X_1, X_2, X_3, X_4, X_5$ and $X_6$ attributes representing if they have credit cards, if they are active customers, if they have money in their accounts, etc. So, according to these multiple $X$ values and the target value $Y$, which is either $0$ or $1$, I need to develop a model that can do the prediction.

I have always worked with only one $X$ attribute and one target value $Y$. Right now, I'm confused about how I should work with multiple $X_n$ values.

Any help is appreciated.

",46984,,2444,,5/13/2021 10:58,5/13/2021 10:58,How to train a machine learning model with multiple attributes and one target value?,,0,6,,,,CC BY-SA 4.0 27754,1,,,5/11/2021 15:56,,2,470,"

The VC dimension is a very important concept in computational/statistical learning theory. However, the first time you read its definition, you may not immediately understand what it really represents or means, as it involves other concepts, such as shattering, hypothesis class, learning and sets. For example, let's take a look at the definition given by Shai Shalev-Shwartz and Shai Ben-David (p. 70)

DEFINITION $6.5$ (VC-dimension) The VC-dimension of a hypothesis class $\mathcal{H}$, denoted $\operatorname{VCdim}(\mathcal{H})$, is the maximal size of a set $C \subset \mathcal{X}$ that can be shattered by $\mathcal{H}$. If $\mathcal{H}$ can shatter sets of arbitrarily large size we say that $\mathcal{H}$ has infinite VC-dimension.

Without knowing what a hypothesis class is, or what the specific $C$, $X$ and $H$ in this definition are, it's difficult to understand this definition. Even if you are familiar with what a hypothesis class is (i.e. a set of sets, i.e. our set of functions/hypotheses/models, e.g. the set of all possible neural networks with a specific topology) and you know that $C$ and $X$ are sets of input points, it should still not be clear what the VC dimesion really is or represents.

So, how would you intuitively and rigorously explain the exact definition of the VC dimension?

Note that I am not asking for answers like

The VC dimension represents the complexity (or expressive power, richness, or flexibility) of your model/hypothesis class.

Of course, this is easy to memorize, but it's quite vague. So, I am not looking for vague/general answers. I am looking for answers that rigorously but intuitively describe the mathematical definition of the VC dimension. For example, you could provide an illustration that shows what the VC dimension is, and, in your example (e.g. the XOR problem cannot be solved by a set of lines), you can describe what $H$, $C$, and $X$ are, and how they relate to the typical concepts you will find in an introductory course to machine learning, but you should not forget to describe the concept of shattering. If you have other ideas of how to illustrate this concept mnemonically, feel free to provide an answer.

",2444,,2444,,5/11/2021 16:49,5/12/2021 8:35,How would you intuitively but rigorously explain what the VC dimension is?,,3,0,,,,CC BY-SA 4.0 27755,2,,27754,5/11/2021 16:08,,0,,"

This may be lacking some rigour, but this is how I have explained it in the past:

The VC Dimension is the maximum number of inputs such that for any subset of these inputs, it is possible for the model to classify the subset as true and the rest as false.

I am aware that this definition still uses the term subset, but in my experience even people who are not familiar with set theory understand the concept of a subset.

The part about this that tends to confuse most people is the notion of a subset that cannot be induced by a model. This can be nicely illustrated by picking a really simple hypothesis class, such as closed intervals, and showing how it cannot express non-contiguous positive inputs, or by using a linear classifier and showing how it cannot express XOR.

",12201,,,,,5/11/2021 16:08,,,,2,,,,CC BY-SA 4.0 27756,2,,27754,5/11/2021 17:04,,2,,"

Shattered set. First we need a concept of a shattered set. I'll work from a shattered set example in Wikipedia adjusting it to your notation.

The statement that $\mathcal{H}$ shatters $C$ means that for every subset $A \subset C$ there is a set $B\in\mathcal{H}$ such that $B$ "separates" $A$ from $C \backslash A$. Writing this formally:

$$\text{shatters}(\mathcal{H},C) = \forall A \subset C\; \exists B\in\mathcal{H}\;.\;A = B\,\cap\, C $$

As an example consider the set $C$ of four points on $\mathcal{X} = \mathbb{R}^2$:

$$C = \left\{(0,0); (0,1); (1,0); (1,1)\right\}$$

And the classification class $\mathcal{H}$ being all possible 2D discs on $\mathcal{X} =\mathbb{R}^2$. (Notice that people use the word "class" here, because we are dealing with "set-of-sets" stuff and that might get tricky). Note that a disk can be represented as the set of all points inside the circle.

Now, it turns out that this $\mathcal{H}$ doesn't shatter $C$. The counterexample would be the subset $A$ of "diagonal" points:

$$A = \left\{(0,0); (1,1)\right\} \subset C$$

There is no 2D disk $B\in\mathcal{H}$ (in the context of learning theory, an element/set of the hypothesis class is a hypothesis, which can also be viewed as a function) that satisfies $A = B\,\cap\, C$. Intuitively, this means that you cannot use a 2D disk to separate your pair of points $A$ from the rest of the set $C$.

The VC dimension of $\mathcal{H}$ is the maximal cardinality $d$ of a set $C$ that it can shatter. For $d=3$, we can provide three points $C'$ for which we can easily find discs that realize all eight possible subsets of $C'$:

$$C' = \left\{(0,0); (0,1); (1,0)\right\}$$

Above we've shown that for a particular set $C$ of $d=4$ points $\text{shatters}(\mathcal{H}, C)$ is false. But we need to prove that it is false for all 4-point sets. To prove this, we consider a convex hull of an arbitrary set of four points $C = \left\{a;b;c;d\right\}$. In general position, the convex hull is either a triangle or a quad:

In the case of a triangle, we choose the outermost three points as a counterexample set $A$. So, with this (labeling) configuration, you cannot find a disk that covers (i.e. classifies) the outermost three points correctly while excluding the point inside the triangle. In the case of a quad, we choose the pair on the longest diagonal. If any three points lie on a single line, then we choose the pair of outermost points.

This sketches a proof that no $d=4$ point set can be shattered by $\mathcal{H}$, but we've shown that there is a $d=3$ set $C'$ that can be, concluding that $\operatorname{VCdim}(\mathcal{H}) = 3$.

Another example is considered on the next page (pg. 71) of the book you've referenced. It again considers 2D plane $\mathcal{X} =\mathbb{R}^2$ and the classification class $\mathcal{H}$ is all possible axis-aligned rectangles. Authors show a configuration $C$ of four points that can be shattered by $\mathcal{H}$ on the left of figure 6.1. And then provide a proof that no $d=5$ points can be shattered by axis-aligned rectangles. Concluding that VC dimension of their $\mathcal{H}$ is four.

Hope these examples help. (BTW, note that it seems that deep learning has quite a bad (i.e. very large) VC dimension, but it still works somehow - which is rather puzzling).

",20538,,2444,,5/12/2021 8:35,5/12/2021 8:35,,,,2,,,,CC BY-SA 4.0 27757,1,,,5/11/2021 18:46,,2,97,"

I am looking for techniques for augmenting very small image datasets. I have a classification problem with 3 classes. Each class consists of 20 different shapes. The shapes are similar between the classes, but the task is to identify which class a shape belongs to. Per shape, I have between 1 and 35 training examples. For two classes, I have 25 training examples per shape, but the number of examples per shape for the third class is usually around 5. Now, what data augmentation schemes do you recommend? Geometric/affine transformations seem like a good place to start. However, I have also thought of applying the Fast Fourier Transform (do the forward transform, add some noise, do the inverse transform). GANs seem infeasible, right? Not enough data, I suspect. In any case, I am grateful for your advice.

",45724,,,,,10/5/2022 12:06,Data augmentation for very small image datasets,,1,0,,,,CC BY-SA 4.0 27759,1,,,5/11/2021 20:55,,1,134,"

I'm trying to make a neural network in pytorch that picks the parameters of a nonlinear function, the radius and (x,y) center of a circle in the example below, based on a sample of values from the nonlinear function.

More concretely, the neural network trained in the code below takes as input 100 (x,y) points on a circle and outputs radius, x_center, y_center of the circle.

I don't consider this a very difficult problem, but the trained neural network doesn't work very well, as you can see from two example plots after the code. How can the code be improved?

And in case this informs your recommendation, the goal is not to fit circles, which no one needs a neural network for. I'm trying to use a neural network to calculate 9 parameters of a nonlinear function that takes a single real-valued input and outputs a complex number, f(t) -> a + b*sqrt(-1). The input to the neural network is 54 complex values, and the output is the 9 parameter values. I am guaranteed that the 54 complex input values can always be well approximated by f(t) with appropriately picked parameters. The parameters can easily be guessed by a human, because each parameter changes the shape of the complex curve in an intuitive way, but I've been unable to get a numerical minimization algorithm to do the curve fitting. The problem is that there are a lot of local minima that the minimization algorithms can get stuck in before reaching the global minimum. The goal of the neural network is to produce a good guess of the 9 parameters, so that a minimization algorithm starts close to the global minimum and converges to it rather than getting stuck at a local minimum.

You probably guessed that I know a bit of math, but I don't know much about machine learning. I was able to pick it up pretty quickly because of my math background, but I am severely lacking in practical experience. I don't know what to do at this point other than randomly changing the number of samples on a circle, the number of example circles, adding more layers to the neural network, adding different types of layers, changing the loss function, changing the learning rate, changing the optimizer, et cetera, but I have no method to my madness.

Post Script
I've found someone who did almost what I need. This paper paired with this github repo used 1,000 samples in a set of 100,000 with 1% failure rate, so there's hope. I have to dig deeper for the innards of their neural network training.

import torch
import numpy as np
import math
import matplotlib.pyplot as plt

#circle parameterized by t, < x(t) , y(t) >
t_parameter = np.linspace(-math.pi, math.pi, 100)

#create random radius,(x,y) center or circle paired with points on circle evaluated at all t in t_parameter
examples = 1000
max_radius = 4
random_rxy = np.random.rand(examples,3)
input_list = []
for i in range(examples):
  r_rand = random_rxy.item(i,0) * max_radius
  x_rand = random_rxy.item(i,1) * 7 - 2 #-2 < x_rand < 5 
  y_rand = random_rxy.item(i,2) - 2 #-2 < y_rand < -1
  x_coordinates = [r_rand*math.cos(t) + x_rand for t in t_parameter]
  y_coordinates = [r_rand*math.sin(t) + y_rand for t in t_parameter]
  input_list.append(x_coordinates + y_coordinates)
input_tensor = torch.Tensor(input_list)
output_tensor = torch.Tensor(random_rxy)

print(input_tensor)
'''
tensor([[ x_0_0,   x_0_1,   ..., x_0_99,   y_0_0,   y_0_1,   ..., y_0_99   ],
        [ x_1_0,   x_1_1,   ..., x_1_99,   y_1_0,   y_1_1,   ..., y_1_99   ],
        [ x_2_0,   x_2_1,   ..., x_2_99,   y_2_0,   y_2_1,   ..., y_2_99   ],
        ...,
        [ x_997_0, x_997_1, ..., x_997_99, y_997_0, y_997_1, ..., y_997_99 ],
        [ x_998_0, x_998_1, ..., x_998_99, y_998_0, y_998_1, ..., y_998_99 ],
        [ x_999_0, x_999_1, ..., x_999_99, y_999_0, y_999_1, ..., y_999_99 ]])
'''
print(output_tensor) #radius, x circle center, y circle center
'''
tensor([[r_0,   x_0,   y_0  ],
        [r_1,   x_1,   y_1  ],
        [r_2,   x_2,   y_2  ],
        ...,
        [r_997, x_997, y_997],
        [r_998, x_998, y_998],
        [r_999, x_999, y_999]])
'''

#define model and loss function.
model = torch.nn.Sequential(
  torch.nn.Linear(200, 200),
  torch.nn.Tanh(),
  torch.nn.Tanh(),
  torch.nn.Linear(200, 3)
)
loss_fn = torch.nn.MSELoss(reduction='mean')

#train model
learning_rate = 1e-4
optimizer = torch.optim.Adagrad(model.parameters(), lr=learning_rate)
for t in range(10000):
  # Forward pass: compute predicted y by passing x to the model.
  output_pred = model(input_tensor)

  # Compute and print loss.
  loss = loss_fn(output_pred, output_tensor)
  if t % 100 == 99:
    print(t, loss.item())
    '''
    99   0.0337635762989521
    199  0.0285916980355978
    299  0.025961756706237793
    399  0.024196302518248558
    499  0.022839149460196495
    ...
    9799 0.004136151168495417
    9899 0.0040830159559845924
    9999 0.004030808340758085
    '''
  
  #typical procedure
  optimizer.zero_grad()
  loss.backward()
  optimizer.step()

print(output_tensor[0].tolist())
print(output_pred[0].tolist())
#[0.7722834348678589, 0.46600303053855896, 0.5080233812332153 ]
#[0.7921068072319031, 0.46946045756340027, 0.49222415685653687]

plt.xlabel('x')
plt.ylabel('y')
r_rand, x_rand, y_rand = output_tensor[0].tolist()
plt.scatter([r_rand*math.cos(t) + x_rand for t in t_parameter],[r_rand*math.sin(t) + y_rand for t in t_parameter],label="Measured Data")
r_rand, x_rand, y_rand = output_pred[0].tolist()
plt.scatter([r_rand*math.cos(t) + x_rand for t in t_parameter],[r_rand*math.sin(t) + y_rand for t in t_parameter],label="Fit Data")
plt.legend(loc='upper right')
plt.tight_layout()
plt.show()

",47000,,47000,,5/12/2021 23:50,5/12/2021 23:50,Neural Network for Picking Parameters of a Nonlinear Function to Data Points,,0,3,,,,CC BY-SA 4.0 27760,2,,27754,5/11/2021 23:21,,2,,"

Trying to explain the idea of VC to some of my colleagues I've discovered quite an intuitive way of laying out the basic idea. Without going through lots of math and notation as I've done in my other answer.

Imagine a following game between two players $\alpha$ and $\beta$ :

  1. First, player $\alpha$ plots $d=4$ points on a piece of paper. She may place the points however she likes.
  2. Next, player $\beta$ marks several of the drawn points.
  3. Finally, player $\alpha$ should draw a circle such that all the marked points are inside the circle, and all the unmarked points - outside. (Points on the boundary considered "inside".)

The player $\alpha$ wins if she can draw such a circle at step #3. The player $\beta$ wins if making such circle is impossible.

If you try to analyze this game then you'll notice that the player $\beta$ has a winning strategy. For any $d=4$ points on a plane, there is always a subset such that player $\alpha$ is unable to draw the required circle. (I don't want to go into the detailed proof of the strategy - it is straightforward, but cumbersome - I've sketched it in my other answer). If we now change the number of points to $d=3$ then the game suddenly becomes winnable by player $\alpha$ - for three points that are not on the same line, any subset can be separated by a circle.

The largest number $d$ at which the game is winnable by player $\alpha$ is called the VC dimension of our classification set. So, in the case of 2D discs (insides of a circle) the VC dimension is 3. If one changes the rules to use rectangles instead of circles, then the maximum number of points winnable by $\alpha$ would be 4 - thus, the VC dimension of rectangular classification sets is 4.

Restoring the mathematical notation, we denote our two-dimensional plane by $\mathcal{X} = \mathbb{R}^2$. $C$ is the subset of cardinality $|C| = d$ that the player $\alpha$ selects. And $\mathcal{H}$ is a class (a "set-of-sets") of the subsets of $\mathcal{X}$ that one should use as a classification boundary. Formally, the statement that the game above is winnable by $\alpha$ can be written as:

$$ \exists\,C\subset\mathcal{X}\,.\,|C|=d \;\wedge\; \forall\, A\subset C\; \exists\, B\in\mathcal{H}\;.\; A = B\,\cap\,C $$

The maximal $d$ at which this statement is true would be the VC dimension. (I actually worked backwards from noticing the alternating quantifiers $\exists\,\forall\,\exists$ in the VC definition - which is typical in game playing, so I worked back from the definition to make the game above.)

",20538,,2444,,5/12/2021 8:00,5/12/2021 8:00,,,,1,,,,CC BY-SA 4.0 27761,1,27762,,5/12/2021 0:06,,7,7733,"

I know that the API is Python-based, but what is the GPT-3 engine itself mostly written in? C? C++? I'm having some trouble finding this info.

",36067,,,,,5/12/2021 2:03,What language is the GPT-3 engine written in?,,1,0,,,,CC BY-SA 4.0 27762,2,,27761,5/12/2021 2:03,,14,,"

The GPT-3 source code hasn’t been released but the creators say it uses the “same model and architecture as GPT-2” (source) with some exceptions.

The GPT-2 source code is written in 100% Python. The model is based on Tensorflow and NumPy which are written using C and C++. My best guess is that GPT-3 is also written in Python using libraries based on C.

",41026,,,,,5/12/2021 2:03,,,,0,,,,CC BY-SA 4.0 27763,1,,,5/12/2021 2:59,,0,50,"

I am considering an approach to evolutionary algorithms, in which instead of maintaining a population of individuals, we maintain a pool of $N$ mutations that can be applied to a base genome. For every possible subset (or many possible subsets) of the mutation pool, we apply that subset to the base genome to produce an individual, then test that individual. We discard mutations for which the population with that mutation performs worse than the population without it. We merge the best-performing mutations into the base genome for the next generation. Then we replace the discarded or merged mutations with new ones.

Is this known under some name? Is it a good idea?

",45906,,,,,5/12/2021 8:40,Is there a name for this approach to evolutionary algorithms?,,1,4,,,,CC BY-SA 4.0 27764,1,,,5/12/2021 3:40,,5,138,"

Newbie here.

I recently read about cognitive architectures (see: https://en.wikipedia.org/wiki/Cognitive_architecture). They are supposed to be modeled after the human mind and represent a promising approach towards artificial general intelligence (AGI).

My question is, however, why haven't these cognitive architectures achieved AGI yet? What are the specific limitations and roadblocks that cognitive architectures face?

",47007,,2444,,5/12/2021 9:03,5/12/2021 9:03,Why can't cognitive architectures achieve general intelligence?,,0,4,,,,CC BY-SA 4.0 27765,2,,27763,5/12/2021 8:40,,2,,"

It's just a genetic algorithm, only your population is a set of instructions that generate a subject to be tested. There are minor differences in the way you apply multiple mutations, so you evaluate groups of them instead of each individually, but for the scoring you effectively single out the commonalities among the worse individuals.

Hard to tell whether it's a good idea; discarding poor mutations based on them being part of a sub-optimal individual might not be as straightforward as it sounds. And where do you get new mutations from?

",2193,,,,,5/12/2021 8:40,,,,5,,,,CC BY-SA 4.0 27766,1,27767,,5/12/2021 10:32,,2,100,"

Consider the following statement from 4.1 Policy Evaluation of the first edition of Sutton and Barto's book.

The existence and uniqueness of $V^{\pi}$ are guaranteed as long as either $\gamma < 1$ or eventual termination is guaranteed from all states under the policy $\pi$.

I have a doubt on the first condition, $\gamma < 1$. If $\gamma < 1$, then it makes our task easy in the sense that $\gamma^k$ becomes (numerically) zero for sufficiently large $k$, but that is entirely an artifact of the hardware architecture. In theory, $\gamma^k$ can never be zero; it can only approach zero.

In this context, how can the condition $\gamma < 1$ assure the existence and uniqueness of $V^{\pi}$ theoretically?

",18758,,2444,,4/18/2022 8:36,4/18/2022 8:36,Is the existence and uniqueness of the state-value function for $\gamma < 1$ theoretical?,,1,0,,,,CC BY-SA 4.0 27767,2,,27766,5/12/2021 11:51,,3,,"

In essence, your question is about the convergence of infinite series. The mathematical discipline that studies such series is hundreds (if not thousands) of years old and has nothing to do with "hardware architecture".

A basic example of an infinite series is the geometric series:

$$ S = 1 + \gamma + \gamma^2 + \gamma^3 + \dots$$

Note that the series is infinite - no one says that $\gamma^k$ "becomes zero at sufficiently large $k$". To compute the infinite sum we represent it as a limit of partial sums: $$ S = \lim_{k\to\infty} ( 1 + \gamma + \gamma^2 + \gamma^3 + \dots + \gamma^k)$$ The partial sum is called a geometric progression and has a well-known closed-form expression: $$ 1 + \gamma + \gamma^2 + \gamma^3 + \dots + \gamma^k = \frac{1-\gamma^{k+1}}{1-\gamma}$$ As a result, the sum of our geometric series is: $$ S = \lim_{k\to\infty} \frac{1-\gamma^{k+1}}{1-\gamma} = \frac{1}{1-\gamma}\quad\text{if}\;|\gamma| < 1$$

In Reinforcement Learning we are dealing with similar infinite series. In particular, when we are talking about returns: $$R_t = r_{t+1} + \gamma r_{t+2} + \gamma^2 r_{t+3} + \dots = \sum_{k=0}^\infty\gamma^kr_{t+k+1}$$ The convergence of this is actually discussed in Sutton and Barto's book:

If $\gamma < 1$, the infinite sum has a finite value as long as the reward sequence $\{r_k\}$ is bounded.

I guess the simplest way to prove that would be to use the direct comparison test. Having $\{r_k\}$ bounded means that there exists a number $C$ such that $|r_k| \leq C$ for all $k$. So we've got a dominating series for the return: $$ \left|\gamma^kr_{t+k+1}\right| \leq C |\gamma^k|$$ And, since the $\sum_k\gamma^k$ series absolutely converges for $0 \leq \gamma < 1$, so does the $\sum_k\gamma^kr_{t+k+1}$ series.
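
For intuition, here is a minimal numerical check of the comparison argument (a plain-Python sketch, not from the book): with any bounded reward sequence, the partial sums of the discounted return stay within the geometric bound $C/(1-\gamma)$ and quickly stop changing.

import random

gamma, C = 0.9, 1.0          # discount factor and reward bound |r_k| <= C
bound = C / (1 - gamma)      # geometric-series bound on the return

partial, total = [], 0.0
for k in range(200):
    r = random.uniform(-C, C)             # any bounded reward sequence
    total += (gamma ** k) * r
    partial.append(total)

print(abs(partial[-1]) <= bound)          # True: partial sums stay within the bound
print(abs(partial[-1] - partial[-50]))    # tiny: the partial sums have converged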

",20538,,20538,,5/12/2021 11:58,5/12/2021 11:58,,,,0,,,,CC BY-SA 4.0 27771,1,27775,,5/12/2021 16:40,,1,67,"

I've been learning Python machine-learning using this project report and the guy who wrote it begins by visualizing his data using various statistical analysis methods: histograms, density plots, box plots, scatter plots, etc.

The problem is that he doesn't explain what this is for. The only detail he provides is that "univariate plots help to understand each attribute" and "multivariate plots help to understand the relationships between attributes."

What would be the reason behind using these plots for ML development? Do they help you to determine which algorithm(s) you should try? If so, how? Can anyone explain the main points or maybe point me to a resource that will help?

",47025,,2444,,5/13/2021 19:47,5/14/2021 21:24,What would be the reason behind using plots (such as box-plots or histograms) for ML development?,,2,0,,,,CC BY-SA 4.0 27772,1,,,5/12/2021 17:49,,1,49,"

I think I have found a somewhat interesting connection between neural networks and another area of mathematics. However, it requires the activation functions in the network to have a bounded - ideally small - domain. For the sake of simplicity, I am restricting this to feedforward networks.

My approach has been the following: Assuming bounded input and weights, a maximal input can be derived. Before each application of an activation function, I thus just scale down the range from the maximal one to the one permitted.

This however causes nearly all weights that are not close (in absolute terms) to the maximal ones to lead to very small outputs of the network, meaning nearly all weight combinations lead to outputs near zero. The network thus has issues learning even simple tasks.

My question, therefore, is: Has anyone ever studied these issues and maybe found a network architecture that works well with this? Or another solution for bounded domains?

",47032,,2444,,5/13/2021 12:08,5/13/2021 12:08,Is there literature on Neural Network with activation functions of bounded domain?,,0,1,,,,CC BY-SA 4.0 27775,2,,27771,5/12/2021 19:44,,1,,"

At a basic level, these kinds of low-dimensional plots where you look at one or two variables at a time can help to give you a sense of what types of relationships you might expect to see, such as linear, non-linear, or periodic relationships, which can steer you toward an appropriate family of models. You wouldn't want to use a linear model to predict data that has highly non-linear relationships, for example, nor would you want to use a monotonic non-linear model to predict a periodic function like a sine wave. Knowing about the general distribution of certain variables can also give you a sense of what statistical assumptions might or might not be met - if a model assumes that data is normally distributed, it might not be appropriate if your histograms suggest otherwise. Statistical analysis can help you determine if the underlying assumptions for certain model classes are or are not met.

",2841,,,,,5/12/2021 19:44,,,,0,,,,CC BY-SA 4.0 27776,1,,,5/13/2021 0:24,,2,2101,"

I am working on a project where I am trying to detect and localize forgeries in images. I am using the CASIA v2 dataset and the UNet model for the task. I have the binary masks of all the images in the CASIA v2 dataset. The metric I am using for the model is the F1 score.

The issue with the model is that it is highly overfitting: the validation loss plateaus at a high value.

Batch size is 128 and Learning rate is 0.000001. Image size is 128 x 128.

Updated graph for batch size 16 with the changes mentioned by @spb is as follows:

I have also tried using a learning rate scheduler to decrease the learning rate (starting with a high learning rate) on plateaus, but that didn't help much.

I am also using the package Albumentations for data augmentation of both the images and their masks. I load the images and the masks, apply the augmentations, and save the augmented images and masks in separate arrays; finally, I extend the original images and masks with the augmented ones. So, technically, I train the model on the original plus the augmented images and masks. The augmentations I am using are:

Augment = A.Compose([
A.VerticalFlip(p=0.5),
A.RandomRotate90(p=0.5),
A.HorizontalFlip(p = 0.5)
])

I have split the dataset into 70% Training, 20% Validation and 10% for testing. Here is a snippet of my model. Updated Code below

def conv2d_block(input_tensor, n_filters, kernel_size = 3, batchnorm = True):
    """Function to add 2 convolutional layers with the parameters passed to it"""
    # first layer
    x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),\
              kernel_initializer = 'he_normal', padding = 'same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)

    # second layer
    x = Conv2D(filters = n_filters, kernel_size = (kernel_size, kernel_size),\
              kernel_initializer = 'he_normal', padding = 'same')(input_tensor)
    if batchnorm:
        x = BatchNormalization()(x)
    x = Activation('relu')(x)

    return x

def get_unet(input_img, n_filters = 16, dropout = 0.1, batchnorm = True):
    """Function to define the UNET Model"""
    # Contracting Path
    c1 = conv2d_block(input_img, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)
    p1 = MaxPooling2D((2, 2))(c1)
    #p1 = Dropout(dropout)(p1)

    c2 = conv2d_block(p1, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)
    p2 = MaxPooling2D((2, 2))(c2)
    #p2 = Dropout(dropout)(p2)

    c3 = conv2d_block(p2, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)
    p3 = MaxPooling2D((2, 2))(c3)
    #p3 = Dropout(dropout)(p3)

    c4 = conv2d_block(p3, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)
    p4 = MaxPooling2D((2, 2))(c4)
    #p4 = Dropout(dropout)(p4)

    c5 = conv2d_block(p4, n_filters * 16, kernel_size = 3, batchnorm = batchnorm)
    p5 = MaxPooling2D((2, 2))(c5)
    #p5 = Dropout(dropout)(p5)

    c6 = conv2d_block(p5, n_filters = n_filters * 32, kernel_size = 3, batchnorm = batchnorm)

    # Expansive Path
    u7 = Conv2DTranspose(n_filters * 16, (3, 3), strides = (2, 2), padding = 'same')(c6)
    u7 = concatenate([u7, c5])
    u7 = Dropout(dropout)(u7)
    c7 = conv2d_block(u7, n_filters * 16, kernel_size = 3, batchnorm = batchnorm)

    u8 = Conv2DTranspose(n_filters * 8, (3, 3), strides = (2, 2), padding = 'same')(c7)
    u8 = concatenate([u8, c4])
    u8 = Dropout(dropout)(u8)
    c8 = conv2d_block(u8, n_filters * 8, kernel_size = 3, batchnorm = batchnorm)

    u9 = Conv2DTranspose(n_filters * 4, (3, 3), strides = (2, 2), padding = 'same')(c8)
    u9 = concatenate([u9, c3])
    u9 = Dropout(dropout)(u9)
    c9 = conv2d_block(u9, n_filters * 4, kernel_size = 3, batchnorm = batchnorm)

    u10 = Conv2DTranspose(n_filters * 2, (3, 3), strides = (2, 2), padding = 'same')(c9)
    u10 = concatenate([u10, c2])
    u10 = Dropout(dropout)(u10)
    c10 = conv2d_block(u10, n_filters * 2, kernel_size = 3, batchnorm = batchnorm)

    u11 = Conv2DTranspose(n_filters * 1, (3, 3), strides = (2, 2), padding = 'same')(c10)
    u11 = concatenate([u11, c1])
    u11 = Dropout(dropout)(u11)
    c11 = conv2d_block(u11, n_filters * 1, kernel_size = 3, batchnorm = batchnorm)

    outputs = Conv2D(1, (1, 1), activation='sigmoid')(c11)
    model = Model(inputs=[input_img], outputs=[outputs])
    return model

Currently I am not using the dropout as it leads to higher validation loss plateaus in my case.

The F1 score and F1 loss I am calculating are as follows

def f1(y_true, y_pred):
    y_pred = K.round(y_pred)
    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
    tn = K.sum(K.cast((1-y_true)*(1-y_pred), 'float'), axis=0)
    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)

    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())

    f1 = 2*p*r / (p+r+K.epsilon())
    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
    return K.mean(f1)

def f1_loss(y_true, y_pred):
    tp = K.sum(K.cast(y_true*y_pred, 'float'), axis=0)
    tn = K.sum(K.cast((1-y_true)*(1-y_pred), 'float'), axis=0)
    fp = K.sum(K.cast((1-y_true)*y_pred, 'float'), axis=0)
    fn = K.sum(K.cast(y_true*(1-y_pred), 'float'), axis=0)

    p = tp / (tp + fp + K.epsilon())
    r = tp / (tp + fn + K.epsilon())

    f1 = 2*p*r / (p+r+K.epsilon())
    f1 = tf.where(tf.is_nan(f1), tf.zeros_like(f1), f1)
    return 1 - K.mean(f1)

I have also tried using other losses, like focal_tversky, but got similar results.

What can be the issue and how can I solve it?

Is it

  1. Issue with my data like presence of outliers
  2. Model related issue
  3. Batch size and Learning rate related issue
  4. Or anything else?

Your help in this regard is really appreciated, as I really need to solve this soon.

",47034,,47034,,5/18/2021 23:20,5/18/2021 23:20,Unet Overfitting for binary segmentation of fake images,,2,0,,,,CC BY-SA 4.0 27778,2,,27757,5/13/2021 6:05,,0,,"

In my opinion, you have around $25\times 20 \times 2 + 5 \times 20 = 1100$ samples, so the list of problems is:

  • Lack of data
  • Imbalance between class 1,2 and class 3

For a simple task, a model with low capacity (a small number of parameters) is more suitable and helps avoid overfitting given the lack of data. Since augmentation is nothing but regularization, and, based on the EfficientNetV2 paper, a small model doesn't need too much regularization, selecting some basic affine and elastic transformations and then applying RandAugment with a small magnitude, mixed with 0.1 dropout, should be enough.

The second problem can be addressed with class weights, i.e. setting a larger scale factor in the loss for the third class, or by duplicating the samples of the third class and then applying strong affine transformations to them.
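
For illustration, a minimal sketch of the class-weight idea in Keras (the weight values and the model/data names are hypothetical placeholders, to be adapted to your setup):

# Hypothetical example: give the under-represented third class (index 2) a larger
# weight in the loss, so errors on it count roughly 5x more than on the other classes.
class_weight = {0: 1.0, 1: 1.0, 2: 5.0}

model.fit(x_train, y_train,
          epochs=50,
          batch_size=32,
          validation_data=(x_val, y_val),
          class_weight=class_weight)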

",41287,,,,,5/13/2021 6:05,,,,0,,,,CC BY-SA 4.0 27779,1,,,5/13/2021 6:17,,1,3267,"

In valid convolution, the size of the output shrinks at each layer. So, after some point, additional layers cannot meaningfully perform convolution. For this reason, same convolution is introduced, where the size of the output remains intact. This is achieved by padding the borders of the input image with enough zeroes.

What happens to the size of output feature map in case of full convolution?

If it remains intact then what is the difference between same convolution and full convolution?

",21964,,,,,5/13/2021 8:37,What is the difference between same convolution and full convolution in terms of feature map size?,,1,0,,,,CC BY-SA 4.0 27780,2,,27779,5/13/2021 8:37,,3,,"

What happens to the size of output feature map in case of full convolution?

It increases.

(In the linked animations, the blue square is the input and the green square is the output.) The first one is valid padding: the blue square is not padded, so the green square is smaller. The third one is same padding: the blue square is padded just enough so that the green square is the same size. The fourth one is full padding: the blue square is padded as much as possible for that size of filter, so the green square is larger.

From here.
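
To make the size difference concrete, for a 1D input of length $n$ and a kernel of length $k$, valid convolution gives an output of length $n-k+1$, same convolution gives $n$, and full convolution gives $n+k-1$. You can check this, for example, with NumPy's 1D convolution modes:

import numpy as np

x = np.arange(5.0)        # input of length n = 5
w = np.ones(3)            # kernel of length k = 3

print(len(np.convolve(x, w, mode='valid')))  # 3  (n - k + 1)
print(len(np.convolve(x, w, mode='same')))   # 5  (n)
print(len(np.convolve(x, w, mode='full')))   # 7  (n + k - 1)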

",45906,,,,,5/13/2021 8:37,,,,0,,,,CC BY-SA 4.0 27781,2,,27740,5/13/2021 11:29,,1,,"

Unless one performs an exhaustive search, it's difficult to answer your question.

However, in the widely used libraries, such as TensorFlow, PyTorch and sklearn, most abstractions (like neural networks and layers) are implemented as classes (see this, this and this examples, respectively), as the main programming language supported by these libraries is Python, which is an object-oriented language (but note that Python also supports other programming paradigms, such as functional programming).

I don't know the statistics, but, from my experience, I would say that OOP (which tends to be intuitive to humans for obvious reasons) is the mostly widely used programming paradigm, as opposed to the (pure) functional paradigm.

However, in general, the programming paradigm used to implement a certain concept probably depends on the language that you want to use. For example, in Haskell, a purely functional programming language, you will probably implement a perceptron as a sequence of functions (see this example). Another example is NumPy, which, although the primary interface is written in Python, under the hood, is primarily implemented in C, a non-OOP language (see e.g. this example, where you see many functions, but no class).

This should also partially answer your other question. In some cases, you will implement a concept using the programming language and paradigm that improves the efficiency of your implementation (e.g. the case of NumPy).

",2444,,2444,,5/13/2021 11:34,5/13/2021 11:34,,,,0,,,,CC BY-SA 4.0 27782,2,,27656,5/13/2021 13:54,,0,,"

After several tests, I've opted for the following configuration:

Approximation mode: riemann trapezoidal
Steps: 1000
Baseline: average (mean) of each column (feature)
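
For reference, assuming this is Captum's IntegratedGradients (an assumption on my side; adapt the names if you use a different library, and note that model, X and target_class below are hypothetical placeholders), the configuration above would look roughly like this:

from captum.attr import IntegratedGradients

ig = IntegratedGradients(model)

# baseline: the mean of each column (feature), broadcast to the batch shape
baseline = X.mean(dim=0, keepdim=True).expand_as(X)

attributions = ig.attribute(X,
                            baselines=baseline,
                            target=target_class,        # class index to explain
                            n_steps=1000,
                            method='riemann_trapezoid')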

Although the global score remains good, there are important variations in the number of correctly explained local samples. For example, one execution returns 200 out of 390 samples explained as expected, but the following execution only 17.

Is there a reason for that? And, is there a way to make the algorithm predictable?

",46817,,,,,5/13/2021 13:54,,,,0,,,,CC BY-SA 4.0 27788,1,,,5/13/2021 20:13,,2,38,"

I am having a difficult time explaining to my boss that what he is trying to achieve may not be possible or within reason. We have a database of 3 million data points per computer, across hundreds of machines, and we record whenever any data point is updated, changed, or removed. Some of these data points are the number of times a computer has been logged in, the names of attached printers, folders on the root of the drive. Some of the data points we do care about; others, like a printer being out of ink, we don't care about, but the same method would also report the printer being offline, which we do care about.

He wants to design an AI that would check these data points and return true or false on whether a data point is significant when it is changed, removed or updated. We are storing the name of the method used to retrieve the data, the current data, all previous data, and the time the change was made. I cannot foresee a way to train on the data efficiently, as we currently don't know which methods retrieve significant data or which values within a method are not significant.

Is it possible to design such an AI without hours of supervised learning?

",47060,,,,,5/14/2021 0:44,Is it possible to design an AI with two inputs and a Boolean output?,,1,0,,,,CC BY-SA 4.0 27789,1,27799,,5/13/2021 22:22,,6,192,"

In Reinforcement Learning: An Introduction, section 9.2 (page 199), Sutton and Barto describe the on-policy distribution in episodic tasks, with $\gamma =1$, as being

\begin{equation} \mu(s) = \frac{\eta(s)}{\sum_{k \in S} \eta(k)}, \end{equation}

where

\begin{equation} \eta(s) = h(s) + \sum_{\bar{s}} \eta(\bar{s}) \sum_a \pi(a|\bar{s})p(s|\bar{s},a), \text{ (9.2)} \end{equation}

is the number of time steps spent, on average, in state $s$ in a single episode.

Another way to represent this is setting $\eta(s) = \sum_{t=0}^{\infty} d_{j,s}^{(t)}$, the average number of visits to $s$ starting from $j$, with $d_{j,s}^{(t)}$ being the probability of going from $j$ to $s$ in $t$ steps under policy $\pi_{\theta}$. In particular, $d_{j,s}^{(1)} = d_{j,s} = \sum_{a \in A}[\pi_{\theta}(a|j)P(s|j,a)]$. This formulation is obtained through the proof of the Policy Gradient Theorem (PGT) on page 325 and some basic stochastic processes theory.

If instead of defining $\gamma = 1$, we prove PGT using any $\gamma \in (0,1)$, we would get

\begin{equation*} \eta_{\gamma}(s) = \sum_{t=0}^{\infty} \gamma^t d_{j,s}^{(t)} \end{equation*}

This is not anymore the average number of visits to $s$. Here comes my first question. Would we still get a $\mu_{\gamma}$ on-policy distribution through the same trick done before? That is \begin{equation} \mu_{\gamma}(s) = \frac{\eta_{\gamma}(s)}{\sum_{k \in S} \eta_{\gamma}(k)}, \end{equation} would be the on-policy distribution?

My second question is related and has to do with the phrase on page 199, that says that

If there is discounting ($\gamma <1$) it should be treated as a form of termination, which can be done simply by including a factor of $\gamma$ in the second term of (9.2).

What the authors mean by "as a form of termination"?

As inferred by my previous question, my conclusion is that having $\gamma < 1$ does not alter $\mu_{\gamma}$ being the on-policy distribution, so I don't get this last comment on page 199.

",38402,,2444,,4/18/2022 8:49,4/18/2022 8:49,"If $\gamma \in (0,1)$, what is the on-policy state distribution for episodic tasks?",,1,3,,,,CC BY-SA 4.0 27790,2,,27788,5/14/2021 0:44,,1,,"

Yes... but you have a lot of hours, possibly days or weeks, of work before you are at that point.

Your bigger problem is apparent from your concerns in the second paragraph. It doesn't sound as though your org has a solid grasp of the problem at the moment. For that reason, it seems that some first steps are in order.

Data Exploration

Begin by collecting all of the data points. I'd recommend that you perhaps begin with some statistical analysis of the data as a whole, including some basic visualization and generating a covariance matrix. From there, begin to look at using some clustering methods to identify possible patterns. Along the way, you will almost certainly go down the path of some dimensionality reduction, either via something like PCA or possibly identifying useless features.

Feature Selection

Based on the exploration work above you should now have a much better understanding of your data and relationships within it. Based on this, it's time to start thinking about how to generate a model that produces the desired output. Frankly, you may discover that something as simple as a random forest classification or even a clustering method such as DBSCAN can be used to initially train and then continuously fit your data, producing either a binary classification with the random forest or a yes/no cluster with the clustering technique.
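
As a rough sketch of what that first pass could look like with scikit-learn (X and y below are hypothetical placeholders for your feature matrix and significance labels; this is not a full solution):

from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Dimensionality reduction for exploration/visualization
X_reduced = PCA(n_components=2).fit_transform(X)

# Unsupervised look at the structure: do "significant" changes cluster together?
clusters = DBSCAN(eps=0.5, min_samples=5).fit_predict(X_reduced)

# Once some labels exist, a simple baseline classifier
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print(clf.score(X_test, y_test))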

Is More Required?

Of course, something more sophisticated might be required, but if it is, you would now have a better handle on the problem and likely be able to intuitively generate a large dataset that could be used for a neural network.

Oh... And as a concluding thought... It might turn out that after all of this analysis the problem cannot be solved with the data that you have. At that point you have to go back and see if there are other data points that could be gathered.

",30426,,,,,5/14/2021 0:44,,,,0,,,,CC BY-SA 4.0 27791,1,,,5/14/2021 4:21,,2,20,"

trying to figure out where to get started with this:

I have a few hundred CT images where certain three-dimensional features in the image (anatomy) are moving in a correlated fashion with a set of radio-opaque markers. These anatomic features can rotate, translate and deform, and the markers can move together or sometimes in relation to each other. The position and motion of the markers are indicative of the position and motion of the anatomy in which they are embedded.

I'd like to develop a model whereby given the positions and motion of the markers I can then predict the position and motion of the full anatomy in which they are embedded and use this for segmentation.

What deep-learning software and techniques are suited for this type of problem?

Thanks!

",47062,,,,,5/14/2021 4:21,Predicting the the motion of a 3D object when the motion of a set of markers is known,,0,0,,,,CC BY-SA 4.0 27793,2,,25178,5/14/2021 11:41,,2,,"

This problem has been formally termed as Delayed MDP (Katsikopoulos & Engelbrecht, 2003)[1] - the actions generated are not instantly applied to the environment and/or the captured observations are not immediately seen by the agent, as expected in an MDP. The delay can either be:

a) A constant delay - Constant Delayed MDP (CDMDP) (Learning and planning in environments with delayed feedback, 2008)[2]

  • The delay introduced by the environment is constant and known. Like the example you provide in the question.

b) A random delay - Random Delayed MDP

  • We don't know what delay to expect, or when the delay will occur. This is more realistic for a real-world case.

TRIED APPROACHES

1. Just ignore the delay

This assumes the CDMDP is an MDP. We then attempt to search for the policy that best ignores the delay

$$ \pi(I_k)={\pi^∗(s)|I_k=(s, a_1,···,a_k)} $$

$k$ is the number of time-steps between acting on the current state, and receiving feedback. This is simple and can give a reasonable result if the delay is small compared to the state transition magnitude [2].

2. Reconstruct the MDP from the CDMDP (Augmented approach)

The corresponding optimal policy for the reconstructed MDP will then be the optimal policy for the CDMDP. This is however made intractable by the size of the action buffer growing with the delay length. It's therefore limited to small constant delays. [1] defines how to do the re-construction.

3. Predicting the delayed observation

This tries to "undelay" the environment by using a predictive model $P$ to approximate what the delayed observation would be, instead of keeping the agent patient until the end of the delay (Learning and planning in envs with delayed feedback, (2008)[5], At Human Speed: Deep RL with action delay, (2018)[6]).

e.g., For a delayed action:

$$ s_{t+i} = P(s_{t + i -1}, a_{t + i - delay}) $$

If the delay is on the observation, the most recent K actions can be used with the most recent observation to approximate the current state[2]. However, this also means the delays are part of the state - it's possible to have a better approach, as next described in the random delays part.

Estimating the current state enables the agent to act conditioned on an estimate of the true state, on which the action is executed. The limitation is that it assumes a constant delay.
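
A minimal sketch of this idea (assuming a known constant delay and a learned one-step model P; the names are illustrative, not taken from any specific paper):

def estimate_current_state(last_delayed_obs, action_buffer, P):
    """Roll the most recent (delayed) observation forward through the actions
    that were applied after that observation was captured."""
    state = last_delayed_obs
    for action in action_buffer:   # the K most recent actions, oldest first
        state = P(state, action)   # one-step prediction of the next state
    return state                   # estimate of the true, undelayed state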


All the above approaches assume a constant delay

Handling random delays

4. Partial Trajectory resampling (PTR)

A method that recursively resamples actions in the buffer, replacing them with on-policy actions. It uses the delay dynamics to simulate their effect on the current policy.

If random delays exist on an MDP, some actions present in the off-policy replay buffer will not influence the later delayed observations and rewards, for a number of timesteps. This will allow generation of on-policy sub-trajectories from the off-policy samples by recursively resampling the most recent actions from the action buffer. It will not invalidate the sub-trajectory.

The resampled action will be valid as long as the observation delay $\omega$ and the action delay $\alpha$ are both greater than the current time step $t$ i.e., No delayed observation depends on a resampled action.

$$ \omega_t + \alpha_t \geq t $$

Illustration of PTR, (3)

The benefits of this method:

  1. Allows discarding of outdated information. E.g., if the delay is such that there are no observations for $5$ timesteps, which then all arrive at step $5$, we can safely discard the transitions of steps $(1,2,3,4)$. This is because this info will be compressed in the most recent observation.

  2. Provides information on the "age" of an observation and the actions applied next

  3. Efficient credit assignment

These factors give partial trajectory resampling better performance compared to concatenating the past $K$ actions with the most recent observation to estimate the current delayed state.

RL with random delays (2020)[3] describes and applies Partial Trajectory Resampling in RL. It's the only work I've managed to come across that handles both random and constant delayed MDPs.

What of the reward delay?

Reward delay, in this context, is referring to the rewards attributed to a delayed observation and action.

A solution to a delayed MDP will involve tackling the credit assignment problem. In this case, this is deciding what to do with the delayed rewards and how to give credit to the delayed actions and states.

This has had different approaches. For instance:

" ...we chose to accumulate the rewards corresponding to the [excessively delayed transitions we drop]. When an observation gets repeated because no new observation is available, the corresponding reward is 0, and when a new observation arrives, the corresponding reward contains the sum of intermediate rewards in lost [dropped] transitions." [3]

Real time sample efficient RL for robots, (2013)[9], on the other hand, uses a decision tree to assign credit in an MDP reconstructed from a CDMDP. The tree learns which delayed actions are relevant.


Related reads:

  1. Real-time Reinforcement Learning (2019)
  2. Thinking while moving: Deep reinforcement learning with concurrent control, (2020)
",40671,,40671,,5/14/2021 11:50,5/14/2021 11:50,,,,0,,,,CC BY-SA 4.0 27794,1,,,5/14/2021 11:53,,0,46,"

Hi, I am new to machine learning. Can anyone suggest an open dataset consisting of both digits and letters (lowercase and uppercase)?

I want images consisting of both digits and letters to train my CNN model and make the model recognize real-time images.

Can anyone please suggest a link to such a dataset?

Thanks in advance

",46872,,,,,5/15/2021 21:45,Cnn for Combination of both digits and letters(small and capital),,1,5,,5/16/2021 10:49,,CC BY-SA 4.0 27795,2,,27776,5/14/2021 13:09,,0,,"

Data augmentation is usually done on the fly during training, meaning before each epoch you apply the random augmentations to the entire dataset; because of the randomness, there will be a different transformation of the same image in each epoch.

Shuffle the dataset before batching in each epoch, so that the epochs will not have minibatches of the same images, which will reduce overfitting. A learning rate of 1e-4 usually works fine for me.

Your UNet is not wide enough: why are you using only 16 filters in the first conv block? The original UNet paper had 64 filters in the first conv block. Also, you have only one convolution block in each layer; why? The original UNet has 2 conv blocks in each layer. I suggest you try the UNet given here: https://github.com/zhixuhao/unet/blob/master/model.py

Dice loss is usually preferred for segmentation; check the code here:

from keras import backend as K

def dice_score(y_true, y_pred, smooth=1e-7):

    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    return (2. * intersection + smooth) / (K.sum(K.square(y_true),-1) + K.sum(K.square(y_pred),-1) + smooth)

def dice_loss(y_true, y_pred):
    return 1-dice_score(y_true, y_pred)
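
For completeness, a dice-based loss like the above is just plugged into the compile step (this assumes a Keras model named model; the optimizer choice is only illustrative):

model.compile(optimizer='adam', loss=dice_loss, metrics=[dice_score])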
",46243,,,,,5/14/2021 13:09,,,,5,,,,CC BY-SA 4.0 27798,2,,27038,5/14/2021 16:39,,2,,"

The cases when we use encoder-decoder architectures are typically when we are mapping one type of sequence to another type of sequence, e.g. translating French to English or in the case of a chatbot taking a dialogue context and producing a response. In these cases, there are qualitative differences between the inputs and outputs so that it makes sense to use different weights for them.

In the case of GPT-2, which is trained on continuous text such as Wikipedia articles, if we wanted to use an encoder-decoder architecture, we would have to make arbitrary cutoffs to determine which part will be dealt with by the encoder and which part by the decoder. In these cases therefore, it is more common to just use the decoder by itself.

",44902,,,,,5/14/2021 16:39,,,,0,,,,CC BY-SA 4.0 27799,2,,27789,5/14/2021 16:49,,5,,"

This question is really getting at the meaning of the discount factor in Markov decision processes. There are actually two, equivalent ways of interpreting the discount factor.

The first is probably what you're familiar with: the $i^{th}$ step receives a reward of $\gamma^{i}r_i$ instead of just $r_i$.

The second, equivalent interpretation, is that at every step, we have a $1-\gamma$ chance of immediately ending the episode. Assuming we don't end the episode, we receive the full $r_i$ reward. This is what Sutton and Barto mean by termination. Note that under the second interpretation, we have a $\gamma^i$ probability of reaching the $i^{th}$ step, hence the connection to the first interpretation.
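
A quick way to convince yourself of the equivalence is a small Monte Carlo check (a plain-Python sketch, not from the book): simulate random termination with survival probability $\gamma$ per step and compare the average undiscounted return to the discounted return.

import random

gamma = 0.9
rewards = [1.0, 2.0, 3.0]                      # r_0, r_1, r_2

discounted = sum(gamma**i * r for i, r in enumerate(rewards))

n, total = 200000, 0.0
for _ in range(n):
    ret = 0.0
    for r in rewards:
        ret += r                               # full reward if the episode survived to this step
        if random.random() > gamma:            # episode ends with probability 1 - gamma
            break
    total += ret

print(discounted, total / n)                   # the two values should roughly agree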

Your definition of $\eta_{\gamma}(s)$ looks correct to me. Alternatively, we can write it in Sutton's and Barto's notation as $$\eta_{\gamma}(s) = h(s) + \sum_{\bar s}\eta_{\gamma}(\bar s)\sum_{a}\gamma\, \pi(a | \bar s) p(s | \bar s, a)$$

",47080,,2444,,5/14/2021 18:23,5/14/2021 18:23,,,,0,,,,CC BY-SA 4.0 27800,1,27803,,5/14/2021 21:20,,-2,350,"

I'm trying to solve a problem in which I need to carry out reinforcement learning with multiple simultaneous actions in a continuous action space. I checked the multi-agent structure; however, in my problem it is difficult to set up a connection between the agents. For instance, they should take actions simultaneously, so there is no way they can be aware of each other's actions. So I decided to go with the multivariate normal solution. Has anybody ever tried that?

First of all, I have difficulties finding the covariance matrix. Since it has to be PSD, I decided to assume that the covariance is zero, something like:

covariance matrix = [[variance1, 0], [0, variance2]]

But that's not everything: the agent doesn't seem to be learning. The problem to be solved by the agent is about resource allocation, so the "mean" cannot be negative; I therefore decided to go with the ReLU activation function for the neural network's output. Surprisingly, the mean is usually zero, so, as you can guess, the weights are being updated in a way that makes the agent do nothing (negative mean). On the other hand, the variances are on the rise. Though I have checked it a million times, there might of course be a flaw in the code of the environment. I just wanted to make sure whether it is mathematically OK to go this way. I checked for papers and found a bunch of them, but they don't seem to be enough. I would appreciate any guidance.

",47083,,47083,,5/14/2021 21:26,5/15/2021 3:32,Policy Gradient ( Advantage actor-critic) for multiple simultaneous continuous actions,,1,0,,,,CC BY-SA 4.0 27801,2,,27771,5/14/2021 21:24,,0,,"

In addition to this answer and given that you were also looking for a resource, I suggest that you read chapter 1 of the book An Introduction to Statistical Learning: with Applications in R, where you can find multiple examples of these plots and explanations of why they can be useful: to understand the relationship between the features and the target variable, which you want to predict.

",2444,,,,,5/14/2021 21:24,,,,0,,,,CC BY-SA 4.0 27802,1,,,5/14/2021 23:43,,2,318,"

I am wondering how to correctly implement the DQN algorithm for two-player games such as Tic Tac Toe and Connect 4. While my algorithm masters Tic Tac Toe relatively quickly, I cannot get great results for Connect 4. The agent learns to win quickly if it gets the chance, but it only plays in the centre. It is unable to detect threats in the first and last columns. I am using DDQN with memory replay. Also, "teacher" and "student" refer to two agents of different strengths, where the teacher is frequently replaced by a new student. Simplified, my algorithm looks as follows:

for i in range(episodes):
    observation = env.reset()
    done = False
    while not done:
        if env.turn == 1:
            action = student.choose_action(observation)
            observation_, reward, done, info = env.step(action)
            loss = student.learn(observation, action, reward, observation_, done))
            observation = observation_
        else:
            action = teacher.choose_action(-observation)
            observation_, reward, done, info = env.step(action)
            observation = observation_

The observation is -1 for player "o", 1 for player "x" and 0 for empty. The agent learns to play as player "x" and through action = teacher.choose_action(-observation) it should find the best move for player "o". I hope that is clear.

The update rule looks as follows:

# Get predicted q values for the actions that were taken
q_pred = Q_eval.forward(state, action)
# Get Q value for opponent's next move
state_ *= -1.
q_next = Q_target.forward(state_, max_action)
# Update rule
q_target = reward_batch - gamma * q_next * terminal
loss = Q_eval.loss(q_pred, q_target)

I am using -gamma * q_next * terminal, because the reward is negative if the opponent wins in the next move. Am I missing anything important, or is it just a question of hyperparameter tuning?

",46837,,46837,,5/15/2021 12:36,5/21/2021 23:41,Update Rule with Deep Q-Learning (DQN) for 2-player games,,0,4,,,,CC BY-SA 4.0 27803,2,,27800,5/15/2021 3:32,,2,,"

Sounds like you have several problems with the way your policy is parametrized.

You don't have to use the multivariate normal distribution. It can work, and probably others have done it already (if not with AAC, surely with DDPG, as it'll be easier to derive the policy gradient there). I won't explain how to use the multivariate normal with either case as there is a much simpler solution for the AAC.

Just have a regular normal distribution over each action. Then, you estimate the mean and variance separately for each action. In other words, it's exactly the same as if you had a single action, but instead of outputting a single mean/variance, you output a vector of mean/variance. Note that this is the same as a multivariate normal distribution, with a diagonal covariance matrix.
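
As a concrete sketch (in PyTorch, with illustrative names and sizes, not a drop-in implementation), a policy head for $k$ continuous actions can simply output $k$ means and learn $k$ log-standard-deviations, which is equivalent to a multivariate normal with a diagonal covariance matrix:

import torch
import torch.nn as nn

class DiagonalGaussianPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())
        self.mean_head = nn.Linear(hidden, act_dim)        # one mean per action
        self.log_std = nn.Parameter(torch.zeros(act_dim))  # one std per action

    def forward(self, obs):
        h = self.body(obs)
        dist = torch.distributions.Normal(self.mean_head(h), self.log_std.exp())
        action = dist.sample()
        log_prob = dist.log_prob(action).sum(-1)  # independent actions: sum the log-probs
        return action, log_prob

# usage: policy = DiagonalGaussianPolicy(obs_dim=8, act_dim=2)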

",47080,,,,,5/15/2021 3:32,,,,3,,,,CC BY-SA 4.0 27804,1,27834,,5/15/2021 5:46,,1,66,"

When discussing why the policy improvement theorem holds when we do Monte Carlo control by updating greedily, it says on page 98 of Sutton and Barto's book (2nd edition) that:

$$ \begin{aligned} q_{\pi_{k}}\left(s, \pi_{k+1}(s)\right) &= q_{\pi_{k}}\left(s, \underset{a}{\arg \max } q_{\pi_{k}}(s, a)\right) \\ &= \max _{a} q_{\pi_{k}}(s, a) \\ & \geq q_{\pi_{k}}\left(s, \pi_{k}(s)\right) \\ & \geq v_{\pi_{k}}(s) \end{aligned} $$

I don't understand why the last inequality is not an equality.

The policy improvement theorem was derived on page 78 for deterministic policies.

So, the $\pi_{k}(s)$ we see in $q_{\pi_k}(s, \pi_{k}(s))$ is a fixed action, call it $a'$. Then, in this case, since $v_{\pi_k}(s)= \sum_a \pi_k(a|s) q_{\pi_k}(s, a) = 1 * q_{\pi_k}(s, a') = q_{\pi_k}(s, a')$ (where the second equality is because the probability of all other actions is 0), shouldn't the last inequality be an equality? When is it possible that we have a greater than relation?

",45562,,2444,,5/16/2021 11:34,5/16/2021 18:05,"When showing that the policy improvement theorem applies to MC control, why is $q_{\pi_{k}}\left(s, \pi_{k}(s)\right) \geq v_{\pi_{k}}(s)$ true?",,1,0,,,,CC BY-SA 4.0 27805,1,,,5/15/2021 8:19,,1,224,"

I am currently using the PPO method from Nvidia's Isaac Gym to train an agent for my robot. Below, you can see the plot that corresponds to a training run.

I know that something is massively wrong with my training process (since the robot does not manage to get a nice policy), so I am trying to understand the training process more with the help of the below values being logged during the problematic training.

So far, I could only find out what this value function means: the reward function stabilizes, thus the value function loss also stabilizes, which means that my robot should explore more, fail, and learn from those failures!

But what about the other two plots, surrogate and mean_noise_std? How should one interpret those values?


The ideal training process

",27593,,2444,,5/16/2021 11:22,5/16/2021 11:22,How should I interpret the surrogate and mean_noise_std plots of training a PPO model (from the Nvidia's Isaac gym)?,,1,0,,,,CC BY-SA 4.0 27807,2,,27805,5/15/2021 14:17,,2,,"

Loss. In the context of Deep Learning and Deep Reinforcement Learning, "training" is just a fancy word for "optimization". You are essentially looking for an optimum point in some huge-dimensional parameter space. For the Reinforcement Learning problems, the optimal point is supposed to maximize the expected return - while most of the Deep Learning classification problems are minimizing some loss.

Most of the modern Deep Learning programming frameworks were created with classification problems in mind, so they call the optimization goals "losses": tf.keras, pytorch. The good news is that you can easily convert a return-maximization problem to a loss-minimization problem by just changing the sign of your target function.

Given all the above, the fact that your loss is going up instead of going down doesn't seem to be a good sign.

Value Function. Given an agent's state, Reinforcement Learning tries to find the best agent's action that maximizes the expected return. The problem is that the expected return is also unknown. Note the difference between "reward" and "return". Rewards $r_t$ are immediate bits of positive/negative reinforcement that the agent receives after taking an action. The "return" $G_t$ is a function of a particular trajectory - it is the discounted sum of future rewards starting from step $t$:

$$G_t = \sum_k \gamma^k r_{t+k+1}$$

The expectation of the return at state $s$ given a policy $\pi$ is called value $V^\pi(s)$:

$$V^\pi(s) = \mathbb{E}_{a\sim\pi}\left[ G_t\right|\left. s_t=s\right]$$

In principle, it is possible to estimate $V^\pi(s)$ by doing Monte Carlo: start from the state $s$ and run many simulations following $\pi$ and average all the returns you've got. For most practical cases this approach is infeasible. Modern Deep Reinforcement Learning algorithms try to reduce computational load by using neural networks to approximate $V(s)$.
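A naive Monte Carlo estimate would look roughly like the sketch below (the env and policy interfaces are hypothetical placeholders here, not any particular library's API):

import numpy as np

def mc_value_estimate(env, policy, start_state, n_episodes=1000, gamma=0.99, max_steps=500):
    returns = []
    for _ in range(n_episodes):
        s = env.reset(start_state)          # hypothetical: restart from the given state
        g, discount = 0.0, 1.0
        for _ in range(max_steps):
            a = policy(s)                   # sample an action from the policy
            s, r, done = env.step(a)        # hypothetical step signature
            g += discount * r
            discount *= gamma
            if done:
                break
        returns.append(g)
    return np.mean(returns)                 # the average return approximates V(start_state)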

Having the value function approximation $V(s)$, one can then try to optimize the policy approximation function $\pi(a|s)$ using the policy gradient update. Such a setup, with one approximator for the policy and another for the value function, defines the "Actor-Critic" family of Reinforcement Learning algorithms.

BTW, it doesn't look like your value function "stabilizes" during training. And it also doesn't change too much if you look at the vertical scale of your plot.

Surrogate Loss. In practice, the policy gradient optimization step above suffers from instabilities. The gradient step tends to change the policy too strongly, which makes the approximation error of the value function too large.

To deal with this, the TRPO/PPO algorithms introduce a kind of clipping of the loss gradient that constrains the policy gradient changes from being too large. The new function that produces these clipped/constrained gradients is called a "surrogate loss".
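A minimal numpy sketch of the PPO clipped surrogate objective (ratio is pi_new(a|s)/pi_old(a|s), advantage is the estimated advantage, and the clipping value 0.2 is just the common default, not something from your logs):

import numpy as np

def ppo_surrogate_loss(ratio, advantage, clip_eps=0.2):
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # PPO maximizes the minimum of the two terms; as a loss we minimize the negative
    return -np.minimum(unclipped, clipped).mean()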

mean_noise_std. This doesn't look like a standard name for anything, but I suspect that this is a version of PPO with artificial noise added for better exploration of the agent.

",20538,,20538,,5/15/2021 19:34,5/15/2021 19:34,,,,5,,,,CC BY-SA 4.0 27809,1,,,5/15/2021 15:25,,1,196,"

I was reading the paper on Generalized Advantage Estimate. It first introduces a generalized form of policy gradient equation without involving $\gamma$ and then it says the following:

We will introduce a parameter $\gamma$ that allows us to reduce variance by downweighting rewards corresponding to delayed effects, at the cost of introducing bias. This parameter corresponds to the discount factor used in discounted formulations of MDPs, but we treat it as a variance reduction parameter in an undiscounted problem.

I know the Monte Carlo estimate of value function is given as:

$$V(s_t)=\sum_{l=t}^\infty \gamma^{l-t}r_l$$

The bootstrapped estimate of value function is given as:

$$V(s_t)=r_t+\gamma V(s_{t+1})$$

(In both equations, $\gamma$ is a discount factor.)

The bootstrapped estimate is biased because it is based on $V(s_{t+1})$, which is usually a biased estimate produced by some estimator, such as a neural network. The Monte Carlo estimate is unbiased because it contains all rewards sampled from the environment. In this case, however, because the agent might take a lot of actions over the course of an episode, it's hard to assign credit to the right action, which means that a Monte Carlo estimate will have high variance.

Does this contradict what the paper says: "a parameter $\gamma$ that allows us to reduce variance"? Or does it simply mean the following: a lower $\gamma$ gives smaller weights to distant future rewards, making the value estimate less dependent on them and thus reducing variance compared to a larger $\gamma$, which makes distant future rewards contribute significantly to the value estimate? So the introduction/existence of $\gamma$ itself does not reduce the variance, but rather gives us a way to increase or decrease it.

",40640,,2444,,5/17/2021 0:23,5/17/2021 0:23,Understanding Generalized Advantage Estimate in reinforcement learning,,1,0,,,,CC BY-SA 4.0 27810,1,,,5/15/2021 18:52,,0,36,"

I have developed, trained and tested an NLP model. It is persisted in a pickle file. The model contains the data preprocessing function that includes text cleaning and new features engineered with word2vec.

With the trained model, I want to make predictions on a new text. The new text data, after preprocessing, won't contain the same engineered features of the training dataset.

Therefore my question is, how can the trained model make predictions on the new dataset as it has different engineered features (different numbers of columns and different columns)?

Should I preprocess the new text data and the training dataset as one dataset?

",47099,,2444,,5/16/2021 10:58,5/16/2021 10:58,Do the training and test datasets need to be equally preprocessed as one whole dataset?,,0,2,,,,CC BY-SA 4.0 27811,1,27812,,5/15/2021 19:25,,2,164,"

I have a question about the $W$ term in the off-policy MC control algorithm on Page 111 of Sutton & Barto. I have also included it in the figure below.

My question: shouldn't the check $A_{t} = \pi(S_{t})$ be made before updating $C(S_{t}, A_{t})$ and $Q(S_{t}, A_{t})$? And, at this point, if $A_{t} \neq \pi(S_{t})$ then the inner loop should exit before updating $Q(\cdot)$. If $A_{t} = \pi(S_{t})$, then shouldn't $W$ be updated to $W = W \frac{1}{b(A_{t}|S_{t})}$ before updating the $Q(s, a)$ and $C(s, a)$ functions?

The algorithm as stated seems problematic to me. For example, suppose the target policy $\pi$ is deterministic and the behavior policy $b$ is stochastic. If in period $T-1$ the behavior policy takes an action that is not consistent with $\pi$, then the importance sampling ratio $\rho_{T-1:T-1} = 0$. However, the algorithm as shown would update $Q(S_{T-1}, A_{T-1})$, since the checks I referred to above don't occur until the end of the inner loop. What am I missing here?

",47100,,2444,,5/16/2021 11:06,5/16/2021 11:12,"In off-policy MC control algorithm by Sutton & Barto, why do we perform a last update when sample action is inconsistent with target policy?",,1,0,0,,,CC BY-SA 4.0 27812,2,,27811,5/15/2021 20:54,,1,,"

I think that this is an intentional subtle detail of the algorithm that ensures the convergence property. The claim in the book is that for any $b$ that provides us with "an infinite number of returns for each pair of state and action" the target policy $\pi$ will converge to optimal.

Imagine now that we have such a bad policy $b$ that its action never aligns with the target policy's action at the last step $t=T-1$ of each generated episode: $A_{T-1} \neq \pi(S_{T-1})$. In that case the weight value will stay $W=1$ and the algorithm will be reduced to (ignoring $t$ indices for the last $S,A,R$ triplet): $$\begin{array}{l} C(S,A) \leftarrow C(S,A) + 1 \\ Q(S,A) \leftarrow Q(S,A) + \frac{1}{C(S,A)}\left[R - Q(S,A)\right] \end{array}$$
Which is just the tabular incremental averaging for the Q values (see for example eq. (2.3)).
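In code, this degenerate case is just a running average of returns (a minimal sketch, assuming Q and C are, say, numpy arrays indexed by state and action):

def incremental_average_update(Q, C, s, a, R):
    # Tabular incremental averaging, cf. eq. (2.3) in Sutton & Barto
    C[s, a] += 1                        # count of visits to (s, a)
    Q[s, a] += (R - Q[s, a]) / C[s, a]  # running mean of observed returns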

If we bailed from the for loop before these updates, then no updates would happen at all, and the authors wouldn't be able to claim convergence to the optimal policy for all of the sampling policies $b$ that satisfy the condition above.

",20538,,2444,,5/16/2021 11:12,5/16/2021 11:12,,,,4,,,,CC BY-SA 4.0 27813,2,,27794,5/15/2021 21:45,,1,,"

As for me, the easiest path to what you are asking for is generating them yourself.

An Example

I usually grab some TTF fonts and put them into a directory so that I have variety for character identification. Begin by importing dependencies and creating a generator function:

from random import randrange
from os import listdir

from PIL import Image, ImageDraw, ImageFont
from IPython.display import display  # display() is available in Jupyter/IPython

WIDTH       =   200
HEIGHT      =   100
MINX        =    20
MINY        =    20
MAXX        =    WIDTH-60
MAXY        =    HEIGHT-60
MINSIZE     =    24
MAXSIZE     =    48

def generate_character_image():
    # Collect all TTF fonts from the font directory for variety
    fonts = [i for i in filter(lambda i: i[-3:] == "ttf", listdir("../data/fonts"))]
    charset = list("0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ")
    while 1:
        img = Image.new('RGB', (WIDTH, HEIGHT), color=(255, 255, 255))
        font = ImageFont.truetype(f'../data/fonts/{fonts[randrange(len(fonts))]}', randrange(MINSIZE, MAXSIZE))
        canvas = ImageDraw.Draw(img)
        where = (randrange(MINX, MAXX), randrange(MINY, MAXY))
        character = charset[randrange(len(charset))]
        coords = canvas.textbbox(where, character, font)
        # coords is (left, top, right, bottom); convert to (x, y, width, height) if preferred
        bounding_box = (coords[0], coords[1], coords[2]-coords[0], coords[3]-coords[1])
        #canvas.rectangle(coords, outline=0, width=1)
        canvas.text(where, character, font=font, fill=(0, 0, 0), anchor="la")
        yield (character, coords, img)

generator = generate_character_image()
for i in range(5):
    character, coords, image = next(generator)
    print(f"{character} is in Bounding Box: {coords}")
    display(image)
    print(image)

You can then use this to generate an infinite number of training samples pretty trivially.

Why This Might Be Good

Personally, I prefer to generate my datasets whenever possible. It allows me to do the following:

  • Train without ever repeating a sample
  • The validation data changes every time (in fact, I'll often not bother with a validation step since every epoch uses different data)
  • Zero storage required

Hope this helps!

",30426,,,,,5/15/2021 21:45,,,,2,,,,CC BY-SA 4.0 27814,2,,27809,5/15/2021 23:48,,1,,"

The bootstrapped estimate is biased because it is based on $V(s_{t+1})$, which is usually a biased estimate produced by some estimator, such as a neural network.

I don't think this statement is totally correct in the context of the paper. Quoting the paper:

Taking $\gamma < 1$ introduces bias into the policy gradient estimate, regardless of the value function’s accuracy. On the other hand, $ \lambda < 1$ introduces bias only when the value function is inaccurate.

So the initial $\gamma$-discounting introduces bias regardless of the accuracy of the $V(s)$.

Same point is reiterated in the footnote on the page 3:

Note, that we have already introduced bias by using $A^{\pi,\gamma}$ in place of $A^\pi$; here we are concerned with obtaining an unbiased estimate of $g^\gamma$, which is a biased estimate of the policy gradient of the undiscounted MDP.

To summarize the above:

  1. the original MDP under consideration is undiscounted
  2. authors introduce $\gamma$-discounting to reduce variance (at cost of bias) of the PG estimate - this happens even if we had accurate $V(s)$
  3. later, authors introduce another $\lambda$-discounting which also controls bias-variance tradeoff - now due to inaccuracy in $V(s)$

As for why a lower $\gamma$ reduces variance (point 2 above) - your explanation is on point: smaller $\gamma$ $\Rightarrow$ smaller weights for distant rewards $\Rightarrow$ less variance (but more bias).
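A toy simulation makes this concrete (the i.i.d. noisy rewards below are only an assumption for illustration, not anything from the paper):

import numpy as np

rng = np.random.default_rng(0)
T, n_rollouts = 100, 10000
rewards = rng.normal(loc=1.0, scale=1.0, size=(n_rollouts, T))  # noisy per-step rewards

for gamma in (0.99, 0.9, 0.5):
    discounts = gamma ** np.arange(T)
    returns = (rewards * discounts).sum(axis=1)  # discounted return of each rollout
    print(gamma, returns.var())                  # the variance shrinks as gamma decreases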

",20538,,,,,5/15/2021 23:48,,,,0,,,,CC BY-SA 4.0 27815,1,,,5/16/2021 3:18,,0,145,"

I created a binary image classification model. The dataset contains about 500K images in each class, with ratio = Train : Validation : Test = 7 : 2 : 1. Total images = 1M

I split my dataset into 5 parts (compute constraints)—5 training subsets, 5 validation subsets, and 1 test subset.

I trained and evaluated my model stage by stage. In the first stage, my model's evaluation accuracy was 65%. I re-fitted it with the 2nd subset and the accuracy was 43%. I did the same with the rest, and my accuracies were: 65%, 43%, 57%, 21%, 30%.

How can I train my model in staged training?

I want to train the model on the different subsets without reinitializing the weights for each training run.

",47105,,40434,,5/25/2021 3:30,5/25/2021 3:30,What is the process working on Tensorflow model.fit()?,,2,1,,5/18/2021 22:02,,CC BY-SA 4.0 27816,2,,27815,5/16/2021 4:19,,1,,"

You can save weights during training by passing a checkpoint callback to the model.fit() method.

# Instantiate your model here
model = create_model() 

# Set model configurations here
model.compile(loss=..., optimizer=..., metrics=...) 

# Set checkpoint path
checkpoint_path = "model_weights.ckpt"

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    monitor='val_loss',
    mode='min',
    save_best_only=True)

# Train the model with the new callback
model.fit(train_images_1, 
          train_labels_1,
          epochs=50, 
          batch_size=batch_size, 
          callbacks=[cp_callback],
          validation_data=(test_images_1, test_labels_1),
          verbose=0)

After finishing training on the 1st dataset, the model weights will be saved in a file called model_weights.ckpt. Before starting training on the next dataset, load the model weights as below:

# Create a new model instance
model = create_model()

# Set model configurations here
model.compile(loss=..., optimizer=..., metrics=...) 

# Set checkpoint path
checkpoint_path = "model_weights.ckpt"

# Load the previously saved weights
model.load_weights(checkpoint_path)

# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath=checkpoint_path,
    save_weights_only=True,
    monitor='val_loss',
    mode='min',
    save_best_only=True)

# Train the model with the new callback
model.fit(train_images_2, 
          train_labels_2,
          epochs=50, 
          batch_size=batch_size, 
          callbacks=[cp_callback],
          validation_data=(test_images_2, test_labels_2),
          verbose=0)

Repeat this for all datasets.

",46243,,46243,,5/16/2021 5:45,5/16/2021 5:45,,,,0,,,,CC BY-SA 4.0 27818,1,,,5/16/2021 8:53,,0,34,"

I'm trying to manipulate the learning rate with tf PiecewiseConstantDecay. I can easily check whether the algorithm switches learning rate values, because one rate is extremely low (1e-20)! However, no setting of the "boundaries" causes the algorithm to switch the learning rate. What am I doing wrong?

step = tf.Variable(0, trainable=False)
boundaries = [100]
values = [1e2, 1e-20]

schedule = tf.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)
lr = 1e-4 * schedule(step)

optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=optimizer, loss='mean_squared_error')

history = model.fit(x = x_train, y = y_train, validation_data=(x_val, y_val), epochs=100,batch_size=32)

Would love your input.

",46260,,,,,5/16/2021 9:50,Tensforflow schedule - does not change boundaries,,1,0,,5/16/2021 10:46,,CC BY-SA 4.0 27820,2,,25615,5/16/2021 9:09,,2,,"

They use the same techniques, but study different problems.

Transfer learning does not necessarily imply that the novel classes have very few samples (as few as 1 per class). Few-shot learning does.

The goal of transfer learning is to obtain transferrable features that can be used for a wide variety of downstream discriminative tasks. One example is using an ImageNet pretrained model as an initialization for any downstream task, but note that we need to train on large amounts of data on those novel classes for the model to be suitable to that task.

Note that you can't finetune an ImageNet classifier on few examples of COCO and expect it to generalize well, because it won't. It wasn't explicitly optimized for few-sample learning.

In few-shot learning, our aim is to obtain models that can generalize from few-samples. This could be transfer learned (with certain changes to the usual transfer learning scenario), or it could be meta-learned. It might not need both, it could just be augmented with data from the novel classes during the test time, and a classifier could be trained from scratch.

",36474,,,,,5/16/2021 9:09,,,,0,,,,CC BY-SA 4.0 27821,2,,27818,5/16/2021 9:50,,1,,"

After 100 steps, the learning rate will switch from 1e-2 to 1e-24.

What exactly is the problem here? Are you confusing steps with epochs?
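If the intent is to have the rate change per optimizer step, a minimal sketch of the usual setup is to pass the schedule object itself to the optimizer (folding your 1e-4 factor directly into the values; the optimizer then re-evaluates the schedule at every step):

import tensorflow as tf

boundaries = [100]      # counted in optimizer steps (batches), not epochs
values = [1e-2, 1e-24]  # i.e. 1e-4 * [1e2, 1e-20]
schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(boundaries, values)

optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)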

",46243,,,,,5/16/2021 9:50,,,,1,,,,CC BY-SA 4.0 27824,1,,,5/16/2021 11:45,,1,304,"

I have been working on coding a CNN in Python from scratch using numpy as a semester project, and I think I have successfully implemented it up to backpropagation in the MaxPool layers. However, my model never seems to converge whenever a Convolutional Layer is added. I am assuming there is a problem with the way I have implemented the backpropagation.

Most examples that I have seen for this implementation either really simplify it by using a one-channel input and a single one-channel filter, or just dive straight into the mathematics, which not only doesn't help but confuses me even more.

Here is the way I have tried to implement both Forward and Backward Propagation for multichannel inputs and outputs based on my own understanding and things I read online.

Forward Prop:

Backward Prop for Filter Gradients:

Backward Prop for Input Gradients:

Kindly point out anything that's wrong here. I have been working on this part for the last 2 days but there has to be a problem because my model never seems to converge.

Thanks!

",47110,,,,,5/16/2021 15:54,Convolutional Layer Multichannel Backpropagation Implementation,,0,2,,,,CC BY-SA 4.0 27827,1,27832,,5/16/2021 15:01,,5,1207,"

I'm interested in artificial neural networks (ANN) and I wonder how big ANNs in practical use are, for example, Tesla Autopilot, Google Translate, and others.

The only thing I found about Tesla is this one:

"A full build of Autopilot neural networks involves 48 networks that take 70,000 GPU hours to train. Together, they output 1,000 distinct tensors (predictions) at each timestep."

It seems like most companies don't publish clear information about their ANN sizes. I really can't find anything detailed on this subject.

Is there any information about the size of big practical/commercial ANNs that include something like the amount of neurons/connections/layers etc.?

I'm looking for a few examples in this scale with more precise information on the size of the neural networks.

",47121,,46540,,5/18/2021 7:15,7/13/2021 3:03,What are the typical sizes of practical/commercial artificial neural networks?,,4,0,,,,CC BY-SA 4.0 27828,2,,22999,5/16/2021 15:12,,1,,"

I understand the confusion and I wanted to refer to this (older post) because the metric really is unclear in the context of the SDNE paper.

Perhaps I can try to explain it for future readers, in hopes that this makes sense. All this is my own interpretation, of course.

SDNE is an autoencoder setup that outputs both node embeddings ($y_i$ vector for focal node $i$) and a reconstruction of the ties of $i$ denoted by $\hat{x}_i$ with the original being $x_i$. Note that $y_i$ is the input to the decoder component, and thus the reconstruction is a function of the embedding. In SDNE, $x_i$ are the inputs and the "labels", hence autoencoder.

Now, the notion of precision comes from information retrieval. However, for networks, the problem setting differs. We do not retrieve documents repeatedly, instead we literally predict an entire adjacency vector (especially if one uses transformer layers and such). For that reason, the "ranking" part needs to be reformulated to make any sense.

Let's take a naive view and see what would make substantive sense. In the context of a reconstructed network, precision should mean the following:

"What percentage of reconstructed ties are in the real network?"

Whereas recall would mean.

"What percentage of real ties in the network are found in the reconstruction?"

So we have our reconstruction $\hat{x_i}$ and our ground truth vector $x_i$ - typically rows of the adjacency matrices of the network, $\hat{X}$ and $X$ respectively. Let's denote these networks as $\hat{X}$ and $X$ as well, since there won't be any confusion (in the paper, the authors distinguish the network $G$, its adjacency matrix $S$ and, finally, the inputs and outputs $X$ as a subset of $S$).

The vectors denote ties between $i$ and $j$. With some abuse of notation, we could write for unweighted networks $(i,j) \in X \Leftrightarrow x_{i,j}=1$

Precision would be: $$\frac{|(i,j) \in \hat{X} \cap (i,j) \in X|}{|(i,j) \in \hat{X}|} \Leftrightarrow \frac{|\{j| x_{i,j}=1 \cap \hat{x}_{i,j}=1\}|}{|\{j| \hat{x}_{i,j}=1\}|}$$

and recall would have the denominator with $x_{i,j}=1$ instead of $\hat{x}_{i,j}=1$.

The only difference to the precision@k metric in the paper comes from the ranking. As mentioned above, it is not immediately apparent from the paper how a reconstruction would yield probabilities that we can use for a rank- especially if ties are binary.

However, SDNE does not predict binary ties, even if these appear in the original graph. Instead, it applies a sigmoid function and thus gets some value that is proportional to the likelihood of a tie between two nodes. Long story short, each element in $\hat{x}_i$ is akin to a probabilistic prediction across possible neighbors. To get the $index(j)$ we can thus rank the values of $\hat{x}_i$ from highest to lowest.

Let the top $k$ of $\hat{x}_i$ be above some cutoff value $t_i(k)$. We can write precision@k as

$$\frac{|\{j| x_{i,j}=1 \cap \hat{x}_{i,j} \geq t_i(k)\}|}{|\{j|\hat{x}_{i,j} \geq t_i(k)\}|}=\frac{|\{j| x_{i,j}=1 \cap \hat{x}_{i,j} \geq t_i(k)\}|}{k}$$

If our network were weighted, we could do a similar ranking for $x_i$. In any case, this solves the first issue.

Now, the main problem comes from the description of $AP(i)$.

Both in the paper, and in the previous answer given, there is an obvious mistake: Where precision@k takes an integer $k$ as parameter, we are to sum over $j$ in $AP(i)$. That is, we are told in the other answer (and in the paper) that

$$AP(i) = \frac{\sum_{j \in S_i} \text{precision@}j(i)}{|S_i|}$$ with $S_i=\{j|x_{i,j}=1\}$

This of course makes no sense. $j$ comes from the node set. Nodes could be numbers, but could also be things like $v_i = $"Apple" and $v_j=$"potato". Obviously, the measure precision@"apple"$(i)$ can not be derived from the above definition.

So, we need to find an interpretation that works. Note first, that the denominator is the number of neighbors of $i$ in the network. Thus, the above sum should maximally yield $|S_i|$.

Furthermore, the authors want to sum over neighbors $j$, employing some sort of precision measure for each. Consequently, whatever is summed up, should sum up to 1 for each $j \in S_i$.

Let's consider an embedding that predicts everything perfectly. Note that then precision@k is $1$ for every $k$. That leaves us with the conclusion that the measure must be

$$\frac{1}{|S_i|}\sum_{j \in S_i} \frac{\sum_k \text{precision@}k(i,j)}{|k|}$$

where $\text{precision@}k(i,j)$ denotes some node wise measure of precision. In any case, the measure collapses to

$$\frac{\sum_k \text{precision@}k(i)}{|k|}$$ for each $k$ where $\text{precision@}k(i)>0$.

",47115,,,,,5/16/2021 15:12,,,,0,,,,CC BY-SA 4.0 27830,1,27835,,5/16/2021 16:04,,2,239,"

In the book An Introduction to Statistical Learning, the authors claim (equation 2.3, p. 19, chapter 2)

$$\mathbb{E} \left[ (Y - \hat{Y})^2 \right] = \left(f(X) - \hat{f}(X) \right)^2 + \operatorname{Var} (\epsilon) \label{0}\tag{0},$$

where

  • $Y = f(X) + \epsilon$, where $\epsilon \sim \mathcal{N}(0, \sigma)$ and $f$ is the unknown function we want to estimate
  • $\hat{Y} = \hat{f}(X)$ is the output of our estimate of $f$, i.e. $\hat{f} \approx f$

They claim that this is easy to prove, but this may not be easy to prove for everyone. So, why is equation \ref{0} true?

",2444,,2444,,5/16/2021 18:06,5/18/2021 9:30,Why is the equation $\mathbb{E} \left[ (Y - \hat{Y})^2 \right] = \left(f(X) - \hat{f}(X) \right)^2 + \operatorname{Var} (\epsilon)$ true?,,2,2,,,,CC BY-SA 4.0 27831,2,,27830,5/16/2021 16:04,,1,,"

Let me try to show this. The only (non-constant) random variable here is $\epsilon$, while $f(X)$ and $\hat{Y} = \hat{f}(X)$ are constant random variables (so their expectations are equal to their only value).

So, we start with the following expression.

\begin{align} \mathbb{E} \left[ (Y - \hat{Y})^2 \right] \tag{1}\label{1} \end{align}

Now, we just apply the distributive property, so \ref{1} becomes

\begin{align} \mathbb{E} \left[ Y^2 - 2Y \hat{Y} + \hat{Y}^2 \right] \tag{2}\label{2} \end{align}

Given the linearity of the expectation, we can write \ref{2} as follows

\begin{align} \mathbb{E} \left[ Y^2 \right] - \mathbb{E} \left[ 2Y \hat{Y} \right] + \mathbb{E} \left[\hat{Y}^2 \right] \tag{3}\label{3} \end{align}

Given that $\hat{Y} = \hat{f}(X)$ is a constant and that we can take constants out of the expectations, we have

\begin{align} \mathbb{E} \left[ Y^2 \right] - 2 \hat{Y} \mathbb{E} \left[ Y \right] + \hat{Y}^2 \tag{4}\label{4} \end{align}

Now, let's replace $Y$ with $f(X) + \epsilon$, to obtain

\begin{align} \mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} \mathbb{E} \left[ f(X) + \epsilon \right] + \hat{Y}^2 \tag{5}\label{5} \end{align}

Now, in the book, they assume that $\epsilon \sim \mathcal{N}(0, \sigma)$, so $\mathbb{E}\left[ \epsilon \right] = 0$ (i.e. the expected value of $\epsilon$ is just the mean of the Gaussian, which is assumed to be zero). So, \ref{5} becomes

\begin{align} &\mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} \left( \mathbb{E} \left[ f(X) \right] + \mathbb{E} \left[ \epsilon \right] \right) + \hat{Y}^2 = \\ &\mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} \left( f(X) + 0 \right) + \hat{Y}^2 = \\ &\mathbb{E} \left[ \left( f(X) + \epsilon \right)^2 \right] - 2 \hat{Y} f(X) + \hat{Y}^2 = \\ &\mathbb{E} \left[ f(X)^2 + 2 f(X) \epsilon + \epsilon^2 \right] - 2 \hat{Y} f(X) + \hat{Y}^2 = \\ &\mathbb{E} \left[ f(X)^2 \right] + \mathbb{E} \left[ 2 f(X) \epsilon \right] + \mathbb{E} \left[ \epsilon^2 \right] - 2 \hat{Y} f(X) + \hat{Y}^2 = \\ & \mathbb{E} \left[ \epsilon^2 \right] + f(X)^2 - 2 \hat{Y} f(X) + \hat{Y}^2 = \\ & \mathbb{E} \left[ \epsilon^2 \right] + \left(f(X) - \hat{Y} \right)^2 \tag{6}\label{6} \end{align}

Now, note that the variance of a random variable $Z$ is defined as

$$\operatorname {Var} (Z)=\mathbb {E} \left[(Z - \mu_Z )^{2}\right]$$

In our case, $\mu_Z$ is zero, so the variance of $\epsilon$ is $\mathbb{E} \left[ \epsilon^2 \right]$, so \ref{6} becomes

\begin{align} \operatorname{Var} (\epsilon) + \left(f(X) - \hat{Y} \right)^2 \\ \tag{7}\label{7} \end{align}

You can also come up with the same result in a different and simpler way, i.e. rewrite $\mathbb{E}\left[ \left( f(X) + \epsilon - \hat f(X) \right)^2 \right]$ as $\mathbb{E}\left[ \left( \left(f(X) - \hat f(X)\right) +\epsilon \right)^2 \right]$, then you apply the distributive property and similar rules that I applied above to derive the same result.
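If you want a quick sanity check of the identity, a small Monte Carlo simulation (with arbitrary illustrative values for $f(X)$, $\hat{f}(X)$ and $\sigma$) gives matching numbers:

import numpy as np

rng = np.random.default_rng(0)
f_X, f_hat_X, sigma = 3.0, 2.5, 0.7       # arbitrary values for illustration
eps = rng.normal(0.0, sigma, size=1_000_000)
Y = f_X + eps

lhs = np.mean((Y - f_hat_X) ** 2)          # empirical E[(Y - Y_hat)^2]
rhs = (f_X - f_hat_X) ** 2 + sigma ** 2    # (f(X) - f_hat(X))^2 + Var(eps)
print(lhs, rhs)                            # the two numbers should be very close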

",2444,,2444,,5/16/2021 19:01,5/16/2021 19:01,,,,1,,,,CC BY-SA 4.0 27832,2,,27827,5/16/2021 16:18,,3,,"

NLP Domain

You can easily find such open-source neural networks in NLP applications that have been published by Companies like Google. For example, in BERT models, you can see the BERT-Base has the following specifications:

BERT-Base, Multilingual Cased: 104 languages, 12-layer, 768-hidden, 12-heads, 110M parameters

You can find more data about other versions of BERT in the same link.

Another example is GPT models, like GPT-3:

All GPT-3 models use the same attention-based architecture as their GPT-2 predecessor. The smallest GPT-3 model (125M) has 12 attention layers, each with 12x 64-dimension heads. The largest GPT-3 model (175B) uses 96 attention layers, each with 96x 128-dimension heads.

Image Processing Domain

Another relevant domain is image processing, with tasks such as image classification and well-known pretrained models such as VGG, ResNet, and Inception. These are widely used for image classification tasks in different companies, and you can find their specifications in many places. For example, for VGG-16, we can see the following:

Speech Processing

Another practical domain is Automatic Speech Recognition, or ASR in short. One of the renowned models in this context is DeepSpeech(2) by the Baidu Research center. For example, you can find some info, like the number of its parameters and its structure, in this github link.

Summing up

Note that one common metric to measure the size of neural networks is "the number of parameters" of the network that need to be learned in the training phase. Hence, you can compare the size of models even across domains by knowing their number of parameters (instead of going into more details about the number of hidden layers and their types). Although sometimes the depth (number of layers) and width (number of neurons in each layer) of the network are very important for the performance and capability of the network.
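If you want to check such numbers yourself, most frameworks report the parameter count directly; for example, with TensorFlow/Keras (assuming the package is installed; weights=None avoids downloading the pretrained weights):

import tensorflow as tf

model = tf.keras.applications.VGG16(weights=None)  # builds the VGG-16 architecture
print(model.count_params())                        # roughly 138 million parameters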

",4446,,4446,,5/17/2021 22:00,5/17/2021 22:00,,,,0,,,,CC BY-SA 4.0 27834,2,,27804,5/16/2021 18:05,,2,,"

You are right that the strict equality $q_\pi(s,\pi(s)) = v_\pi(s)$ is generally true for a deterministic policy $\pi$.

The $\geq$ inequality is also correct, of course, and it could be that the authors' intention was to show that $\pi_{k+1}$ and $\pi_k$ satisfy the condition for the policy improvement theorem:

Let $\pi$ and $\pi'$ be any pair of deterministic policies such that, for all $s\in\mathcal{S}$, $$q_\pi(s,\pi'(s))\geq v_\pi(s)\tag{4.7}$$

So they kinda relaxed the equality to show that the condition is satisfied "word-for-word" for $\pi'=\pi_{k+1}$ and $\pi=\pi_k$.

Although, I must say that they don't seem to be consistent in this - for example, they keep the strict equality in the derivation at the bottom of page 78. So it might as well be that you've discovered a typo in Sutton and Barto.

",20538,,,,,5/16/2021 18:05,,,,0,,,,CC BY-SA 4.0 27835,2,,27830,5/16/2021 18:59,,2,,"

Let's say we have $a$ - constant and $\epsilon \sim \mathcal{N}(0,\sigma)$, then: $$\mathbb{E}\left[(a+\epsilon)^2\right] = \mathbb{E}\left[a^2\right] + 2 \mathbb{E}\left[a\right]\mathbb{E}\left[\epsilon\right] + \mathbb{E}\left[\epsilon^2\right] $$ Expectations of constants are just the constants: $\mathbb{E}[a] = a$ and $\mathbb{E}[a^2] = a^2$
The mean of $\epsilon$ is zero $\mathbb{E}[\epsilon] = 0$. And the expectation of $\epsilon^2$ is its variance: $$ \mathop{\mathrm{Var}}(\epsilon) = \mathbb{E}[\epsilon^2] - \mathbb{E}[\epsilon]^2 = \mathbb{E}[\epsilon^2]$$

Substituting, we get an expression for the original expectation:
$$\mathbb{E}\left[(a+\epsilon)^2\right] = a^2 + \mathop{\mathrm{Var}}(\epsilon) \tag{*}$$

Getting to the expectation in the book, we first substitute the values for $Y$ and $\hat{Y}$: $$\mathbb{E}\left[(Y - \hat{Y})^2\right] = \mathbb{E}\left[((f(X) - \hat{f}(X)) + \epsilon)^2\right]$$

In the book it is assumed that $X$, $f$ and $\hat{f}$ are constant. So we can use the expression (*) with the constant being $a = f(X) - \hat{f}(X)$:

$$\mathbb{E}\left[(Y - \hat{Y})^2\right] = (f(X) - \hat{f}(X))^2 + \mathop{\mathrm{Var}}(\epsilon)$$

",20538,,2444,,5/18/2021 9:30,5/18/2021 9:30,,,,0,,,,CC BY-SA 4.0 27836,1,,,5/16/2021 19:35,,2,28,"

In the problems of NLP and sequence modeling, the Transformer architectures based on the self-attention mechanism (proposed in Attention Is All You Need) have achieved impressive results and now are the first choices in this sort of problem.

However, most of the architectures, which appear in the literature, have a lot of parameters and are aimed at solving rather complicated tasks of language modeling ([1], [2]). These models have a large number of parameters and are computationally expensive.

There exist multiple approaches to reduce the computational complexity of these models, like knowledge distillation or multiple approaches to deal with the $O(n^2)$ computational complexity of the self-attention ([3], [4]).

However, these models are still aimed at language modeling and require quite a lot of parameters.

I wonder whether there are successful applications of transformers with a very small number of parameters (1k-10k) in signal processing applications, where inference has to be performed very quickly, so heavy and computationally expensive models are not allowed.

So far, the common approaches are CNN or RNN architectures, but I wonder whether there are some results, where lightweight transformers have achieved SOTA results for these extremely small models.

",38846,,2444,,12/5/2021 16:58,12/5/2021 16:58,Are there any successful applications of transformers of small size (<10k weights)?,,0,0,,,,CC BY-SA 4.0 27838,2,,14228,5/17/2021 3:13,,1,,"

Answers in the comments are decent, particularly DuttA's.

DuttA gives these, approximately

  • ease of derivative
  • Don't have to worry about ~0 in denominator causing huge gradient
  • But to me the most important is mathematical convenience: someone might easily make the mistake of thinking RMSE is just equal to the difference y−y′ instead of the root of the mean square of y−y′. The answer might then start depending on conventions.
  • In maths (Don't know the reason and might be inaccurate) we mainly work with variances instead of standard deviation.

Here are my reasons for using the MSE instead of RMSE:

  • Doesn't have the sqrt operations, so it computes faster
  • the square root isn't easy: it's computed with Newton's method, so it could take dozens of steps per iteration
  • MSE has all the information of RMSE, there is a 1-to-1 mapping, so no loss (see the short check after this list)
  • the storage of the square root typically doesn't save any memory in IEEE-754, and compute vs. memory is a big thing in complexity
  • tools like gradient boosted machines can "recycle" the squared error computation for a speedup while working at O(n) complexity
  • there is hidden scaling and regularization because many GPU hardware elements are fundamentally 8-bit, so if you can make your code more 8-bit in its guts then you don't have as much in the back-conversion and it runs a lot faster
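As a short check of the 1-to-1 mapping point, here is a toy example with a constant predictor (the data values are arbitrary):

import numpy as np

y_true = np.array([1.0, 2.0, 3.0, 10.0])
candidates = np.linspace(0.0, 10.0, 1001)   # candidate constant predictions
mse = np.array([np.mean((y_true - c) ** 2) for c in candidates])
rmse = np.sqrt(mse)

# sqrt is monotonic, so both criteria rank models identically and pick the same minimizer
print(candidates[mse.argmin()], candidates[rmse.argmin()])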
",2263,,,,,5/17/2021 3:13,,,,0,,,,CC BY-SA 4.0 27839,1,27845,,5/17/2021 3:58,,2,105,"

I have been learning about Style Transfer recently. Style is defined as

The correlation of activations between channels.

I can't seem to understand why that would be true. Intuitively, style seems to be the patterns that exist in one particular channel/image rather than the patterns between channels. When filters in CNNs have different weights for acting as filters for different channels, why do we even expect 2 channels to be correlated? And further, why do we expect the style to be conveyed by it?

I expected a style function that could compare activations in some layer of a CNN condensed into one channel so that an algorithm can search for which activations occur simultaneously and hold style information.

I understand how we are carrying out the operations with the matrix and defining the loss function, what I don't get is why we are assuming style information lies in correlation between channels in the first place.

",47139,,2444,,5/20/2021 0:15,5/20/2021 0:15,"In style transfer, why does the comparison between channels give a good sense of style?",,1,0,,,,CC BY-SA 4.0 27842,2,,27827,5/17/2021 9:22,,1,,"

I hope this helps. Disclaimer: the info is extracted from Computer Vision at Tesla, though additional references may be needed.

",46540,,46540,,5/17/2021 9:41,5/17/2021 9:41,,,,1,,,,CC BY-SA 4.0 27843,1,,,5/17/2021 9:23,,2,274,"

Is there a multi-agent deep reinforcement learning algorithm which is for environments with only discrete action spaces (not hybrid) and has centralized training?

I have been looking at algorithms (A2C, MADDPG, etc.), but I still haven't found any algorithm that provides all of the properties I mentioned (multi-agent + discrete action space + deep learning + centralized training).

I am wondering: if we use an actor network that gets the state as input and the concatenated discrete actions of the agents as output (for example, if each agent has 3 actions and we have 4 agents, the output can be [0,0,1, 0,1,0, 0,0,1, 1,0,0]), would that be a bad idea?

",47148,,,,,5/18/2021 17:04,Is there a multi-agent deep reinforcement learning algorithm which is for environments with only discrete action spaces (Not hybrid)?,,1,2,,,,CC BY-SA 4.0 27844,1,,,5/17/2021 9:35,,2,84,"

I have a dataset, where objects are very close to each other. So, the question is: what is the best approach to label them?

There are two possible options:

  1. mark objects so that they will not intersect (it is difficult, surroundings are not included in the label area)
  2. mark a larger area of objects, but labels will intersect

What is more practical?

",46587,,46587,,5/19/2021 20:40,5/20/2021 8:01,Is intersection of labels acceptable in computer vision?,,1,3,,,,CC BY-SA 4.0 27845,2,,27839,5/17/2021 12:21,,2,,"

When the original Neural Transfer paper was published, your question stayed unanswered for a while. The reason why the Gram matrix represents artistic style was not entirely clear. A satisfactory (in my opinion) explanation came with the "Demystifying Neural Style Transfer" paper.

The basic idea is that you cannot just directly compare activations for two images. The spatial positions of various features are different for the source and target image, so you should somehow get rid of the positional information of the activations and compare their distributions across the whole image. The goal of the style transfer task is thus to make the two distributions as close as possible. One of the possible measures of distance between two distributions $P$ and $Q$ is the Maximum Mean Discrepancy (MMD):

$$\text{MMD}^2(P,Q) = \left\Vert \mathbb{E}_{P}[\phi(X)] - \mathbb{E}_{Q}[\phi(Y)]\right\Vert^2$$

With $\phi(\cdot)$ being a feature function - in our case it would be the NN activations in a particular layer. The next step would be to apply a "kernel trick" to the MMD, representing it through a kernel $k(x,y)$

$$\text{MMD}^2(P,Q) = \mathbb{E}[k(x_i,x_j)] + \mathbb{E}[k(y_i,y_j)] - 2 \mathbb{E}[k(x_i,y_j)]$$

The Gram matrix of the original style transfer corresponds to the squared dot-product kernel $k(x,y) = (x^Ty)^2$: $$\text{MMD}^2(P,Q) = \mathbb{E}[(x^T_ix_j)^2] + \mathbb{E}[(y^T_iy_j)^2] - 2 \mathbb{E}[(x^T_iy_j)^2] = \left\Vert G^x - G^y\right\Vert_F^2 $$

Where $G^x$ and $G^y$ are the Gram matrices for activations and $\Vert\cdot\Vert_F$ is the Frobenius norm. The last equality gives a bit of trouble to those not used to the kernel trick, so I'll expand it (assuming Einstein summation convention):

$$\begin{array}{l} \mathbb{E}[(x^T_ix_j)^2] + \mathbb{E}[(y^T_iy_j)^2] - 2 \mathbb{E}[(x^T_iy_j)^2] = \\ = x_{ik}x_{jk}x_{im}x_{jm} + y_{ik}y_{jk}y_{im}y_{jm} - 2 x_{ik}x_{jk}y_{im}y_{jm} \\ = (x_{ik}x_{im} - y_{ik}y_{im})(x_{jk}x_{jm} - y_{jk}y_{jm}) \\ = (G^x_{km} - G^y_{km})(G^x_{km} - G^y_{km}) \\ = \left\Vert G^x - G^y\right\Vert_F^2 \end{array} $$
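For concreteness, here is a minimal numpy sketch of the Gram-matrix style loss for a single layer (the normalization constants used in the original paper are omitted):

import numpy as np

def gram_matrix(activations):
    # activations: (C, H, W) feature map from one CNN layer
    C = activations.shape[0]
    F = activations.reshape(C, -1)   # flatten the spatial positions away
    return F @ F.T                   # (C, C) channel-by-channel correlations

def style_loss(act_source, act_target):
    G_s, G_t = gram_matrix(act_source), gram_matrix(act_target)
    return np.sum((G_s - G_t) ** 2)  # squared Frobenius norm of the difference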

The authors of the 1701.01036 also tried different kernel functions $k(x,y)$ getting more interesting style transfer results.

",20538,,20538,,5/17/2021 20:38,5/17/2021 20:38,,,,2,,,,CC BY-SA 4.0 27847,2,,27815,5/18/2021 12:43,,0,,"

If you reinitialize your model weights before training it on a new subset, you erase everything it learned before; is that what you want? If not, saving your model after each subset and loading it before training on the next subset isn't good practice either, because your model will see one subset of samples many times and then move on to the next ones without ever seeing these samples again.

A better solution is to load batches on the fly. This can be done using Keras generators. It allows you to load a batch and do a gradient descent step without changing the model.fit call.

It will be a bit slower but it will work.
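A minimal sketch of such a generator using tf.keras.utils.Sequence (load_sample and the path/label variables are hypothetical placeholders for however you store your data):

import math
import numpy as np
import tensorflow as tf

class OnTheFlyBatches(tf.keras.utils.Sequence):
    def __init__(self, sample_paths, labels, batch_size=32):
        self.sample_paths, self.labels, self.batch_size = sample_paths, labels, batch_size

    def __len__(self):
        # number of batches per epoch
        return math.ceil(len(self.sample_paths) / self.batch_size)

    def __getitem__(self, idx):
        lo, hi = idx * self.batch_size, (idx + 1) * self.batch_size
        # load_sample is a placeholder for your own loading/preprocessing function
        x = np.stack([load_sample(p) for p in self.sample_paths[lo:hi]])
        y = np.asarray(self.labels[lo:hi])
        return x, y

# model.fit(OnTheFlyBatches(sample_paths, labels), epochs=10)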

",47183,,,,,5/18/2021 12:43,,,,0,,,,CC BY-SA 4.0 27848,2,,27776,5/18/2021 13:00,,1,,"

I do not understand why you say that your model is overfitting. Overfitting occurs when the validation loss starts increasing after decreasing. Here it seems that your model has reached its potential and cannot improve anymore. What I would recommend here is to make your model bigger: add filters, increase the depth. Also consider trying transfer learning; it provides a common base for many tasks.

",47183,,,,,5/18/2021 13:00,,,,4,,,,CC BY-SA 4.0 27849,1,27856,,5/18/2021 13:57,,6,2269,"

I have seen this question asked primarily in the context of continuous action spaces.

I have a large action space (~2-4k discrete actions) for my custom environment that I cannot reduce down further:

I am currently trying DQN approaches, but I was wondering whether, given the large action space, policy gradient methods are more appropriate, and whether they are appropriate for large discrete action spaces as in my scenario above. I have seen answers to this question with regard to large continuous action spaces.

Finally - I imagine there isn't a simple answer to this but: Does this effectively mean DQN will not work?

",34530,,2444,,5/18/2021 22:01,5/18/2021 22:01,Are policy gradient methods good for large discrete action spaces?,,1,2,,,,CC BY-SA 4.0 27850,1,27946,,5/18/2021 14:13,,1,181,"

When I try to run Amidar even without RL code, I cannot get the environment to move immediately. It takes about 100 steps before the game actually starts moving. I use the following simple code to display some images and print some actions (I always try to do the same action, namely going up):

import gym
import matplotlib.pyplot as plt

env = gym.make('Amidar-v0')
env.reset()

for i in range(1000):
    action = 2
    next_state, reward, terminated, info = env.step(action) # always take the same action (UP)
    print(f"Timestep {i}")
    print(next_state.shape)
    print(reward)
    print(action)
    print(info)
    plt.imshow(next_state)
    plt.show()

When running this code, it takes until about step 85 before the environment starts to move. After that, each step, it moves until the agent is hit by the enemy. Then the environment restarts in the start state, and it takes quite some time before it starts to move again. I have tried doing 'FIRE' as my first action; however, this is not working since it also takes a while before the environment starts moving. Because of this, my buffer is almost always filled with the same images and hence my network isn't learning anything. How do I get this environment to move immediately?

",47186,,37607,,5/24/2021 21:27,5/24/2021 21:27,Why does the Atari Gym Amidar environment only move after a certain number of episodes?,,1,2,,5/25/2021 10:02,,CC BY-SA 4.0 27851,1,,,5/18/2021 14:14,,1,10,"

As far as I recall, in Deep Learning, batch normalization normalizes each layer's activations as a Gaussian over every batch. If so, let $x$ be the input and $z_i$ the activation in the $i$-th layer: $p(z_i)$ becomes a Gaussian with batch-norm. Right?

Does this constraint affect $p(z_i|x)$?

",34019,,,,,5/18/2021 14:14,Does batch normalization affects the possible solution distribution?,,0,0,,,,CC BY-SA 4.0 27852,2,,27827,5/18/2021 14:22,,0,,"

The size of the model depends on the domain. I am currently working with a model that is used for real time inference on an embedded device. Speed of computation is critical.

The model size is a 5 layer CNN, about 700k parameters and it's about 12MB in size on disk.

",47187,,,,,5/18/2021 14:22,,,,0,,,,CC BY-SA 4.0 27854,1,,,5/18/2021 16:36,,1,204,"

Let's start with a typical definition of the VC dimension (as described in this book)

Definition $3.10$ (VC-dimension) The $V C$ -dimension of a hypothesis set $\mathcal{H}$ is the size of the largest set that can be shattered by $\mathcal{H}$ : $$ \operatorname{VCdim}(\mathcal{H})= \max \left\{m: \Pi_{\mathcal{H}}(m)=2^{m}\right\} $$

So, if there exists some set of size $d$ that $\mathcal{H}$ can shatter and it cannot shatter any set of size $d+1$, then the $\operatorname{VCdim}(\mathcal{H}) = d$.

Now, my question is: why would we be just interested in the existence of some set of size $d$ and not all sets of size $d$?

For instance, if you consider one of the typical examples that are used to illustrate the concept of the VC dimension, i.e. $\mathcal{H}$ is the set of all rectangles, then we can show that $\operatorname{VCdim}(\mathcal{H}) = d = 4$, given that there's a configuration of $d=4$ points that, for all possible labellings of those points, there's a hypothesis in $\mathcal{H}$ that correctly classifies those points. However, we can also easily show that, if the 4 points are collinear, there's some labelling of them (i.e. the 1st and 3rd are of colour $A$, while the 2nd and 4th are of colour $B \neq A$) that a rectangle cannot classify correctly.

So, the class of all rectangles can shatter some sets of points, but not all, so we would need another class of hypotheses to classify all sets of four points correctly. The VC dimension does not seem to provide any intuition on which set of classes would do the trick.

So, do you know why the VC dimension wasn't defined for all configurations of $d$ points? Was this just a need of Vapnik and Chervonenkis for the work they were developing (VC theory), or could have they defined it differently? So, if you know the rationale behind this specific definition, feel free to provide an answer. References to relevant work by Vapnik and Chervonenkis are also appreciated.

",2444,,2444,,11/28/2021 12:20,4/27/2022 14:04,Why was the VC dimension not defined for all configurations of $d$ points?,,2,1,,,,CC BY-SA 4.0 27855,2,,27843,5/18/2021 17:04,,1,,"

A natural policy to act in an environment with discrete action space would be a softmax.

This paper describes a method that uses the idea of centralized training, and I believe it could be used in your implementation.

With regard to your last question, I don't know if I understood it correctly, but if you have a system that must perform 3 actions, you could assign each action to a specific agent (assuming we have three different action spaces). Then you would have a cooperation game with 3 agents, where all of them have a common reward function. In theory, these 3 agents represent an individual agent that interacts with the environment.

",38402,,,,,5/18/2021 17:04,,,,0,,,,CC BY-SA 4.0 27856,2,,27849,5/18/2021 17:28,,8,,"

I don't think that (at least from a practical standpoint), there is much difference between continuous action space and discrete action space with >2k discrete actions. Quoting the "Continuous control with Deep RL" paper - which I'd recommend as a starting point for your investigation:

An obvious approach to adapting deep reinforcement learning methods such as DQN to continuous domains is to simply discretize the action space. ... Such large action spaces are difficult to explore efficiently, and thus successfully training DQN-like networks in this context is likely intractable. Additionally, naive discretization of action spaces needlessly throws away information about the structure of the action domain, which may be essential for solving many problems.

The last sentence in the quote above is the most important point for dealing with your problem. The fundamental issue is inability to efficiently explore such a large action space - so the idea is to use its structure. I'm sure that your >2k discrete action set has a certain structure on it. Like some actions might be "closer" to others. If so, then you (1) can infer some information about "neighbor" actions even if you never took them (2) do some exploration by adding noise to your policy-preferred action.

The Actor-Critic class of algorithms matches perfectly the steps (1) and (2) above. The Critic Q-value network learns about your state-action space and the Actor policy network returns actions that you could smear.
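For a discrete action set, the actor would typically be a softmax over all actions; a minimal numpy sketch of sampling an action and the log-probability needed for the policy-gradient update (the logits here are assumed to come from your policy network; names are illustrative):

import numpy as np

def sample_action(logits, rng):
    # logits: raw scores from the policy network, one per discrete action
    logits = logits - logits.max()                 # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over the (large) action set
    action = rng.choice(len(probs), p=probs)
    log_prob = np.log(probs[action])               # used in the policy-gradient update
    return action, log_prob

# example with ~2k discrete actions
rng = np.random.default_rng(0)
action, log_prob = sample_action(rng.normal(size=2000), rng)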

Are policy gradient methods good for large discrete action spaces?

The Actor-Critic class of RL algorithms is a subclass of the Policy Gradient algorithms. So, the answer is yes - it looks like going that way is your best shot at making progress.

Does this effectively mean DQN will not work?

I don't think that there is a strict "it will not work" statement. The problem is that it just would be extremely difficult to make the DQN training stable "by hand". In principle you might be able to construct a neural architecture that captures the state-action space structure. And then figure out how to perform efficient exploration. And then ensure that nothing explodes anywhere. But that's exactly what DDPG and then TRPO/PPO approaches would do for you.

",20538,,,,,5/18/2021 17:28,,,,1,,,,CC BY-SA 4.0 27857,1,,,5/18/2021 19:34,,2,415,"

I am currently working on a small project where I am trying to automate some stuff at home. I am building a model capable of identifying my face with OpenCV. This will be a live feed.

I am making the project's cost estimates and have a really low budget. Therefore, I am trying to identify the minimum quality of video feed I can pass to my algorithm to identify a face. For now, I am just trying to identify mine.

I understand facial recognition works primarily on the unique pattern that could be found in the face. What is the minimum video resolution I need to identify anyone with facial recognition?

",7137,,2444,,4/13/2022 12:37,4/13/2022 12:37,What is the minimum video resolution I need to identify anyone with facial recognition?,,2,4,,,,CC BY-SA 4.0 27861,1,,,5/19/2021 10:09,,2,29,"

There are many resources available for text-to-audio (or vice versa) synthesis, for example Google's 'Wavenet'.

These tools do not allow the finer degree of control that may be required regarding the degree of inflection / tonality retained in the output, for example, to change the vocal characteristics (implied ethnicity / sex, for example) of a dubbed voice-over from one voice to another whilst retaining tonality (shouting vs. calm).

Text-to-speech 'and back' seems a suboptimal approach due to data loss (e.g. tonality) before reconstruction.

Re-encoding audio-to-audio would/may allow the alteration of characteristics in a manner not available via standard audio processing methods whilst retaining more of the desired tonality.

Is AI able to distinguish between characteristics and tonality as implied above, and is such a speech-to-speech re-encoding tool available, ideally open source?

",47218,,2444,,5/19/2021 13:48,5/19/2021 13:48,Model for direct audio-to-audio speech re-encoding,,0,1,,,,CC BY-SA 4.0 27864,1,,,5/19/2021 13:48,,1,427,"

I would like some help with understanding why there is no explicit flow of information from the reward gradient to the parameters of the policy in policy gradient methods.

What I mean is the following, there are 2 scenarios:

1st - deterministic framework with given initial state $s_0$, actions $a_t = \mu_\theta(s_t)$, rewards $r(s_t, a_t, s_{t+1})$, and transitions $s_{t+1}=f(s_t, a_t)$. Assume all of these things are differentiable (maybe all is continuous). By drawing the computational graph I found I can compute the gradient of $J(\mu_\theta) = \sum_{t} r(s_t, a_t, s_{t+1})$ with respect to $\theta$. I could optimize for cumulative reward by doing gradient ascent on this objective.

2nd - framework in https://spinningup.openai.com/en/latest/spinningup/rl_intro3.html#id16 , which seems general, reading $\nabla_\theta J(\pi_\theta) = \nabla_\theta E_{\tau \sim \pi_\theta}[R(\tau)] = E_{\tau \sim \pi_\theta}[R(\tau)\sum_{t=0}^T \nabla_\theta \log{\pi_\theta (a_t | s_t)}]$.

What I don't understand (and I certainly feel confused about in an ignorant way) is why the derivation assumes that $\nabla_\theta R(\tau) = 0$ when this scenario (as it is stochastic) should include the 1st as a particular case, which does have a derivative!

It makes sense that $R(\tau)$ is only affected by $\theta$ through the change in probability of the trajectories $\tau$, but it still feels strange that $\tau = (s_0, a_0, \dots)$ is indeed one sample of $\tau = (s_0\sim \rho(s_0), a_0 \sim \pi_\theta(s_0), \dots)$ which depends on $\theta$. The deterministic reduction is obvious, but one could also think about reparametrization tricks in order to show the same point.

In other words, if the reward function were differentiable, i.e. fully differentiable known environment, how could I use this information?

",47226,,,,,7/17/2022 7:29,What happens with policy gradient methods if rewards are differentiable?,,2,6,,,,CC BY-SA 4.0 27866,2,,23412,5/19/2021 15:01,,0,,"

Giving this small rectangle to a model will probably not provide it with enough information to identify what is inside the rectangle. An approach that could work would be to do image segmentation, obtaining the identity of everything in the image, and then extracting the labels of the pixels inside the rectangle.

",47183,,,,,5/19/2021 15:01,,,,1,,,,CC BY-SA 4.0 27867,1,32093,,5/19/2021 15:30,,2,141,"

I am looking for the standard notation to define element-wise / Hadamard-style functions, if there is one.

That is to say, if the operator I am looking for were represented by a hexagon ⬡, I could use it as such:

$$A(x) = \underset{i}{\Large{⬡} } f(x_i)$$

$$A : \mathbb{R}^n \rightarrow \mathbb{R}^n$$ $$f : \mathbb{R} \rightarrow \mathbb{R}$$

It is very convenient to define such functions explicitly because I want to manipulate them: $B \circ A$ . It seems to me that the following notation is correct: $A_i(x) = f(x_i)$ but I worry it is nonstandard and confusing.

My functions are non-linear so I cannot simply apply them directly to the array as a vector.

As stated in an answer, this is unnecessary when a function is strictly scalar because it is implied to apply element-wise. There are still some situations I would hope to have it:

$$\underset{ij}{\Large{⬡} } e^{M_{ij}} \ne e^M$$

The answers suggest to me the best option would be something like these:

$$A(M) = \text{for each } i,j: e^{M_{ij}}$$ $$A(M) = \text{element-wise}: e^{M_{ij}}$$

The question is now closed in the negative, but I would welcome a new answer. Would be nice to find something like $\forall$.

Related:

",45018,,45018,,10/23/2021 9:43,10/23/2021 9:43,What is the correct notation for an operation that applies to each element of an array independently?,,2,0,,,,CC BY-SA 4.0 27868,2,,27864,5/19/2021 16:39,,2,,"

That's exactly the point of the Policy Gradient Theorem. Let's go through the proof of this theorem - it relies on our ability to "loop" the reward expression $J(\theta,s)$, expressing it through $J(\theta,s')$ in the next state:

$$J(\theta,s) = \mathbb{E}_{\tau\sim\pi_\theta}\left[ R(\tau) | s_0 = s\right]$$ $$\begin{align} J(\theta,s) & = \sum_a\pi_\theta(a|s)\sum_{s',r}P(s',r|s,a)(r + J(\theta, s'))\\ & = \sum_aJ(\theta,a,s) \end{align} $$

Here, I've introduced the notation $J(\theta,a,s)$ to keep further derivation sane. Taking the gradient:

$$\begin{align} \nabla_\theta J(\theta,s) = \sum_{a}\left(\vphantom{\sum_a}\right.&\nabla_\theta \pi_\theta(a|s) \left(\sum_{s',r}P(s',r|s,a)(r + J(s',\theta))\right) + \\ &+\pi_\theta(a|s) \nabla_\theta \sum_{s',r}P(s',r|s,a)(r + J(s',\theta)\left.\vphantom{\sum_a}\right)\end{align} $$

With the first term in the sum we do the log trick: $$ \frac{\nabla_\theta \pi_\theta(a|s)}{\pi_\theta(a|s)} \left(\pi_\theta(a|s)\sum_{s',r}P(s',r|s,a)(r + J(s',\theta))\right) = J(\theta,a,s)\nabla_\theta\log\pi_\theta(a|s) $$ And the second term simplifies to:

$$\pi_\theta(a|s) \nabla_\theta \sum_{s',r}P(s',r|s,a)(r + J(s',\theta)) = \pi_\theta(a|s) \sum_{s'}P(s'|s,a) \nabla_\theta J(s',\theta) $$ The last equality is exactly where we loose the rewards $r$ - the gradient of the constant $r$ is zero, and we can sum over $r$ because of the transition probability normalization $\sum_{r}P(s',r|s,a) = P(s'|s,a)$.

So we've finally get the expression for the gradient:

$$\nabla_\theta J(\theta,s) = \sum_{a}\left(J(\theta,a,s)\nabla_\theta\log\pi_\theta(a|s) + \pi_\theta(a|s) \sum_{s'}P(s'|s,a) \nabla_\theta J(s',\theta) \right)$$

The Policy Gradient Theorem proof goes on through a couple more steps: unroll the last expression, rewrite it through the distribution over states, then convert back from summation to the expectation over trajectories -- going through all this in detail here would take too long. But we've already passed the crucial point for your question - we've gotten rid of the gradient of the rewards.

Edit: In response to your comment, let me just stress again that this expression is a statement of the Policy Gradient Theorem.

$$\nabla_\theta J(\theta) = \mathbb{E}_{\tau\sim\pi}\left[R(\tau)\sum\nabla_\theta\log \pi_\theta(a_t|s_t)\right]$$

The proof of Policy Gradient Theorem is pretty involved and certainly is not as simple as saying that $\nabla_\theta R(\tau) = 0$. Some authors abuse notation or cut corners when going through the proof of the theorem - for a reasonably strict exposition of the PG theorem I recommend Sutton and Barto, Chapter 13.

Another potential point of confusion is the notation $r(s_t,a_t,s_{t+1})$ that you've been using. In the most general MDP formulation rewards are random variables $r$. The probability of getting a reward $r$ and ending up in state $s_{t+1}$ is encoded in the transition probability $P(s_{t+1},r|s_t,a_t)$. (This covers the case of deterministic rewards by having a deterministic distribution over $r$ ). The derivation above uses this, most general, formulation of the MDP.

",20538,,20538,,5/21/2021 14:24,5/21/2021 14:24,,,,5,,,,CC BY-SA 4.0 27869,1,27905,,5/19/2021 16:47,,1,294,"

I am working with some time-series hydrology data. Our goal is to forecast the time series forward, meaning predicting the data 1 month, 3 months, 6 months into the future. The data itself (image below) is characterized by mostly 0 or very small rates of flow, except for brief periods that are characterized by high flow. So I get this crazy spiky pattern where the median is around 0 or 1-2 meters^3/min, but at the same time there are periods of 5000 meters^3/minute, etc. I am not sure of the exact scale dimensions, but the picture below tells the tale.

So I was trying to figure out a good way to scale this type of spiky data. I have been using a MinMaxScaler just to start with, to rescale the values between (-1, 1). But that approach is not going to work well, especially because at the top ends of the range, the difference between 1000 m^3/min and 5000 m^3/min will be like 0.001 difference.
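For reference, a minimal sketch of what I've been doing so far (assuming scikit-learn's MinMaxScaler; the flow values below are made-up placeholders):

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# Toy stand-in for the flow series: mostly near zero with occasional huge spikes.
flow = np.array([0.0, 0.5, 1.2, 2.0, 1000.0, 5000.0]).reshape(-1, 1)

# Plain min-max scaling to (-1, 1), which is what I've tried so far.
scaler = MinMaxScaler(feature_range=(-1, 1))
scaled = scaler.fit_transform(flow)
print(dict(zip(flow.ravel(), scaled.ravel())))
# Almost all of the "typical" small values end up squashed near -1,
# which is exactly the behaviour I'm worried about.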

Does anyone have a good suggestion of how to rescale data like this for time-series analysis in an LSTM or RNN network?

",15765,,15765,,5/21/2021 13:47,5/21/2021 14:57,Rescaling time-series data with very spiky pattern for training data in LSTM network,,1,4,,,,CC BY-SA 4.0 27870,2,,27867,5/19/2021 18:23,,1,,"

The mathematical notation for complex tensorial expressions always tries to balance complexity and precision. More precise notation - the one that explicitly spells out all the indices - becomes extremely convoluted very quickly. My favorite example illustrating this is from physics -- the Standard Model Lagrangian is written compactly on T-shirts and coffee mugs as:

$$ \mathcal{L} = - \frac{1}{4} F_{\mu \nu} F^{\mu \nu} + (i \bar{\psi} \hat{D} \psi + \bar{\psi}_i y_{ij} \psi_j \phi + h.c.) + |D_\mu \phi|^2 - V(\phi) $$

But if you try to expand all the indices in all the objects above - then it barely fits on a page.

On the other hand, more succinct notation always leads to ambiguities in interpretations. Your example $f(x_i)$ can be read as: $$f(x_0, x_1, \dots, x_N)\quad \text{or as}\quad f(x_0),f(x_1), \dots, f(x_N)$$
One way to implicitly resolve this ambiguity is to show that the index $i$ "escapes" the argument brackets:

$$a_i = f(x_i)\quad \text{or e.g.} \quad \sum_if(x_i)$$

This can only be interpreted as $f(x_i)$ being element-wise. Also, at least in my opinion, using $x_i$ with index and $x$ without index in the same expression is extremely confusing.

And, of course, the best way to resolve these ambiguities is to state them explicitly. For example, I've seen authors using square brackets $f[x_i]$ or capital letters $F(x_i)$ for the vector-argument functions.

",20538,,,,,5/19/2021 18:23,,,,2,,,,CC BY-SA 4.0 27871,1,,,5/19/2021 18:27,,0,29,"

I'm building a model (neural net) that would predict a quality score for images.

Ground truth is given by a 4-level discrete variable (0%, 33%, 67%, 100%), and I would like to build a model that would give something that looks like a continuous result over the 0-100% scale.

What should I pay attention to?

What I'm afraid of is that the model might stick to ground-truth levels and prefer them over any value in between.

",47232,,,,,5/19/2021 18:27,Regression for a discrete variable,,0,3,,,,CC BY-SA 4.0 27872,2,,27857,5/19/2021 18:28,,2,,"

On page 2 of Axis' web page Identification and Recognition there is an estimate of the minimum number of pixels needed for identification, recognition and detection.

",5763,,,,,5/19/2021 18:28,,,,0,,,,CC BY-SA 4.0 27875,2,,23172,5/20/2021 7:52,,1,,"

You can rescale the images to the same size based on the classification model you are using (preferably 300x300). For preprocessing you can also try some morphological operations and brightness-removal techniques from OpenCV-Python. One more factor that could affect your accuracy is the number of images: if you have too few images, you can augment your dataset to come up with enough samples for training.
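A minimal sketch of this kind of preprocessing (assuming OpenCV-Python; sample.jpg, the 300x300 target size and the 3x3 kernel are just illustrative placeholders):

import cv2
import numpy as np

img = cv2.imread("sample.jpg")  # replace with one of your images

# Rescale to the input size expected by the classification model (e.g. 300x300).
resized = cv2.resize(img, (300, 300), interpolation=cv2.INTER_AREA)

# Example morphological operation: opening to suppress small bright noise.
kernel = np.ones((3, 3), np.uint8)
opened = cv2.morphologyEx(resized, cv2.MORPH_OPEN, kernel)

# Simple augmentations to increase the number of training samples.
flipped = cv2.flip(opened, 1)                                # horizontal flip
brighter = cv2.convertScaleAbs(opened, alpha=1.0, beta=30)   # brightness shift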

",40266,,,,,5/20/2021 7:52,,,,0,,,,CC BY-SA 4.0 27876,2,,27844,5/20/2021 8:01,,1,,"

In my opinion, the second option is more general. You can refer to some famous object detection datasets such as COCO or Pascal VOC: they usually accept intersecting annotations, as in the image below (image from this link, where they process the annotations of the COCO dataset).

I think the reason is that it is easier for the model to separate the intersecting patterns inside the bounding box than to interpolate the missing patterns of the object in order to understand it.

",41287,,,,,5/20/2021 8:01,,,,0,,,,CC BY-SA 4.0 27877,2,,23172,5/20/2021 8:08,,1,,"

Why not simply perform some bilinear or bicubic interpolation?

The TensorFlow and PyTorch deep learning frameworks have dedicated functions to do this - https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.Resize and https://www.tensorflow.org/api_docs/python/tf/image/resize .
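For example, a minimal sketch with both frameworks (assuming an image file sample.jpg; the 224x224 target size is just an example):

# PyTorch / torchvision
from PIL import Image
from torchvision import transforms

img = Image.open("sample.jpg")
resize = transforms.Resize((224, 224))   # bilinear interpolation by default
img_resized = resize(img)

# TensorFlow
import tensorflow as tf

image = tf.io.decode_jpeg(tf.io.read_file("sample.jpg"))
image_resized = tf.image.resize(image, [224, 224], method="bicubic")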

",38846,,,,,5/20/2021 8:08,,,,0,,,,CC BY-SA 4.0 27878,2,,27854,5/20/2021 8:50,,0,,"

I would say that the meaning of the VC dimension is the possibility of implementing any labeling on some particular set of the given number of points, not the ability to express any function on any $n$ points.

Yes, you are right, that this definition, unfortunately, is not very useful in practice.

Say, the family of functions $\text{sign}(\sin(ax))$ has infinite $\text{VC}$ dimension. There exists an infinite sequence $x_n$ such that, by tuning the single parameter $a$, one can implement any labeling $0\, 1\, 0 \ldots 1$ on these points. However, this doesn't make this class of functions an outstanding ML algorithm.
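A quick numerical sanity check of this claim (a sketch using what I believe is the classic construction $x_i = 10^{-i}$, $a = \pi\left(1 + \sum_i (1-y_i)10^i\right)$ with labels $y_i\in\{0,1\}$):

import numpy as np
from itertools import product

n = 5
x = np.array([10.0 ** (-i) for i in range(1, n + 1)])   # points x_i = 10^{-i}

def labels_from_a(a):
    # sign(sin(a*x)) mapped to {0, 1}
    return (np.sin(a * x) > 0).astype(int)

# For every possible labeling of the n points, one choice of the single
# parameter a realizes it exactly.
for y in product([0, 1], repeat=n):
    a = np.pi * (1 + sum((1 - yi) * 10 ** i for i, yi in zip(range(1, n + 1), y)))
    assert tuple(labels_from_a(a).tolist()) == y
print("all", 2 ** n, "labelings realized")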

",38846,,38846,,5/20/2021 9:24,5/20/2021 9:24,,,,3,,,,CC BY-SA 4.0 27879,2,,27857,5/20/2021 8:50,,2,,"

This is a pretty standard minimum "quality" (more precisely, the resolution in pixels between the eyes) needed for a facial recognition system:

Ensure that the image contains a frontal view of the face, good lighting, and at least 80 pixels between the eyes.

the bare minimum to identify a human face would be 25 to 75 pixels just between the eyes

In the end it comes down to a detailed study of the camera location, distance, light, etc.

",46540,,46540,,5/20/2021 12:38,5/20/2021 12:38,,,,0,,,,CC BY-SA 4.0 27881,1,27896,,5/20/2021 10:09,,1,34,"

I am currently reading a paper called Dynamic Edge-Conditioned Filters in Convolutional Neural Networks on Graphs (2017, CVPR), and I cannot understand the following sentence:

We identify that the current formulations of graph convolution do not exploit edge labels, which results in an overly homogeneous view of local graph neighborhoods, with an effect similar to enforcing rotational invariance of filters in regular convolutions on images.

What does this sentence mean?

",47250,,2444,,5/20/2021 10:52,5/20/2021 23:10,"Why the non-exploitation of edge labels in current graph convolutions ""results in an overly homogeneous view of local graph neighborhoods""?",,1,0,,,,CC BY-SA 4.0 27886,2,,15621,5/20/2021 17:36,,1,,"

What I understand with a layer of LSTM composed of 4 cells is depicted in the following picture:

This would explain the fact that the hidden state of the whole layer has exactly the same dimension as the hidden states of the individual cells.

However, what I still don't fully understand is the 'return sequence' between LSTM layers, which changes the shape from [hidden_states] to [x_dimension, hidden_states]. The explanation is that usually we only care about the state of the last cell, but when connecting multiple layers, all the states of the cells are passed into the next layer. Nevertheless, I still cannot make sense of it graphically.

e.g.

model = keras.models.Sequential([
    keras.layers.LSTM(20, return_sequences=True, input_shape=[None, 1]),
    keras.layers.LSTM(20, return_sequences=True),
    keras.layers.TimeDistributed(keras.layers.Dense(10))
])

",47264,,,,,5/20/2021 17:36,,,,0,,,,CC BY-SA 4.0 27887,1,27889,,5/20/2021 17:37,,1,27,"

When I think about classification I think of the cancer/not cancer example. You have a bunch of attributes and you know whether the person had cancer during the relevant time period and you determine which attributes predict that result.

I work in a highly-regulated industry that serves the public. There are certain people we are not allowed to do business with, let's say because they will use our service for illegal purposes. Sometimes we tell the (potential) customer "yes" and sometimes "no".

When we say "yes" and the potential customer intends to use our service for illegal activity, they certainly won't inform us of the mistake.

Likewise, when we say "no" the potential customer will sometimes go away, will sometimes complain, but that potential customer will not self-identify and say "yes, you are correct, I intended to use your service for illegal activity".

Occasionally we will receive a report from a 3rd party that will label a customer, but these reports are a tiny fraction of the number of customers. Unlike the cancer classification, we almost always don't know the actual label; we only know what we guessed.

What techniques should we consider to measure our accuracy?

",47261,,47261,,5/21/2021 23:48,5/21/2021 23:48,How to classify when the label is seldom known,,1,0,,,,CC BY-SA 4.0 27888,2,,27854,5/20/2021 17:58,,1,,"

The measure that you are talking about actually has a name. It is called the "Popper dimension" -- it was introduced by Karl Popper in his "Logic of scientific discovery".

Popper's idea of falsifiability was, as Vladimir Vapnik himself admits, the inspiration behind their work on the VC dimension. The VC dimension of the hypothesis set $\mathcal{H}$ measures the "complexity" of the set by looking at how easy it would be to falsify it. Hypothesis sets with higher VC dimension should be harder to falsify, with the limiting case of infinite VC dimension being unfalsifiable for almost any data.

VC dimension and Popper dimension are different in exactly the way you are describing. This quote from the philosophy paper that Vapnik co-authored states it rather succinctly:

The VC-dimension is the largest number of points one can shatter, the Popper dimension is one less than the smallest number of points one can not shatter.

This paper goes on to try to reconcile the two definitions, but I didn't find their exposition satisfactory (or even meaningful). I've found a much harsher take on this in the "Learning from data" book (second edition, Section 4.7):

Now we can contrast the VC dimension and Popper’s dimension, and conclude that Popper’s definition is not meaningful, as it does not lead to any useful conditions for generalization. In fact, for linear estimators the Popper’s dimension is at most 2, regardless of the problem dimensionality, as a set of hyperplanes cannot shatter three collinear points.

Which looks fair to me.

",20538,,,,,5/20/2021 17:58,,,,11,,,,CC BY-SA 4.0 27889,2,,27887,5/20/2021 18:00,,2,,"

Domain knowledge and cluster interpretation

When you have no (or very limited) gold standard labels for your dataset, traditional performance metrics that require knowing how often you're correct (like accuracy, sensitivity, or specificity) simply won't work. If this is the case, you'll need to examine your outliers to see if they make sense in the context of your particular field. You should look at what features are responsible for a sample being classified as an anomaly, for example, you might see that agents classified as spam send a huge number of emails per hour, or that a suspected money laundering bank account interacts frequently with many offshore accounts. In most cases, you will never know with absolute certainty that your classification is correct, but if you can justify the classification in terms of what is known about your particular domain, it can lend credence to a model.

Another approach is to perform unsupervised clustering on your dataset. Hopefully, you should see that your method identifies anomalies which are somehow different from other datapoints, appearing as outliers or a distinct group of samples unlike the others. You will need to do more investigation to be confident that the clusters represent normal people and fraudsters, but clustering can help you identify that there are indeed some samples that appear markedly different from the others. If you can show that certain samples show similar hallmarks of illegal activity, it also helps to gain confidence in what your model is doing.
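As a rough sketch of the second approach (assuming scikit-learn and some numeric feature matrix X; the cluster count and contamination rate are purely illustrative):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))   # placeholder for your customer feature matrix

# Unsupervised clustering: look for a small cluster that behaves differently.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))

# Anomaly detection: flag the samples that look least like the rest.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = iso.predict(X)           # -1 = anomaly, 1 = normal
print("flagged:", int((flags == -1).sum()))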

",2841,,,,,5/20/2021 18:00,,,,0,,,,CC BY-SA 4.0 27890,1,,,5/20/2021 18:12,,1,64,"

I'm preparing a dataset for multiclass semantic segmentation using a U-Net-like architecture. To be precise, I've got it ready, but a question came to my mind. How do the pixel values of a segmentation map influence the training? Also, is it better to have a greyscale segmap or an RGB one?

I have the dataset labeled and augmented, the only thing I am thinking now is if I should alter the segmaps.

I am planning to use Keras; is it smart enough to take in segmaps in both forms? I haven't found an answer anywhere (I double-checked that it isn't already on this forum), and I hope it's not something super trivial. It's my first time trying a segmentation task, so it's all a bit new to me :)

Edit: right now the segmaps looks like:

  • background [0 0 0]

  • object_type1 [16 180 75] green

  • object_type2 [225 225 25] yellow

  • object_type3 [230 25 75] red

",47266,,,,,5/20/2021 18:12,Pixel values of segmap in multi-class semantic segmentation,,0,0,,,,CC BY-SA 4.0 27891,1,,,5/20/2021 20:49,,2,191,"

I have read about the universal approximation theorem. So, why do we need more than 1 layer? Is it somehow computationally efficient to add layers instead of more neurons in the hidden layer?

",47268,,2444,,5/20/2021 23:44,5/22/2021 10:35,Do we ever need more then 1 hidden layer in a binary classification problem with ANNs? If yes why?,,1,2,,,,CC BY-SA 4.0 27893,2,,27891,5/20/2021 22:25,,2,,"

This is akin to asking "Why do we need more than one instance of sine to represent any repeating function" or "why can't we represent any polynomial with an equivalent polynomial of just the first degree?" There are many, many problems... I'd even want to say most... that will require more than one layer to solve, because the higher-dimensional relationships cannot be well represented by just one layer. This is not to say that the theorem is wrong, but consider the applied aspects. We can approximate any continuous function, but that might require a single hidden layer that is infinitely wide, whereas that same function might be approximated by a deep network having only a few dozen neurons.

However, this is not to say that many networks could not be replaced by simpler networks of fewer layers/neurons that perform at least as well, or perhaps even better. There is active research into how to generalize this.

Ultimately, non-trivial problems often require an empirical approach in this space currently because there is no general solution to "learning."

",30426,,30426,,5/22/2021 10:35,5/22/2021 10:35,,,,8,,,,CC BY-SA 4.0 27895,2,,6162,5/20/2021 22:54,,2,,"

Just so that this could be useful for people who refer to this post later on: please refer to Sutton's reinforcement learning book (2nd edition), example 11.2. It provides an example of why the full gradient wouldn't work.

",47269,,,,,5/20/2021 22:54,,,,0,,,,CC BY-SA 4.0 27896,2,,27881,5/20/2021 23:10,,0,,"

Consider a two-dimensional convolution layer with 3x3 kernels. The 2D inputs of this layer can be seen as a particular graph, with each pixel being a graph node that is connected to 8 of its neighbors:

The 3x3 kernels of the convolutional layer not only process the information about the neighborhood relation between pixels, but also about their relative orientation. For example, the [0,0] element of the kernel might represent the weight of the node to the NW. And the [1,2] element of the kernel represents the weight of the node across the S edge:

Now, if we make a convolution that "doesn't exploit edge labels", then we'll have to forget the labels on the picture above, making us lose the directional information:

All we can say now is that the "red" node has those neighbors, but we don't really know how they are oriented relative to it. Since now the sub-graph does not provide any directional information, the learned convolution kernels will be direction-agnostic - in other words, they will be rotation-invariant.

",20538,,,,,5/20/2021 23:10,,,,3,,,,CC BY-SA 4.0 27897,1,,,5/21/2021 4:05,,1,16,"

If we estimate the gradient of $f(x)$ using the likelihood ratio/score function, i.e. $$\nabla f = f^*\dfrac{\partial \log p(x)}{\partial \theta}$$ is there any agreed upon terminology to call "$f^*$"? Specifically I'm thinking of the case where you may use some sort of baseline/control variate or a critic, so $f^*$ is not $f$.

I've seen $f^*$ called the learning signal or the cost. In reinforcement learning, you would call $f^*$ the advantage, but I think that terminology is only specific for RL. What is a general way to call $f^*$ that is not specific to RL?

",47080,,,,,5/21/2021 4:05,Terminology for the weight of likelihood ratio/score function?,,0,0,,,,CC BY-SA 4.0 27898,2,,4832,5/21/2021 8:14,,0,,"

It sounds to me like something that could be expressed as a planning problem. You have a start state, an end state, and a set of actions. You need to find the correct action sequence to get from the start to the goal.

You could probably express this in PDDL and use a planner to find the right steps.

",2193,,,,,5/21/2021 8:14,,,,0,,,,CC BY-SA 4.0 27899,2,,17463,5/21/2021 8:23,,0,,"

I am not absolutely sure, but I guess it is due to the domain gap. As far as I have seen in my project where YOLOv3 was trained on synthetic images, it performed better when the model was trained and tested on the same domain (synthetic), while the performance drops when we introduce real images for testing. So when you include real images along with synthetic images, you might have to use some domain adaptation methods to improve the performance.

",47276,,,,,5/21/2021 8:23,,,,0,,,,CC BY-SA 4.0 27901,2,,20052,5/21/2021 12:59,,0,,"

LSTMs solve the problem using an additive gradient structure that includes direct access to the forget gate's activations, enabling the network to encourage desired behaviour in the error gradient via gate updates at every time step of the learning process.
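For reference, a sketch of the equations behind this (my own summary of the standard LSTM cell-state update, not a quote from a particular paper):

$$c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \qquad\Rightarrow\qquad \frac{\partial c_t}{\partial c_{t-1}} \approx \operatorname{diag}(f_t)$$

Because the cell state is updated additively and gated by $f_t$, the gradient flowing back through the cell state is scaled by the learned forget-gate activations rather than being repeatedly multiplied by the same weight matrix, which is what mitigates the vanishing gradient problem.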

",47278,,40434,,5/25/2021 10:00,5/25/2021 10:00,,,,0,,,,CC BY-SA 4.0 27902,1,27903,,5/21/2021 13:22,,0,123,"

I am a bit new to Reinforcement learning. So, I am extremely sorry if I am asking something obvious. I have written a small piece of code to find the optimal policy for a 5x5 grid problem.

  • Scenario 1. The agent is only given two choices (Up, Right). I believe, I am getting an optimal policy.
  • Scenario 2. The agent is given four choices (Up, Right, Down, Left). I am getting the wrong answer.

I have represented actions with numbers:

0 - Right
1 - Up
2 - Down
3 - Left

When the action Up is chosen, with 0.9 probability it will move up or 0.1 probability move right and vice-versa. When the action Down is chosen, with 0.9 probability it will move down or 0.1 probability move left and vice-versa.

I did not use any convergence criterion. Instead, I let it run for sufficiently many iterations. I have confirmed that my state values and policy are indeed converging, but to the wrong result. I am attaching the code below:

def take_right(state):
    if (state/n < n-1): state = state + n
    return state

def take_up(state):
    if (state%n!=n-1): state = state + 1
    return state

def take_left(state):
    if (state/n > 0): state = state - n
    return state

def take_down(state):
    if (state%n > 0): state = state - 1
    return state

Scenario 1 result:

Scenario 2 result:

Green has a reward of 100 and Blue has a penalty of 100. The rest of the states have a penalty of 1. The discount factor is chosen as 0.5.

Edit:

This was a really silly question. The problem with my code was a Python issue rather than an RL issue. Check the comments to get the clue.

",47280,,20538,,5/21/2021 16:21,5/21/2021 16:21,Converging to a wrong optimal policy if the agent is given more choices,,1,1,,,,CC BY-SA 4.0 27903,2,,27902,5/21/2021 13:34,,2,,"

Reinforcement Learning is really fun because the agent will find any bug in your implementation and will exploit it.

>>> take_left(0)
0
>>> take_left(1)
-4

The agent figured out your bug with negative values and exploits negative indexing to get to the target faster.

",20538,,,,,5/21/2021 13:34,,,,1,,,,CC BY-SA 4.0 27905,2,,27869,5/21/2021 14:57,,2,,"

First, if your data has a minimum of 0 and maximum of 5000, 1000 will get rescaled to .2 and 5000 will get rescaled to 1. So it's not a .001 difference as you suggest.

If you just used a regular loss function (e.g. mean squared error), I'm not sure you'd be able to achieve good predictions. In the literature this type of data might be called "sharp". There are some special loss functions such as DILATE (link) or soft-DTW (link) which are specially designed for time series like the one you show.

",47080,,,,,5/21/2021 14:57,,,,1,,,,CC BY-SA 4.0 27908,1,,,5/21/2021 18:54,,1,20,"

I am trying to reproduce results reported for IRGAN (information retrieval GAN) on the MovieLens 1M dataset. The results I want to reproduce and their sources are listed in the table below.

Model Precision@5 NDCG@5 Source
IRGAN 26.30% 26.40% CFGAN
IRGAN 30.98% 31.59% CoFiGAN
IRGAN 31.82% 33.72% BiGAN

While my implementation of IRGAN is able to reproduce the results on the MovieLens 100k dataset, I am having problems discovering the hyperparameters for reproducing the results on the MovieLens 1M dataset; currently my IRGAN implementation is achieving a precision@5 score of 21.7%. Unfortunately, the authors of the aforementioned papers do not share the hyperparameters used for training their version of IRGAN.

Thus, I want to ask if there is a repository with the hyperparameters that were used. Furthermore, I would be most grateful if you could provide me with information on how to contact the authors.

",45392,,,,,5/21/2021 18:54,Hyperparameters for Reproducing the Results of IRGAN on MovieLens 1M,,0,0,,,,CC BY-SA 4.0 27911,1,27915,,5/21/2021 22:32,,6,105,"

Say I've got two Markov Decision Processes (MDPs): $$\mathcal{M_0} = (\mathcal{S}, \mathcal{A}, P, R_0),\quad\text{and}\quad\mathcal{M}_1 = (\mathcal{S}, \mathcal{A}, P, R_1)$$ Both have the same set of states and actions, and the transition probabilities are also the same. The only difference is in the reward functions $R_0$ and $R_1$. Suppose that we've found an optimal deterministic policy $\pi^*_0$ for the problem $\mathcal{M}_0$ and we've checked that this policy is also optimal for $\mathcal{M}_1$ $$\pi_0^*(s) = \arg\max\limits_a Q^*_0(s,a)\qquad Q_1^*(s,\pi_0^*(s)) = \max\limits_a Q^*_1(s,a)$$

Now, given the two MDPs one can build a whole family of MDPs interpolating between them: $$\mathcal{M}_\alpha = (\mathcal{S}, \mathcal{A}, P, \alpha R_0 + (1-\alpha) R_1)$$ Where $\alpha\in[0,1]$ is the interpolation parameter between the two problems - the rewards are linearly changing from $R_0$ to $R_1$ with this parameter. My question is: in general, will $\pi_0^*$ be optimal for all MDPs in the middle of the interpolation interval?

$$Q_\alpha(s,\pi_0^*(s))\stackrel{?}{=}\max\limits_aQ^*_\alpha(s,a),\; \forall\alpha\in[0,1]$$

I feel like this could be generally true due to the linearity of the dependence and the convexity of the optimization problem. But I can neither prove it nor find a counterexample.

",20538,,,,,5/22/2021 10:00,Reward interpolation between MDPs. Will an optimal policy on both ends stay optimal inside the interval?,,1,1,,,,CC BY-SA 4.0 27913,1,27922,,5/22/2021 7:39,,6,1753,"

I have trouble understanding the meaning of partially observable environments. Here's my doubt.

According to what I understand, the state of the environment is what precisely determines the next state and reward for any particular action taken. So, in a partially observable environment, you don't get to see the full environment state.

So, now, consider the game of chess. Here, we are the agent and our opponent is the environment. Even here we don't know what move the opponent is going to take. So, we don't know the next state and reward we are going to get. Also, what we can see can't precisely define what is going to happen next. Then why do we call chess a fully observable game?

I feel I am wrong about the definition of an environment state or the definition of fully observable, partially observable. Kindly correct me.

",47299,,2444,,5/22/2021 11:58,5/23/2021 12:46,What exactly are partially observable environments?,,3,5,,,,CC BY-SA 4.0 27914,1,,,5/22/2021 8:25,,2,33,"

Is there an optimal number of species for NEAT?

Since too low and too high is bad, I am thinking about adjusting the threshold of the distance function at runtime in order to have the number of species always between some bounds. Does this make sense? Is there an optimal range?
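For context, this is roughly the runtime adjustment I have in mind (a sketch; the target range and step size are arbitrary placeholders):

# Run once per generation, after speciation.
def adjust_compatibility_threshold(threshold, num_species,
                                   target_min=5, target_max=15, step=0.1):
    # Too many species -> raise the threshold so genomes group together more.
    if num_species > target_max:
        threshold += step
    # Too few species -> lower the threshold so the population splits up more.
    elif num_species < target_min:
        threshold = max(step, threshold - step)
    return threshold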

",47300,,2444,,5/22/2021 11:54,5/22/2021 11:54,Is there an optimal number of species for NEAT?,,0,0,,,,CC BY-SA 4.0 27915,2,,27911,5/22/2021 9:52,,4,,"

I believe the claim is true. Here is my attempt at a proof.

Let us consider the optimal infinite horizon value function $V_\alpha^*$ of $\mathcal{M}_\alpha$ at an arbitrary state $s \in S$. The value $V_\alpha^*(s)$ is the expected sum of discounted rewards under an optimal policy $\pi_\alpha^*$, i.e., \begin{equation} V_\alpha^*(s) = \mathbb{E}_{\rho_\alpha}\left[\sum\limits_{t=0}^{\infty}\gamma^t\left( \alpha R_0(s_t,\pi_\alpha^*(s_t)) + (1-\alpha)R_1(s_t, \pi_\alpha^*(s_t)) \right)\middle| s_0 = s \right], \end{equation} with the expectation taken with respect to the steady state distribution $\rho_\alpha$ of states under $\pi_\alpha^*$. In the following, I drop the condition $s_0=s$ for conciseness, but you can assume it's in each expectation. Now, break up the sum: \begin{equation} V_\alpha^*(s) = \mathbb{E}_{\rho_\alpha}\left[ \alpha\sum\limits_{t=0}^{\infty}\gamma^t R_0(s_t,\pi_\alpha^*(s_t)) + (1-\alpha)\sum\limits_{t=0}^{\infty}\gamma^t R_1(s_t,\pi_\alpha^*(s_t)) \right]. \end{equation} Then, by linearity of expectation: \begin{equation} V_\alpha^*(s) = \alpha\mathbb{E}_{\rho_\alpha}\left[ \sum\limits_{t=0}^{\infty}\gamma^t R_0(s_t,\pi_\alpha^*(s_t)) \right] + (1-\alpha)\mathbb{E}_{\rho_\alpha}\left[ \sum\limits_{t=0}^{\infty}\gamma^t R_1(s_t,\pi_\alpha^*(s_t)) \right]. \end{equation} Note that the first expectation term is the value of $\pi_\alpha^*$ in $\mathcal{M}_0$, and the second expectation term is the value of $\pi_\alpha^*$ in $\mathcal{M}_1$. We already know that $\pi_0^*(s)$ is optimal in $\mathcal{M}_0$ with reward function $R_0$, and $\pi_1^*(s)$ is likewise optimal in $\mathcal{M}_1$ with $R_1$. Further, as per your assumption, $\pi_0^*(s) = \pi_1^*(s)$. So $\pi_\alpha^*$ can be at most as good as $\pi_0^*$ with reward function $R_0$ (resp., with $R_1$): \begin{equation} V_\alpha^*(s) \leq \alpha\mathbb{E}_{\rho_0}\left[ \sum\limits_{t=0}^{\infty}\gamma^t R_0(s_t,\pi_0^*(s_t)) \right] + (1-\alpha)\mathbb{E}_{\rho_0}\left[ \sum\limits_{t=0}^{\infty}\gamma^t R_1(s_t,\pi_0^*(s_t)) \right]. \end{equation} Note that we now take the expectation under the steady state distribution $\rho_0$ of $\pi_0^*$ instead. Thus, we have shown that $V_\alpha^*(s) \leq \alpha V_0^*(s) + (1-\alpha)V_1^*(s)$. Now it remains to argue that the case with a strict less than relation is not possible. Suppose this were the case, and we would have $V_\alpha^*(s) < \alpha V_0^*(s) + (1-\alpha)V_1^*(s)$. But then $\pi_0^*$ would attain a higher value than $\pi_\alpha^*$ in $\mathcal{M}_\alpha$, which is a contradiction (because we assumed that $\pi_\alpha^*$ is an optimal policy for $\mathcal{M}_\alpha$).

Thus, $V_\alpha^*(s) = \alpha V_0^*(s) + (1-\alpha)V_1^*(s)$ and furthermore, acting according to $\pi_0^*$ is optimal also in $\mathcal{M}_\alpha$.

",45529,,45529,,5/22/2021 10:00,5/22/2021 10:00,,,,3,,,,CC BY-SA 4.0 27916,1,35328,,5/22/2021 11:11,,0,236,"

So I've built an arcface model with this arcface layer implementation: https://github.com/4uiiurz1/keras-arcface/blob/master/metrics.py

I trained for a few epochs and now that I'm comparing the results I'm quite baffled.

According to the paper and the resources I've read, the embedding layer should produce embeddings where similar images are closer and have a higher cosine similarity.

But I have the exact opposite case: I ran the model's embedding layer over the hold-out set, and for 95% of the pairs the mismatches are closer than the matches. Thus I have a reversed 95% accuracy.

My feed and labels are correct

I binned similar images in groups similar to here: https://www.kaggle.com/ragnar123/unsupervised-baseline-arcface but for a different dataset.

Could someone guess why this is happening? Is it possible that some initialization would produce the opposite goal?

",47302,,,,,4/26/2022 6:57,Arcface implementation for image similarity produces opposite embeddings for positive negative image pairs,,1,0,,,,CC BY-SA 4.0 27918,1,,,5/22/2021 15:01,,1,22,"

I have time-series data obtained from a video. The data is composed of bitrate and corresponding label pairs for each timestamp:

The distribution over the first 30 seconds is as follows:

I have built an LSTM model for this dataset to be able to classify the labels based on the bitrate. However, it seems that my model is not able to learn. Validation accuracy starts from approximately 0.3 (makes sense, since I have 2 classes (log2 = 0.3)) and it does not improve.

Do you have any idea about this? Is it normal considering this sample data distribution, or might something be wrong with my model? Thanks!

",41691,,41691,,5/22/2021 15:49,5/22/2021 15:49,"Is my dataset unlearnable, or is my LSTM model not smart enough?",,0,1,,,,CC BY-SA 4.0 27919,1,,,5/22/2021 19:10,,0,38,"

I have a dataset of two different types of images. Say, I have images of a person and all 10 of their fingerprints. I want to learn a relation between them to predict one from the other. How can I do that, and which architecture is suitable for this problem or similar types of problems?

",47314,,,,,10/15/2022 2:05,A neural network to learn the connection between two totally different type of images,,1,1,,,,CC BY-SA 4.0 27920,1,,,5/22/2021 19:45,,0,466,"

I want to calculate the similarity or distance of two faces. I'm using Python.

I have read and done what this tutorial says. However, the result is not good (the similarity of the same faces and the similarity of different faces are very close to each other!).

I have downloaded and used this Facenet model to get face embedding vectors, and then used 3 distance metrics (Euclidean, Manhattan, Cosine) to calculate the distance.

After that, I decided to retrain that Facenet model with my dataset. I read this article. I want to use the triplet loss to retrain that Facenet model.

How can I retrain that Facenet model with the triplet loss function? Or can you please send me some links to read?
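For reference, this is my current understanding of the triplet loss I would need to minimize (a sketch in TensorFlow; the margin and the triplet batching are assumptions on my part):

import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor/positive/negative: batches of face embedding vectors.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))

I assume I would then mine (anchor, positive, negative) triplets from my own dataset, pass them through the pretrained Facenet network, and fine-tune its weights with this loss, but I am not sure about the details (for example, how to do the triplet mining).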

",47315,,2444,,12/16/2021 14:40,1/10/2023 15:07,How to retrain a Facenet model with the triplet loss function?,,1,0,,,,CC BY-SA 4.0 27921,2,,27919,5/22/2021 22:26,,0,,"

I would try a pair of separate deep image embeddings with a contrastive loss. The idea is similar to the Siamese network architecture. In Siamese networks the pairs of images are of the same type, so both input images are fed through copies of the same network. In your case the images are of different kinds, so I would just have separate nets for the person images and the fingerprints.
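A minimal sketch of what I mean (in Keras; the input shapes, embedding size and margin are placeholders, and the model would be trained on pairs labeled 1 for "same person" and 0 for "different person"):

import tensorflow as tf
from tensorflow.keras import layers, Model

def make_encoder(input_shape, name):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inp)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(64)(x)           # embedding size is a placeholder
    return Model(inp, out, name=name)

face_encoder = make_encoder((128, 128, 3), "face_encoder")
finger_encoder = make_encoder((96, 96, 1), "fingerprint_encoder")

face_in = layers.Input(shape=(128, 128, 3))
finger_in = layers.Input(shape=(96, 96, 1))
face_emb = face_encoder(face_in)
finger_emb = finger_encoder(finger_in)

# Euclidean distance between the two embeddings.
dist = layers.Lambda(lambda t: tf.norm(t[0] - t[1], axis=-1))([face_emb, finger_emb])
model = Model([face_in, finger_in], dist)

def contrastive_loss(y_true, d, margin=1.0):
    # y_true = 1 for matching pairs, 0 for non-matching pairs.
    y_true = tf.cast(y_true, d.dtype)
    return tf.reduce_mean(y_true * tf.square(d) +
                          (1.0 - y_true) * tf.square(tf.maximum(margin - d, 0.0)))

model.compile(optimizer="adam", loss=contrastive_loss)

The two encoders are trained jointly so that matching face/fingerprint pairs end up close in the shared embedding space and non-matching pairs end up far apart.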

",20538,,,,,5/22/2021 22:26,,,,0,,,,CC BY-SA 4.0 27922,2,,27913,5/22/2021 23:32,,3,,"

First, note that the current state does not determine the next state. What determines the next state are the dynamics of the environment, which, in the context of reinforcement learning and, in particular, MDPs, are encoded in the probability distribution $p(s', r \mid s, a)$. So, if the agent is in a certain state $s$, it could end up in another state $s'$, but this is not only determined by being just in $s$, but also by $a$ (the action that you take in $s$) and $p$ (the dynamics of the environment).

Now, in their 3rd edition of the AIMA book, Russell and Norvig define fully observable environments as follows.

Fully observable vs. partially observable: If an agent's sensors give it access to the complete state of the environment at each point in time, then we say that the task environment is fully observable. A task environment is effectively fully observable if the sensors detect all aspects that are relevant to the choice of action; relevance, in turn, depends on the performance measure. Fully observable environments are convenient because the agent need not maintain any internal state to keep track of the world. An environment might be partially observable because of noisy and inaccurate sensors or because parts of the state are simply missing from the sensor data—for example, a vacuum agent with only a local dirt sensor cannot tell whether there is dirt in other squares, and an automated taxi cannot see what other drivers are thinking. If the agent has no sensors at all then the environment is unobservable.

This definition is also the common definition used in reinforcement learning. So, to determine whether an environment is fully or partially observable, you need to determine whether or not you have full access to a (Markovian) state, or what constitutes a state in your case. You can consider chess fully observable because you have access to the configuration of the board, so, in theory, you can take the optimal action by considering all possible moves of the opponent (but, of course, in practice, not even AlphaZero does this). See figure 2.6, p. 45, of the AIMA book (3rd edition) for more examples of fully and partially observable environments.

A partially observable MDP (POMDP) is a mathematical framework that can be used to model partially observable environments, where you maintain a probability distribution over the possible current (or next) state that the agent may be in.

",2444,,2444,,5/22/2021 23:38,5/22/2021 23:38,,,,1,,,,CC BY-SA 4.0 27925,2,,27913,5/23/2021 6:04,,0,,"

A partially observable environment is defined from the agent's perspective: the agent only observes the state of the environment partially. At every time step, the agent takes an action based on this partial observation. Based on the agent's action, the state of the environment changes, but the agent may not observe all of the changes.

",47323,,,,,5/23/2021 6:04,,,,0,,,,CC BY-SA 4.0 27927,1,,,5/23/2021 10:48,,1,11,"

The input data is a set of text chunks containing the description of the pathology or the surgical procedure:

For instance:

  1. Tere is a lumbar stenosis L3/4
  2. Patient ist suffering from [...], MRI and X ray showed lumbar stenosis L3/4, segmental instability L3-5, foraminal stenosis L5/S1 both sides
  3. The patient [...] underwent an MRI showing cervical stenosis C4-7 with myelopathy
  4. [...] showed lumbar adult scoliosis L2-S1 with Cobb angle of 42°
  5. Patient fell from the chair [...] showed osteoporotic fracture L3

Now, the ideal classifier would give me:

  1. Segments: L3,L4; typeofpathology: degenerative; subtypepathology: stenosis
  2. Segments: L3,L4,L5,S1; type of pathology: degenerative; subtypepathology: stenosis, instability
  3. Segments C4, C5, C6, C7;type of pathology: degenerative; subtypepathology: myelopathy
  4. Segments L2,L3,L4,L5,S1;type of pathology: deformity; subtypepathology: de novo scoliosis
  5. Segments L3; type of pathology: pathological fracture; subtypepathology: -

I think that this cannot be reasonably achieved by a pre-programmed algorithm, because the amount of text before the description can vary, and the choice of words can vary too. Is there an approach using neural networks or NLP tools that would have a chance of reaching such a classification? How large would the dataset used for training have to be (approximately)?

Maybe it would be reasonable to separate the two problems: detection of the segments AND detection of the pathology. For the segments, one could search for a pattern of C? T? L? or S? with ? being a number, then include all such segment descriptions in the next 20-30 characters, and then use an algorithm to mark the continuous segments from the upper to the lower vertebra (a rough sketch of what I mean is below).
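A rough sketch of that pattern-matching idea (Python; the regular expression and the vertebra ordering are my own guesses and certainly won't cover every way segments are written):

import re

# Ordered list of vertebrae, so ranges like "L2-S1" can be expanded across regions.
VERTEBRAE = ([f"C{i}" for i in range(1, 8)] + [f"T{i}" for i in range(1, 13)]
             + [f"L{i}" for i in range(1, 6)] + [f"S{i}" for i in range(1, 6)])

# Matches "L3", "L3/4", "L3-5", "L5/S1", "C4-7", "L2-S1", ...
SEGMENT_RE = re.compile(r"\b([CTLS])(\d{1,2})(?:\s*[-/]\s*([CTLS])?(\d{1,2}))?", re.IGNORECASE)

def extract_segments(text):
    segments = []
    for m in SEGMENT_RE.finditer(text):
        start = (m.group(1) + m.group(2)).upper()
        if m.group(4) is None:                       # single level, e.g. "L3"
            end = start
        else:                                        # range: reuse first letter if omitted ("L3/4")
            end = ((m.group(3) or m.group(1)) + m.group(4)).upper()
        if start in VERTEBRAE and end in VERTEBRAE:
            i, j = VERTEBRAE.index(start), VERTEBRAE.index(end)
            segments.extend(VERTEBRAE[min(i, j):max(i, j) + 1])
    return sorted(set(segments), key=VERTEBRAE.index)

print(extract_segments("segmental instability L3-5, foraminal stenosis L5/S1 both sides"))
# ['L3', 'L4', 'L5', 'S1']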

Having done this, do neural networks offer any significant advantages over simple keyword-matching classification? Most importantly, which NLP neural network tools would be the ones you would start trying with?

",47328,,,,,5/23/2021 10:48,How to detect the description of spine segments in short text using a neural network?,,0,0,,,,CC BY-SA 4.0 27928,1,27930,,5/23/2021 12:18,,5,938,"

Do parallel environments improve the agent's ability to learn or does it not really make a difference? Specifically, I am using PPO, but I think this applies across the board to other algorithms too.

",45240,,2444,,5/23/2021 13:17,5/25/2021 17:04,What is the effect of parallel environments in reinforcement learning?,,1,0,,,,CC BY-SA 4.0 27929,2,,27913,5/23/2021 12:28,,4,,"

You are correct in the question that, in RL terms, a game of chess where the agent is one player and the other player has an unknown state is a partially observable environment. Chess played like this is not a fully observable environment.

I did not use the term "fully observable game" or "fully observable system" above, because that is not a reinforcement learning term. You may also read "game of perfect information", which is similar - it means there are no important hidden values in the state of the game which may impact optimal play. This is a different concern from understanding the state of your opponent.

Here is a counter-example showing that games of perfect information are not fully observable systems when you have an opponent with an unknown strategy:

  • Optimal play in tic tac toe leads to a forced draw.

  • Let's define a reward signal from the agent's perspective of +1 for a win, 0 for a draw, and -1 for a loss.

  • If the agent's opponent always plays optimally, then a RL agent will learn to counter that optimal play and also play optimally. All action choices will have an expected return of 0 or -1, and the agent will choose the 0 options when acting greedily.

  • If the agent's opponent can make a mistake that allows the agent to win, then there will be a trajectory through the game with a return of 1, or perhaps some other positive value in cases where the mistake is only made according to random chance.

  • The value of states in the game therefore depends on the opponent's strategy.

  • The opponent's strategy is however not observable - it is unknown and not encoded into the board state.

This should match your intuition when asking the question.

Why, then, do many two-player game-playing reinforcement learning agents for games like chess perform well without using POMDPs?

This is because game theory on these environments supports the concept of "perfect play", and agents that assume their opponent will also attempt to play optimally - without mistakes - will usually do well. Game theory analyses choices leading to forms of the minimax theory - making a choice that your opponent is least able to exploit.

That does mean that such an agent may in fact play sub-optimally against any given opponent. For example, it could potentially turn some losing or drawn situations into a win, but it has little or no capability to do so unless trained against that kind of opponent. Also, playing like this may be a large risk against other opponents, since it may involve playing sub-optimally at some earlier stage.

I have observed a related issue in Kaggle's Connect X competition. Connect 4 is a solved game where player one can force a win, and the best agents are all perfect players. However, they are not all equal. The best performers tweak their agent's choices for player two, to force the highest number of wins against other agents who have not coded a perfect player one. Different kinds of learning strategy lead to different imperfections, and the top of the leaderboard is occupied by the current best perfect agent that also manages to exploit the population of near-perfect agents below it in the rankings. This difference in the top-ranking agents is only possible due to the partially-observable nature of the Connect 4 game played against agents with unknown policies.

What exactly are partially observable environments?

They are environments where in at least some states, the agent does not have access to information that affects the distribution of next state or reward.

Chess played against an opponent where you have a model of their behaviour - i.e. their policy - is fully observable to the agent. This is implicitly assumed by self-play agents and systems, and can work well in practice.

Chess played against an opponent without a model of their behaviour is partially observable. In theory, you could attempt to build a system using a partially observable MDP model (POMDP) to account for this, in an attempt to force an opponent into states where they are more likely to make a decision that is good for the agent. However, simply playing as optimally as possible in response to all plays by the opponent - i.e. assuming their policy is the same near-optimal one as yours, even after observing their mistakes - is more usual in RL.

The original Alpha Go actually had a separate policy network for its own choices and modelling those of humans. This was selected experimentally as performing slightly better than assuming human opponents used the same policy as the self-play agent.

",1847,,1847,,5/23/2021 12:46,5/23/2021 12:46,,,,13,,,,CC BY-SA 4.0 27930,2,,27928,5/23/2021 16:05,,5,,"

Do parallel environments improve the agent's ability to learn or does it not really make a difference?

Yes they can make a difference. There are two ways improvement is seen:

  • Collecting data from multiple trajectories at once reduces correlation in the dataset. This improves convergence for online learning systems like neural networks, which work best with i.i.d. data.

  • Data collection is faster overall, which improves clock time to obtain the same result. This may make better use of other resources too.

Of the two, the first improvement is important for stability, although it can be emulated by running multiple episodes - or restarting from multiple starting points - between batch learning updates.

Specifically, I am using PPO, but I think this applies across the board to other algorithms too.

It does apply to PPO, but the first improvement does not apply across the board. These things need to be true for environments run in parallel to help with stability:

  • Using an on-policy method, or where experience replay is not an option.

  • Using a function approximator for policy and/or value function.

A lot of policy gradient methods match this, including PPO, A3C, REINFORCE. However, for an off-policy method like DQN, the main benefit will be faster data collection.

These effects are discussed in sections 1 and 4 of the paper Asynchronous Methods for Deep Reinforcement Learning which introduced A3C - thanks to DeepQZero for that reference.

",1847,,1847,,5/25/2021 17:04,5/25/2021 17:04,,,,2,,,,CC BY-SA 4.0 27931,1,27934,,5/23/2021 20:48,,5,1719,"

I'm new to reinforcement learning and I'm going through Sutton and Barto. Exercise 2.1 states the following:

In $\varepsilon$-greedy action selection, for the case of two actions and $\varepsilon=0.5$, what is the probability that the greedy action is selected?

They describe the $\varepsilon$-greedy method on pages 27-28 as follows:

...behave greedily most of the time, but every once in a while, say with small probability $\varepsilon$, instead select randomly from among all the actions with equal probability...

The above method makes the agent select an action randomly "every once in a while" from the action space uniformly with probability $\varepsilon$. I find the question imprecise since we don't know the "once in a while" in this exercise (i.e. is it once every $50$ timesteps? every time step?). If it's for every timestep, isn't it like a Bernoulli problem where the parameter is $0.5$? I'd say that the agent has a $0.5$ chance to select a greedy action, but I'm not sure at all.

",44965,,37607,,5/24/2021 15:37,5/24/2021 15:37,What is the probability of selecting the greedy action in a 0.5-greedy selection method for the 2-armed bandit problem?,,1,0,,,,CC BY-SA 4.0 27932,1,,,5/23/2021 21:44,,2,384,"

The PPO objective may include a value function error term when parameters are shared between the policy and value functions. How does this help, and when to use a neural network architecture that shares parameters between the policy and value functions, as opposed to two neural networks with separate parameters?
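For reference, the combined objective I am referring to is (as far as I understand it from the PPO paper):

$$L_t^{CLIP+VF+S}(\theta) = \hat{\mathbb{E}}_t\left[L_t^{CLIP}(\theta) - c_1 L_t^{VF}(\theta) + c_2 S[\pi_\theta](s_t)\right]$$

where $L_t^{VF}(\theta) = \left(V_\theta(s_t) - V_t^{\text{targ}}\right)^2$ is the value function error term, $S$ is an entropy bonus, and $c_1, c_2$ are coefficients. My understanding is that the value function error term is only needed when the policy and value function share network parameters.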

",32517,,2444,,6/4/2021 2:40,10/27/2022 16:07,How does sharing parameters between the policy and value functions help in PPO?,,1,0,,,,CC BY-SA 4.0 27934,2,,27931,5/23/2021 22:12,,4,,"

I read section 2.2 of Sutton and Barto, and I understand your confusion: the $\epsilon$-greedy algorithm is not defined precisely on page 27-28. Selecting an action randomly "every once in a while" with probability $\epsilon$ means selecting an action randomly with probability $\epsilon$ at each timestep and selecting an action greedily with probability $1-\epsilon$ at each timestep. This definition is standard and will be clear as you progress through the book and other relevant literature. For reference, the $\epsilon$-greedy algorithm is used in the pseudocode on page 32 of Sutton and Barto.

The key distinction in this problem is that it's asking for the probability that the greedy action is selected, NOT the probability that an action is selected greedily. Specifically, the greedy action can be selected when the agent selects an action randomly because the greedy action is in the action space and the entire action space is sampled uniformly when selecting an action randomly.

Since $\epsilon=0.5$, the agent will select an action greedily 50% of the time, which will 100% of the time be the greedy action. The agent will select an action randomly the other 50% of the time. Since there are two actions in the action space, the greedy action will be selected 50% of the time when the agent selects an action randomly. Therefore, the probability that the greedy action is selected at any single timestep is as follows:

\begin{align} &p(\mbox{greedy action}) \\ =\ &p(\mbox{greedy action AND greedy selection}) + p(\mbox{greedy action AND random selection})\\ =\ &p(\mbox{greedy selection}) \cdot p(\mbox{greedy action}\ |\ \mbox{greedy selection}) \\ &\hspace{1em}+ p(\mbox{random selection})\cdot p(\mbox{greedy action}\ |\ \mbox{random selection})\\ =\ &(1-\epsilon) \cdot p(\mbox{greedy action}\ |\ \mbox{greedy selection}) + \epsilon \cdot p(\mbox{greedy action}\ |\ \mbox{random selection})\\ =\ &0.5 \cdot p(\mbox{greedy action}\ |\ \mbox{greedy selection}) + 0.5 \cdot p(\mbox{greedy action}\ |\ \mbox{random selection})\\ =\ &0.5 \cdot 1 + 0.5 \cdot 0.5 \\ =\ &0.5+ 0.25 \\ =\ &0.75. \end{align}
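A quick numerical sanity check of this result (a sketch in Python):

import random

trials = 1_000_000
greedy_count = 0
for _ in range(trials):
    if random.random() < 0.5:                          # exploit with probability 1 - epsilon
        action = "greedy"
    else:                                              # explore: uniform over the two actions
        action = random.choice(["greedy", "non-greedy"])
    greedy_count += (action == "greedy")

print(greedy_count / trials)                           # prints approximately 0.75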

",37607,,,,,5/23/2021 22:12,,,,2,,,,CC BY-SA 4.0 27935,1,,,5/24/2021 1:25,,1,565,"

I'm attempting exercise 13.1 in the Sutton and Barto textbook. It asks for an optimal probability for selecting action right in the short corridor scenario (see first 6 lines of the image below for the scenario).

Exercise 13.1: Use your knowledge of the gridworld and its dynamics to determine an exact symbolic expression for the optimal probability of selecting the right action in Example 13.1.

My attempt: Letting $p$ denote the probability of choosing right, I understand that using the Bellman equations, we can solve for the value of $s_1, s_2, s_3$ where the states are numbered from left to right in terms of $p$. We have $v(s_1) = \frac{2-p}{p-1}$, $v(s_2) = \frac{1}{(p-1)p}$, $v(s_3) = -\frac{p+1}{p}$. I can see how we can find the max of each of these functions to get the best optimal policy, given the state we're currently in.

However, how do you find the optimal policy generally (irrespective of starting state)? I found solutions here, which magically arrive at $\frac{p^2-2p+2}{p(1-p)}$. Can someone explain this part?

https://github.com/brynhayder/reinforcement_learning_an_introduction/blob/master/exercises/exercises.pdf

",45562,,,,,10/23/2022 9:27,Sutton and Barto 2nd Edition Exercise 13.1,,3,1,,,,CC BY-SA 4.0 27936,1,,,5/24/2021 6:18,,0,338,"

I was designing a multi-speaker identification model, so I searched for some metrics that one may use. I found two metrics:

  1. EER (equal error rate)
  2. DCF (detection cost function)

What is the difference between them? Is one better than the other for my model?

",36578,,2444,,6/4/2021 2:36,10/27/2022 4:07,"What is the difference between the ""equal error rate"" and ""detection cost function"" metrics?",,1,1,,,,CC BY-SA 4.0 27937,1,,,5/24/2021 6:19,,0,53,"

I am new to deep Q-learning and I am trying to train the OpenAI CartPole-v0 environment using deep Q-learning. Here is my code:

import gym
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' 
import tensorflow as tf
from collections import deque
import numpy as np
import random
import matplotlib
matplotlib.use('tkagg')
import matplotlib.pyplot as plt


EPISODES = 5000
output_dir = "/home/ug2018/mst/18114017/ML/"
EPSILON = 1
REPLAY_MEMORY = deque(maxlen=800)
MIN_EPSILON = 0.01
DECAY_RATE = 0.995
MINIBATCH = 750
GAMMA = 0.99

env = gym.make('CartPole-v0')
state_size = env.observation_space.shape[0]
action_size = env.action_space.n


class DQNagent:

    def __init__(self):

        self.fit_model = self.create_model()

        self.predict_model = self.create_model()
        self.predict_model.set_weights(self.fit_model.get_weights())

        self.targets = []
        self.states = []

    def create_model(self):

        model = tf.keras.models.Sequential()
        model.add(tf.keras.layers.Dense(64, activation ="relu",input_dim = state_size))
        model.add(tf.keras.layers.Dense(128, activation ="relu"))
        model.add(tf.keras.layers.Dense(256, activation ="relu"))
        model.add(tf.keras.layers.Dense(128, activation ="relu"))
        model.add(tf.keras.layers.Dense(64, activation ="relu"))
        model.add(tf.keras.layers.Dense(32, activation ="relu"))
        model.add(tf.keras.layers.Dense(action_size, activation="linear"))
        model.compile(loss="mse", optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), metrics=['accuracy'])
        return model

    def model_summary(self,model):
        return model.summary()

    def get_q(self, state):
        return self.predict_model.predict(state)

    def train(self,batch_size): 
        minibatch = random.sample(REPLAY_MEMORY, batch_size)
        for state, reward, action, new_state, done in minibatch:
            if done :
                target = reward
            else:
                target = reward + (GAMMA * np.amax(self.get_q(new_state)[0]))
            target_f = self.get_q(state) 
            target_f[0][action] = target

            self.states.append(state[0])
            self.targets.append(target_f[0])

        self.fit_weights(self.states,self.targets)
    
  
    def fit_weights(self, states, targets):
        self.fit_model.fit(np.array(states), np.array(targets), batch_size = MINIBATCH, epochs = 1 ,verbose=0)
    
    def predict_save(self, name): 
        self.predict_model.save_weights(name)
    def fit_save(self, name):    
        self.fit_model.save_weights(name)

    



agent = DQNagent()
print(agent.fit_model.summary())




x=[]
y=[]
z=[]

def update_graph(z,y):
    plt.xlabel("Episodes")
    plt.ylabel("Score")
    plt.plot(z,y)
    plt.pause(0.5)
plt.show()
    

for eps in range(EPISODES):
    env.reset()
    done = False
    state = env.reset()
    state = np.reshape(state, [1,state_size])
    time = 0
    exp=0
    elp=0
    while not done:
       
        if EPSILON >= np.random.rand():
            exp +=1
            action = random.randrange(action_size) 
        else:
            elp +=1
            action = np.argmax(agent.get_q(state)[0])
        new_state, reward, done, _ = env.step(action)
        new_state = np.reshape(new_state,[1, state_size])
        if not done:
            reward = -10
        else:
            reward = 10
        REPLAY_MEMORY.append((state,reward,action,new_state,done))
        state = new_state
        time += 1
    x.append([eps,exp,elp,time,EPSILON])
    y.append(time)
    z.append(eps)
    update_graph(z,y)
    if (len(REPLAY_MEMORY)) >= MINIBATCH:
        agent.train(MINIBATCH)
        if EPSILON > MIN_EPSILON:
            EPSILON *= DECAY_RATE
    if eps % 50 == 0:
        agent.predict_save(output_dir + "predict_weights_" + '{:04d}'.format(eps) + ".hdf5")
        agent.fit_save(output_dir + "fit_weights_" + '{:04d}'.format(eps) + ".hdf5")
    with open("score_vs_eps.txt", "w") as output:
        output.write("Episodes"+"   "+"Exploration"+"   " + "Exploitation" + "  "+ "Score" + "  " + "Epsilon"+"\n")
        for eps,exp,elp,time,epsilon in x:
            output.write("      "+str(eps)+"        "+str(exp)+"        "+str(elp)+"        "+str(time)+"       "+"{:.4f}".format(epsilon) +"\n")

agent.predict_model.save('CartPole_predict_model')
agent.predict_model.save('CartPole_fit_model')

The code runs fine, but the model is taking too many episodes to get trained, and even after it scores 200, it does not keep scoring 200 consistently. Could you please help me with how I can train the model in fewer episodes and maintain the 200 score consistently?

Here is part of the training log:

Episodes Exploration Exploitation Score    Epsilon
  0        18        0        18       1.0000
  1        32        0        32       1.0000
  2        43        0        43       1.0000
  3        17        0        17       1.0000
  4        17        0        17       1.0000
  5        16        0        16       1.0000
  6        13        0        13       1.0000
  7        21        0        21       1.0000
  8        16        0        16       1.0000
  9        20        0        20       1.0000
  10        35       0        35       1.0000
  11        14       0        14       1.0000
  12        13       0        13       1.0000
  13        12       0        12       1.0000
  14        16       0        16       1.0000
  15        17       0        17       1.0000
  16        27       0        27       1.0000
  17        24       0        24       1.0000
  18        14       0        14       1.0000
  19        28       0        28       1.0000
  20        20       0        20       1.0000
  21        13       0        13       1.0000
  22        12       0        12       1.0000
  23        23       0        23       1.0000
  24        17       0        17       1.0000
  25        43       0        43       1.0000
  26        61       0        61       1.0000
  27        29       0        29       1.0000
  28        21       0        21       1.0000
  29        17       0        17       1.0000
  30        41       0        41       1.0000
  31        9        0        9        1.0000
  32        18       0        18       1.0000
  33        23       0        23       1.0000
  34        28       0        28       0.9950
  35        24       0        24       0.9900
  36        25       0        25       0.9851
  37        28       1        29       0.9801
  38        26       1        27       0.9752
  39        35       2        37       0.9704
  40        19       0        19       0.9655
  41        48       0        48       0.9607
  42        25       2        27       0.9559
  43        13       0        13       0.9511
  44        20       2        22       0.9464
  45        21       0        21       0.9416
  46        13       0        13       0.9369
  47        28       5        33       0.9322
  48        23       3        26       0.9276
  49        24       1        25       0.9229
  50        20       2        22       0.9183
  51        13       0        13       0.9137
  52        19       1        20       0.9092
  53        13       1        14       0.9046
  54        18       1        19       0.9001
  55        12       1        13       0.8956
  56        29       7        36       0.8911
  57        28       2        30       0.8867
  58        16       1        17       0.8822
  59        28       6        34       0.8778
  60        13       3        16       0.8734
  61        15       4        19       0.8691
  62        19       2        21       0.8647
  63        27       4        31       0.8604
  64        19       4        23       0.8561
  65        16       1        17       0.8518
  66        60       9        69       0.8475
  67        24       1        25       0.8433
  68        21       7        28       0.8391
  69        14       0        14       0.8349
  70        31       4        35       0.8307
  71        64       13       77       0.8266
  72        58       13       71       0.8224
  73        32       9        41       0.8183
  74        15       1        16       0.8142
  75        23       6        29       0.8102
  76        27       5        32       0.8061
  77        66       6        82       0.8021
  78        30       6        36       0.7981
  79        74       22       96       0.7941
  80        14       1        15       0.7901
  81        18       1        19       0.7862
  82        28       7        35       0.7822
  83        28       4        32       0.7783
  84        12       2        14       0.7744
  85        10       2        12       0.7705
  86        21       4        25       0.7667
  87        13       6        19       0.7629
  88        19       6        25       0.7590
  89        16       4        20       0.7553
  90        46       16       62       0.7515
  91        12       1        13       0.7477
  92        30       15       45       0.7440
  93        38       9        47       0.7403
  94        14       7        21       0.7366
  95        10       1        11       0.7329
  96        16       8        24       0.7292
  97        10       2        12       0.7256
  98        20       5        25       0.7219
  99        19       7        26       0.7183
  100       31       9        40       0.7147
  .
  .
  . 
  1522        0        104       104      0.0100
  1523        0        35        35       0.0100
  1524        0        27        27       0.0100
  1525        0        52        52       0.0100
  1526        0        25        25       0.0100
  1527        1        199       200      0.0100
  1528        0        30        30       0.0100
  1529        0        57        57       0.0100
  1530        0        35        35       0.0100
  1531        0        25        25       0.0100
  1532        0        22        22       0.0100
  1533        0        24        24       0.0100
  1534        1        199       200      0.0100
  1535        0        68        68       0.0100
  1536        0        200       200      0.0100
  1537        0        22        22       0.0100
  1538        2        42        44       0.0100
  1539        1        111       112      0.0100
  1540        0        91        91       0.0100
  1541        0        45        45       0.0100
  1542        2        108       110      0.0100
  1543        1        181       182      0.0100
  1544        0        30        30       0.0100
  1545        0        21        21       0.0100
  1546        1        25        26       0.0100
  1547        4        196       200      0.0100
  1548        0        95        95       0.0100
  1549        0        53        53       0.0100
  1550        0        55        55       0.0100
  1551        0        29        29       0.0100
  1552        0        40        40       0.0100
  1553        0        25        25       0.0100
  1554        0        33        33       0.0100
  1555        0        63        63       0.0100
  1556        0        23        23       0.0100
  1557        0        45        45       0.0100
  1558        0        25        25       0.0100
  1559        0        36        36       0.0100
  1560        0        24        24       0.0100
  1561        1        31        32       0.0100
  1562        0        30        30       0.0100
  1563        1        56        57       0.0100
  1564        0        22        22       0.0100
  1565        0        20        20       0.0100
  1566        1        22        23       0.0100
  1567        0        45        45       0.0100
  1568        1        50        51       0.0100
  1569        0        25        25       0.0100
  1570        0        30        30       0.0100
  1571        2        198       200      0.0100
  1572        2        198       200      0.0100
  1573        1        185       186      0.0100
  1574        0        26        26       0.0100
  1575        4        196       200      0.0100
  1576        3        197       200      0.0100
  1577        1        29        30       0.0100
  1578        0        25        25       0.0100
  1579        0        32        32       0.0100
  1580        3        197       200      0.0100
  1581        1        23        24       0.0100
  1582        0        25        25       0.0100
  1583        0        66        66       0.0100
  1584        1        27        28       0.0100
  1585        0        32        32       0.0100
  1586        0        21        21       0.0100
  1587        0        23        23       0.0100
  1588        1        47        48       0.0100
  1589        0        42        42       0.0100
  1590        0        26        26       0.0100
  1591        0        47        47       0.0100
  1592        0        200       200      0.0100
  1593        2        52        54       0.0100
  1594        1        19        20       0.0100
  1595        0        33        33       0.0100
  1596        0        27        27       0.0100 
  1597        1        79        80       0.0100
  1598        0        54        54       0.0100
  1599        0        50        50       0.0100
  1600        0        25        25       0.0100

I initialized epsilon at 1 and it has already decayed to its minimum possible value. Still, the score fluctuates heavily from episode to episode. Why is this happening? How can I make the training more consistent?

",47342,,2444,,6/4/2021 2:36,6/4/2021 2:36,CartPoleV0 model is not getting trained in even after 1500+ episodes using deep Q-learning,,0,6,,,,CC BY-SA 4.0 27938,2,,27936,5/24/2021 9:51,,0,,"

Regarding DCF, Carbonell et al. (2007) state "Although it is an excellent evaluation tool, the DCF has the limitation that it has parameters that imply a particular application of the speaker detection technology."

For EER the authors make the following statement: "The EER is a concise summary of the discrimination capability of the detector. As such it is a very powerful indicator of the discrimination ability of the detector, across a wide range of applications. However, it does not measure calibration, the ability to set good decision thresholds."

References

Carbonell, J. G., Siekmann, J., Müller, C., van Leeuwen, D. A., & Brümmer, N. (2007). An Introduction to Application-Independent Evaluation of Speaker Recognition Systems. In Speaker Classification I (pp. 330–353).

",5763,,,,,5/24/2021 9:51,,,,0,,,,CC BY-SA 4.0 27939,2,,15621,5/24/2021 10:13,,1,,"

Look at the equation for computing the hidden state as a function of the cell state and output gate: $$ h_t = \tanh(C_t)\circ o_t $$ Since the element-wise product requires both operands to have the same shape, this equation implies that the hidden state and cell state have the same dimensionality.
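For illustration only (not part of the original answer), here is a minimal PyTorch sketch confirming that the two states share the same shape; the input/hidden sizes are arbitrary assumptions:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=32, batch_first=True)
x = torch.randn(4, 7, 10)        # (batch, time, features)
out, (h_n, c_n) = lstm(x)
print(h_n.shape, c_n.shape)      # both torch.Size([1, 4, 32])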

",19183,,,,,5/24/2021 10:13,,,,0,,,,CC BY-SA 4.0 27940,1,,,5/24/2021 11:11,,0,30,"

I am implementing A3C for the CartPole environment. I want to compare the results I got from A3C with the ones I got from AC1. The problem is that I don't know which worker process to report. If I use, let's say, 11 processes, should I take the first one that reaches an average of 495 points (over the last 100 episodes), the last one, or the mean over all of them?

I don't want to take the first one that reaches 495, since it is using a model that was already updated by the other processes, which feels like cheating. Is there an accepted convention I can follow to report valid results?

",47348,,2444,,6/4/2021 2:35,6/4/2021 2:35,How can I compare the results of AC1 with the results of A3C (on the CartPole environment)?,,0,4,,,,CC BY-SA 4.0 27942,1,,,5/24/2021 12:08,,0,332,"

I have a dataset of 20k images of infected mango. I have built a web-based app using Flask, where a user can upload a picture, and my CNN model detects the disease. I have 6 classes in the model, which correspond to 6 types of diseases.

My question is: how do I train the model so that, if a user uploads any picture that is not an infected mango, the model outputs "not mango"?

",47351,,2444,,6/4/2021 2:26,6/4/2021 2:26,"How can my CNN produce an ""unknown"" label?",,0,3,,,,CC BY-SA 4.0 27943,2,,27935,5/24/2021 13:00,,1,,"

There are problems with both the approach and the expressions that you have. I don't want to just give the correct solution, though, that's an exercise for you to go through and learn from your own experience trying to accomplish it. Instead, let me illustrate that your expressions for $v(s_i)$ are wrong. To do that we'll just do a Monte-Carlo estimate for a range of values of $p$ and compare to your expression.

Here's the Python code that runs a single playout, starting from state state and following the policy that chooses "right" with probability p. It returns the collected reward:

import numpy as np

def playout(state, p):
    """Run a single playout from `state`, choosing "right" with probability p."""
    reward = 0
    while state != 0:                       # state 0 is terminal
        action_right = np.random.rand() < p
        # in state 2 the chosen action's direction is flipped
        move_right = action_right if state != 2 else (not action_right)
        state = state - 1 if move_right else state + 1
        state = 3 if state > 3 else state   # can't move past state 3
        reward -= 1                         # -1 reward per step until termination
    return reward

Then we write a simulation function that runs the playout multiple times, collects the reward counts, and returns the average reward (so it essentially estimates $v^\pi(s_i)$):

from collections import defaultdict

def simulate(state, p, n):
    """Estimate v(state) for the policy parametrised by p, using n playouts."""
    rewards = defaultdict(int)
    for _ in range(n):
        rewards[playout(state, p)] += 1                 # count how often each return occurs
    results = np.array([[r, c] for r, c in rewards.items()]).T
    reward, nplayouts = results[0], results[1]
    value = (reward * nplayouts).sum() / nplayouts.sum()   # weighted average return
    return value

Finally, I make a grid in p and run 10,000 playouts for each value of p:

ps = np.linspace(0, 1, 51)[1:-1]            # grid of p values, excluding 0 and 1
v3 = [simulate(3, p, 10000) for p in ps]
v2 = [simulate(2, p, 10000) for p in ps]
v1 = [simulate(1, p, 10000) for p in ps]

I've plotted the resulting value estimates for each state, together with your expression for them (blue curves) and the correct expression (red curve) that I obtained by actually writing down and solving the equations:

As you can see, the expressions you've presented are far off from the results that the simulation returns. More than that, the asymptotic behavior of your solutions as $p\to0$ and $p\to1$ doesn't make much sense.

I obtained the expressions for the red curves above by solving the system for $v(s_i)$ myself, without relying on the wrong solutions that can be googled on the internet, and I'd recommend you do the same.

Finally, the exercise asks you to find the optimal $p$ for a policy that starts at $s_3$, not "irrespective of the starting state" as you thought it should be. Unlike your expression, the correct expression for $v(s_3)$ has a maximum, which can be found analytically: $$\max_p v(s_3) = -6-4\sqrt2 \simeq -11.6$$ $$ \text{at}\quad p = ??? \simeq 0.59$$

",20538,,,,,5/24/2021 13:00,,,,2,,,,CC BY-SA 4.0 27944,1,27961,,5/24/2021 15:13,,0,22,"

My problem: I own warning system where I collect data from institutions and send them over through various ways to users. I would like to hear your advice on what approach I can use for solving my problem with earthquake intensity far from epicenter. Since seismogical institutions mostly issue info about intenstiy of an earthquake for the epicenter, I would need to predict and classify what intensity the earthquake can have for places distant of several km/miles from the epicenter.

As an input/training set, I can use data of historical earthquakes and their magnitudes in an epicenter. Then I would need to fill mostly "by hand" an information about intensity based on seismological records, historical testimonies, chronicles atc.

What I need from AI: I need "something" that would predict earthquake intensity based on dataset of historical earthquakes.

Example/TLDR: There is an earthquake with magnitude 3.8, distant 80 km with depth 6 km. Based on dataset of historical earthquakes (with same type of information + witnessed and collected intensity), and output, I would need prediction of intensity of an eartquake 80 km from the epicenter.

",44567,,,,,5/25/2021 19:56,What approach would work well for predicting earthquake intensity based on historical data?,,1,0,,,,CC BY-SA 4.0 27946,2,,27850,5/24/2021 18:31,,0,,"

I reviewed some videos of Amidar on YouTube, and it seems that the game screen is fixed for a few seconds before the start of each level. This is probably to give the human player a chance to rest between levels and maybe even pause the game to take a break, as the gameplay looks very intense.

How do I get this environment to move immediately?

I suggest creating a gym wrapper to change the reset function of the environment to produce a different start state. Since the initial 85 frames are not influenced by the agent's actions, these frames are unhelpful and unnecessary for your agent's training. You could consider calling the step function 85 times in your new implementation of the reset function using a random action; then return the resultant state. When calling your newly implemented reset function, the agent's start state will be frame 85 instead of frame 0. As you noted, frame 85 is immediately followed with many frames of meaningful movement, which will provide higher-quality training data. In contrast, the original implementation of the reset function starts at frame 0, which is followed by 85 unhelpful fixed frames.
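For illustration, a minimal sketch of such a wrapper might look like the following; the class name, the number of skipped frames, and the classic 4-tuple gym step API are assumptions, not part of the original answer:

import gym

class SkipIntroWrapper(gym.Wrapper):
    """Hypothetical wrapper: steps through the fixed intro frames on reset."""
    def __init__(self, env, skip_frames=85):
        super().__init__(env)
        self.skip_frames = skip_frames

    def reset(self, **kwargs):
        obs = self.env.reset(**kwargs)
        for _ in range(self.skip_frames):
            obs, _, done, _ = self.env.step(self.env.action_space.sample())
            if done:  # should not happen during the intro, but just in case
                obs = self.env.reset(**kwargs)
        return obs

# env = SkipIntroWrapper(gym.make("Amidar-v0"), skip_frames=85)  # env id assumed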

Because of this, my buffer is almost always filled with the same images and hence my network isn't learning anything.

ATARI environments are difficult even for some of the more powerful RL algorithms. For example, it is one of the harder environments for DQN to solve (see the Nature paper, pg. 531). Don't worry if your agent isn't learning right away. It could take hours or even days to learn enough to noticeably improve, even with a correct implementation of a powerful algorithm. Good luck, and welcome to AI Stack Exchange!

",37607,,,,,5/24/2021 18:31,,,,0,,,,CC BY-SA 4.0 27947,1,27953,,5/24/2021 21:31,,2,213,"

In music information retrieval, one usually converts an audio signal into some kind of "sequence of frequency vectors", such as an STFT or Mel spectrogram.

I'm wondering if it is a good idea to use the transformer architecture in a self-supervised manner -- such as auto-regressive models, or BERT in NLP -- to obtain a "smarter" representation of the music than the spectrogram itself. Such a smart pretrained representation could then be used for further downstream tasks.

From my quick google search, I found several papers which do something similar, but -- to my surprise -- all use some kind of symbolic/discrete music representation such as scores. (For instance here or here).

My question is this:

Is it realistic to train such an unsupervised model directly on the Mel spectrogram?

The loss function would not be "log softmax of next word probability", but some kind of l2-distance between "predicted vector of spectra" and "observed vector of spectra", in the next time step.
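To make the idea concrete, here is a hypothetical sketch of that objective in PyTorch; `model` and the tensor layout (batch, time, n_mels) are assumptions for illustration, not an existing implementation:

import torch
import torch.nn.functional as F

def next_frame_l2_loss(model, mel):
    # mel: (batch, time, n_mels); `model` is any causal sequence model
    pred = model(mel[:, :-1, :])     # predict frames 1..T-1 from frames 0..T-2
    target = mel[:, 1:, :]           # the observed next frames
    return F.mse_loss(pred, target)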

Did someone try it?

",9092,,2444,,5/26/2021 8:13,9/17/2021 16:34,Is it realistic to train a transformer-based model (e.g. GPT) in a self-supervised way directly on the Mel spectrogram?,,2,0,,,,CC BY-SA 4.0 27949,1,,,5/24/2021 23:02,,1,43,"

I am interested in your opinion on whether it makes sense to use a batch normalization layer in a network that is trained with a batch size of 1. This is a special case that is part of an experiment. What effects can be expected?

",13295,,,,,5/24/2021 23:02,Does it make sense to apply batch normalization to a batch size of 1?,,0,0,,,,CC BY-SA 4.0 27951,1,,,5/25/2021 3:27,,1,51,"

In 2014 it was widely reported that the Turing Test had been passed, and that this was a major AI milestone.

See: Computer AI passes Turing test in 'world first [BBC]; Turing Test Success Marks Milestone in Computing History [reading.ac.uk]; What is the Turing test? And are we all doomed now? [The Guardian]

Never mind that the "Imitation Game" is subjective, and that porn bots have been passing it since there were porn bots—University of Reading was clear about their metrics.

But I understand that it did lead to revised tests, and extension of thinking on what constitutes passing such a threshold.

  • How have Turing Tests been extended since 2014?

How strong was the 2014 test? What have been the criticisms of the 2014 determinations?

",1671,,,,,5/25/2021 3:27,"Current extensions of the ""Turing Test""?",,0,1,,,,CC BY-SA 4.0 27953,2,,27947,5/25/2021 9:02,,3,,"

The reason most music-generation models use discrete representations is because the long-term structures of music are very challenging to model. Note that the MIDI data in MAESTRO (used in the two papers you linked) encodes performances, not scores, so they include timing and accents of real performers--but are still sequences of discrete events, not audio.

There's been some work on learning discrete representations directly from audio, such as with vector-quantized variational autoencoders (VQ-VAEs). Typically an autoregressive model is trained on top of the learned representation; Jukebox used a transformer for that. By the way, I'd highly recommend reading through the "related work" section of the Jukebox paper for an overview of work on the audio/speech synthesis task.

wav2vec is probably closest to what you're describing. They train a transformer on raw audio, self-supervised, in order to learn good representations of human speech for the speech-to-text task.

As far as training directly on spectrograms goes, there's MelNet, a somewhat exotic RNN trained for a variety of audio synthesis tasks, including music.

Hope this helps!

",41227,,41227,,6/22/2021 3:09,6/22/2021 3:09,,,,0,,,,CC BY-SA 4.0 27954,1,,,5/25/2021 9:31,,3,284,"

I'm currently trying to understand how AlphaZero works. There is one thing about the training of AlphaZero's policy head that confuses me. Basically, in the AlphaGo Zero paper (where the major part of the AlphaZero algorithm is explained), a combined neural network is used to estimate both the value of the position and a good policy. More precisely, the loss function used is:

$$L = (z-v)^2 - \pi^\top \log(\textbf{p}) + c \Vert \theta \Vert^2$$

where $z$ is the outcome of the game, $v$ is the value estimated by the neural network, $\pi$ is the policy calculated by the MCTS and $\textbf{p}$ is the policy predicted by the neural network.

I would like to focus on the policy head loss. Basically, we are trying to minimize the difference between the policy calculated by the MCTS and the policy predicted by the neural network. That makes sense when the player has won the game, but it doesn't (at least from my point of view) when the player has lost it. You would be teaching your neural network a policy that has lost. Maybe the loss was unavoidable, but if it wasn't that's definitely not the policy we want to learn.

I have programmed a slightly simplified version that works well with Tic-Tac-Toe. But for Connect 4, some problems related to this arise. Basically, it learns a bad policy. At the beginning of training, the values estimated for each board are quite random, and that makes the policy shift in a random (and wrong) direction. At the same time, that makes the value function wrong (because we are losing games that we could have easily won), worsening the policy even more.

I suppose that with enough training this problem disappears. The correct value and policy should backpropagate from the leaf nodes of our simulation. Even if the neural network policy gives a probability of 0 to the optimal action, thanks to the Dirichlet noise added to the probabilities the MCTS can find that optimal action and learn it.

However, several things confuse me:

  1. In AlphaGo's paper, they take into account whether the outcome of the game was a win or a loss when training the policy network with reinforcement learning. More precisely, the update performed is

    $$\Delta p \propto \frac{\partial \log{p(a_t|s_t)}}{\partial p} z_t $$

    where $z_t = 1$ if we have won and $z_t = -1$ if we have lost. So DeepMind does take into account whether the action was good or not and changes the direction of the optimization accordingly.

  2. I haven't found anywhere in AlphaGo Zero's paper that we are training just with the examples where the player has won, so they might be using all the data gathered, including also the examples where the player has lost. As far as I know, they don't mention anything related to this problem.

  3. $\pi$ (the policy provided by the MCTS) is calculated as the exponentiated visit count of each action

    $$\pi(a|s_0) = \frac{N(s_0,a)^{1/\tau}}{\sum_b N(s_0,b)^{1/\tau}}$$

    where $\tau$ is a parameter that controls the "temperature". DeepMind's team sets $\tau = 1$ during the first 30 moves to ensure exploration. After that, they set $\tau \approx 0$ to ensure that the action considered the best one (and thus simulated the most times) is the one played. However, that means $\pi$ is something like

    $$[0,0,\dots,0,1,0, \dots, 0]$$

    making the policy changes a bit aggressive and especially harmful if the move is not a good one (making it even harder to recover from a bad action).

Am I missing something? Is this the intended way of working of the algorithm? Is there any way to overcome the learning of bad policies?

",47373,,,,,8/20/2022 10:09,How does policy network learn in AlphaZero?,,1,1,,,,CC BY-SA 4.0 27955,2,,1987,5/25/2021 12:43,,0,,"

This is the architecture proposed and tested on the TensorFlow Playground for the spiral dataset: two hidden layers with 8 neurons each, using the tanh activation function.
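As a rough illustration only, a minimal Keras sketch of that architecture might look like this; the 2 input features and the single sigmoid output are assumptions based on the playground's two-class spiral dataset:

from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(8, activation="tanh", input_shape=(2,)),  # hidden layer 1
    keras.layers.Dense(8, activation="tanh"),                    # hidden layer 2
    keras.layers.Dense(1, activation="sigmoid"),                 # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])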

",47378,,2444,,5/25/2021 12:46,5/25/2021 12:46,,,,0,,,,CC BY-SA 4.0 27957,1,27974,,5/25/2021 14:55,,0,137,"

"Text" here refers to a character, a word, or a sentence.

Is there any recent textbook that covers everything from classical methods to modern techniques for embedding text?

If a single textbook is unavailable then please recommend a list of books covering the whole spectrum as mentioned above.

Modern textbooks that are similar to Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze, Introduction to Information Retrieval, Cambridge University Press. 2008 are highly encouraged.

This question asks for textbook/research paper on word embedding only.

",18758,,18758,,6/22/2021 4:22,6/22/2021 4:22,Book(s) for text embedding,,2,0,,,,CC BY-SA 4.0 27958,1,,,5/25/2021 17:23,,0,25,"

I want to build a machine translation system from English to Georgian. Georgian is a language similar to (and simpler than) Russian. Its syntax looks like base + suffix: only the suffix changes, the base is usually fixed, and tense is expressed by changing only the suffix. Unfortunately, I couldn't find a morphological analyser for the Georgian language, so could you link to or suggest useful resources to help me build one? Or can you give me some other suggestions?

",36107,,2193,,5/27/2021 8:47,5/27/2021 8:47,How to build a custom morphological analyser for translation system,,1,3,,,,CC BY-SA 4.0 27960,1,,,5/25/2021 17:49,,1,33,"

Lately, I've been getting into energy-based models (EBMs) through some of Yann LeCun's recent talks, where he advocates the use of non-normalized models because it allows for more flexibility in the choice of the loss function and convenient inference over high-dimensional spaces.

However, after reading some papers on the recent approaches to training EBMs (e.g Kingma's How to Train Your Energy-Based Models), most approaches still use likelihood to optimize the EBMs parameters.

I'm confused about the necessity of using a normalized likelihood for training, when the whole idea of EBMs is that they are not normalized. Why are methods that shape the energy function directly not popular?

",47383,,2444,,6/4/2021 2:21,6/4/2021 2:21,Necessity of likelihood in training energy-based models,,0,0,0,,,CC BY-SA 4.0 27961,2,,27944,5/25/2021 19:56,,0,,"

You can do prediction like this with regression. Take a look at scikit-learn models for regression: https://scikit-learn.org/stable/modules/classes.html#module-sklearn.linear_model

Here's an example:

# This is made up data!
#        epicenter magnitude, depth_m, distance_km, local_magnitude
data = [[3.5,                 6000,    61.3,        2.1],
        [9.1,                 16000,   261.0,       6.0],
        [9.1,                 16000,   96.5,        8.2],
        [3.8,                 6000,    80.0,        2.6],
        [2.5,                 4000,    101.3,       0.4],
        [7.0,                 9000,    81.9,        6.1],
        [5.1,                 7000,    21.3,        4.1]]

from sklearn.linear_model import LinearRegression
import numpy as np

data = np.asarray(data) # to numpy array
X=data[:,:3] # the input variables
y=data[:,3] # the output variable (local magnitude)
reg = LinearRegression().fit(X, y) # fit the model based on the data
reg.predict(np.array([[6.6, 8000, 27]])) # Get a prediction based on new inputs

>>> 6.1511027
",12278,,,,,5/25/2021 19:56,,,,0,,,,CC BY-SA 4.0 27963,1,,,5/25/2021 22:55,,1,91,"

I am currently studying as an MSCS student and my research is based on Evolutionary Algorithm as Reinforcement Learning, and I am confused about the following terms:

  1. What is the difference between Evolutionary Reinforcement Learning and an Evolutionary Algorithm considered as Reinforcement Learning?
  2. If the Evolutionary Algorithm is Reinforcement Learning, what is the definition of the state?
  3. Which part of the Evolutionary Algorithm corresponds to the rewards in Reinforcement Learning?
  4. What would the transition and action functions be? Can anyone please help me with this?
",47390,,2444,,6/5/2021 13:31,6/5/2021 13:31,What is the difference between ERL and EA by considering it as RL?,,0,4,,,,CC BY-SA 4.0 27965,1,,,5/26/2021 2:01,,2,42,"

I am trying to train a CNN to learn 5D (kind of) data. The data is structured as follows. It has three spatial dimensions [x, y, z], but it also has two "internal dimensions" [theta, phi] at each [x, y, z]. What I am trying to do is upsample the internal space from fewer [theta, phi] data points.

When I train a 2D residual network on just the internal space at random [x, y, z] points, it learns, but there is some noise in the [x, y, z] space; there should be a correlation with neighbouring points.

What I wanted was some way to also include convolutions over the 3D [x, y, z] space to try and remedy this.

A possible but maybe naive approach is to do the following: Stack the images as [theta * phi, x, y, z] (so, many input channels) and then have some 3d convolution layers, then after that stack as [x * y * z, theta, phi] and take 2d convolutions in the internal space.

Another approach is to use 5d filters that span over all dimensions. This might be hard to implement for me and probably very memory hungry.

Are there any other ways?

",47393,,2444,,6/4/2021 11:50,6/4/2021 11:50,"How to implement a (3 + 2)-dimensional convolutional layer where the 2d space is ""internal""?",,0,3,,,,CC BY-SA 4.0 27968,2,,27958,5/26/2021 7:52,,1,,"

Huggingface has a Helsinki-NLP/opus-mt-ka-en repository with a Georgian (Ka) to English (en) model. A tokenizer_config.json is available

",40434,,,,,5/26/2021 7:52,,,,1,,,,CC BY-SA 4.0 27970,1,27988,,5/26/2021 14:23,,0,93,"

I am considering a rather typical regression problem, but, for practice, I am trying to implement this as a classification problem.

The setup is as follows. I have $\mathbb{R}$-valued labels $y_i \in [-1,1]$, which I then discretize to $N$ buckets -- my classification problem is to then predict the labels to the nearest bucket.

This is rather straightforward and easy to implement with a cross-entropy loss function. However, I do not believe that this is the best option, as I would ideally like my predictions to be close to their correct bucket, even if I do not predict them correctly (which will be more difficult if I take $N$ larger).

My current approach involves using a mean-squared error loss function. My network outputs logits for each bucket, I apply a softargmax (so the network remains differentiable) and then convert the output of the network into the $\mathbb{R}$-valued prediction.
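To make this concrete, here is a minimal sketch of the soft-argmax idea in PyTorch; the bucket centres, tensor shapes, and function name are assumptions for illustration:

import torch

def soft_prediction(logits, bucket_centres):
    # logits: (batch, N) network outputs; bucket_centres: (N,) values in [-1, 1]
    probs = torch.softmax(logits, dim=-1)
    return (probs * bucket_centres).sum(dim=-1)   # differentiable real-valued output

# loss = torch.nn.functional.mse_loss(soft_prediction(logits, centres), y)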

My (very premature) results are nothing to write home about. So, I ask, is there a more natural loss function that I could consider for this exercise?

",47379,,2444,,6/4/2021 1:58,6/4/2021 1:58,Which loss function could I use to solve a regression problem as a classification problem (where we discretize the labels into buckets)?,,1,2,,,,CC BY-SA 4.0 27971,1,,,5/26/2021 17:03,,3,91,"

Can we use reinforcement learning for sequence-to-sequence tasks? If yes, whether or not this is a good choice, how could this be done?

",47423,,2444,,6/4/2021 2:03,6/4/2021 2:03,Can Reinforcement Learning be used to generate sequences?,,1,0,,,,CC BY-SA 4.0 27972,2,,27971,5/26/2021 18:48,,2,,"

One renowned example for this use case is SeqGAN:

Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing gradient policy update. The RL reward signal comes from the GAN discriminator judged on a complete sequence, and is passed back to the intermediate state-action steps using Monte Carlo search. Extensive experiments on synthetic data and real-world tasks demonstrate significant improvements over strong baselines.

",4446,,,,,5/26/2021 18:48,,,,0,,,,CC BY-SA 4.0 27973,2,,26956,5/26/2021 21:19,,-1,,"

Just a few commonsensical remarks about why this kind of intelligence definition seems unable to capture the logic of life:

  1. Optimization only makes sense in a stationary environment. When many agents learn and interact, they are building a constantly changing environment.

  2. Survival and reproduction is the only thing that really matters, and it does not require optimization, just good enough solutions.

  3. The survival of individual living organisms heavily depends on adequate hardwired sensory abilities that can slowly change throughout many generations. But smarter individuals can use their brains (or whatever tools they may have endowed with plasticity) to quickly learn and adapt in non-stationary environments. Fast learning, not optimization, is what these agents really need.

",47432,,40434,,5/27/2021 8:45,5/27/2021 8:45,,,,1,,,,CC BY-SA 4.0 27974,2,,27957,5/26/2021 23:26,,1,,"

I propose you try this. It's about modern Natural Language Processing, Computational Linguistics and Speech Recognition, including Embeddings methods.

",36055,,36055,,6/1/2021 12:53,6/1/2021 12:53,,,,3,,,,CC BY-SA 4.0 27976,2,,26956,5/27/2021 1:44,,2,,"

My sense is that everyone is pretending Intelligence doesn't have a grounded definition, from which all other definitions arise:

  • Intelligence is a measure of utility in an action space μ(υ)

It can be a relative measure, in relation to other rational agents, or absolute in relation to solved games (problems). An action space is any context, and formalized as problems or sets of problems, typically grouped by complexity class.

Hutter and Legg is an explication of this grounded definition which accounts for unlimited contexts (complexity classes/environments) and for increasing utility of a given agent over time (learning/optimization.) Intelligence itself does not require learning or general applicability, but Hutter & Legg does not refute this, merely grades static intelligence and narrow intelligence as more limited.

Even this is subject to context, as more limited rationality can be more optimal.

  • The definition of intelligence is grounded because, while the term "intelligence" is a symbol, intelligence itself is a function, the strength of which is evaluated by a measurement of some result (utility)

It doesn't require defining the function to understand it as a function: measurement of a result requires mechanism and decision/action.

You will find this natural language definition applies to even emotional intelligence, which relates to the observational capability of the rational agent in context, and allow that rational agent to make more optimal decisions in context.

This is similar in spirit to truth, only grounded in a formal logic context, where it is a condition and result, not an assertion. By contrast, the conditionality of truth is often obscured in a natural language context, where it is routinely applied to unvalidatable informal statements, and can even be conflated with the statement itself à la: "this is the truth!"

",1671,,1671,,5/27/2021 5:34,5/27/2021 5:34,,,,0,,,,CC BY-SA 4.0 27979,1,,,5/27/2021 8:21,,4,129,"

Transformers are modified heavily in recent research. But what exactly makes a transformer a transformer? What is the core part of a transformer? Is it the self-attention, the parallelism, or something else?

",45239,,2444,,11/30/2021 15:12,6/13/2022 20:04,What makes a transformer a transformer?,,2,1,,,,CC BY-SA 4.0 27981,2,,27979,5/27/2021 12:03,,1,,"

There is not one answer to this question, but one could argue that transformers heavily rely on

  • transforming each input into latent subspaces of queries, keys, and values in order to generate attention scores (a minimal sketch follows this list)
  • a pool of such transformations of the attention vectors (multi-head attention), which lets the model capture richer interpretations, as different sections of the input embedding can attend to different per-head subspaces that link back to each input
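For illustration only, here is a minimal NumPy sketch of single-head scaled dot-product self-attention; the shapes and the single-head simplification are assumptions, not part of the original answer:

import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # X: (seq_len, d_model); Wq, Wk, Wv: (d_model, d_head)
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # scaled dot-product scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # attention-weighted values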
",40560,,,,,5/27/2021 12:03,,,,0,,,,CC BY-SA 4.0 27983,1,,,5/27/2021 15:04,,0,268,"

I have created an RL model that uses a QBasedPolicy with a neural network for estimating Q-values. My action space consists of 27 actions, where each action is a 3-tuple whose values can each be 1, 2, or 3. After training, the model always chooses the same action regardless of the state, for example (1, 2, 3) for all states. I know this is not an optimal policy, but I cannot figure out why this is happening. The policy I am using is given below; the code is in Julia and uses the ReinforcementLearning.jl library.

# Now we use a QBasedPolicy and neural network to estimate values
# Create a flux based DNN for q - value estimation
STATE_SIZE = length(env.channels) # 3
ACTION_SIZE = length(action_set)    # 27     
model = Chain(
        Dense(STATE_SIZE, 48, relu),
        Dense(48, 48, relu),
        Dense(48, 48, relu),
        Dense(48, 48, relu),
        Dense(48, ACTION_SIZE)
    ) |> cpu

# optimizer 
η = 1f-2 # Learning rate  
η_decay = 1f-3
opt = Flux.Optimiser(ADAM(η), InvDecay(η_decay))

# Create policies for each agent
single_agent_policy = Agent(
        policy = QBasedPolicy(;
                learner = BasicDQNLearner(;
                    approximator = NeuralNetworkApproximator(;
                        model = model,
                        optimizer = opt
                    ),
                    min_replay_history = 500
                ),
                explorer = EpsilonGreedyExplorer(
                    kind = :linear,
                    ϵ_stable = 0,
                    ϵ_init = 0.5,
                    warmup_steps = 300,
                    decay_steps = 700,
                    is_training = true,
                    is_break_tie = false,
                    step = 1
                )
            ),
            trajectory = CircularArraySARTTrajectory(;
                        capacity = 500,
                        state=Array{Float64, 1} => (STATE_SIZE)
                    )
            )

During training, the model explores and exploits various actions in different states, but, during the testing/exploitation phase, it always outputs the same action for every state.

I searched for similar questions on the web, but none of the questions were well answered.

",47447,,2444,,6/4/2021 1:54,1/12/2022 10:39,DQN learns to always choose the same action for all states,,0,2,,,,CC BY-SA 4.0 27985,1,,,5/27/2021 17:00,,0,100,"

I have a training dataset of 80 text documents, with an average of 25000 characters per document, and 210 unique tags.

How can I perform multi-class text classification with such a small dataset, without using the pre-trained model? If it cannot be done without a pre-trained model, then which pre-trained model should I use?

",47446,,2444,,6/4/2021 1:51,7/4/2021 2:05,How to perform multi-class text classification with a dataset of 80 documents?,,1,0,,,,CC BY-SA 4.0 27986,2,,27985,5/27/2021 18:58,,1,,"

For pretrained models in NLP, look at BERT and RoBERTa. If you can find a language model on Hugging Face that was trained on a superset of your data's domain, use that pretrained model.

For the multi-class classification itself, since your data is limited, look at augmentations in NLP (most notably back-translation, among others). Use focal loss to handle class imbalance.

Since you are going to fine-tune, use a small learning rate, such as 1e-5. But you will also be adding your own layers, so keep 1e-5 for the pretrained model and 1e-3 for the new layers you add; a minimal sketch follows.
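A minimal PyTorch sketch of the two learning rates, assuming an AdamW optimizer; `pretrained_model` and `new_head` are hypothetical stand-ins for the backbone and the new classification layer:

import torch
import torch.nn as nn

pretrained_model = nn.Linear(768, 768)   # stand-in for e.g. a BERT encoder
new_head = nn.Linear(768, 210)           # stand-in for the new classification head

optimizer = torch.optim.AdamW([
    {"params": pretrained_model.parameters(), "lr": 1e-5},  # fine-tuned weights
    {"params": new_head.parameters(), "lr": 1e-3},          # freshly added layers
])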

",37203,,,,,5/27/2021 18:58,,,,0,,,,CC BY-SA 4.0 27988,2,,27970,5/27/2021 19:06,,0,,"

Bucketing is often used in industry because we are not sure what values we are going to get. Observing the distribution for a while will give you a good idea of how to bucket. But there is a caveat: if there isn't a lot of variance, then bucketing won't help much.

Also, in terms of losses, since different buckets will contain different numbers of examples, class imbalance will arise, so focal loss is the better choice here; a minimal sketch is given below.
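A minimal PyTorch sketch of a multi-class focal loss, assuming integer class targets; the function name and default gamma are illustrative choices:

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    # down-weights easy, well-classified examples relative to hard ones
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                    # probability assigned to the true class
    return ((1.0 - pt) ** gamma * ce).mean()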

I hope you get results you can write home about.

",37203,,,,,5/27/2021 19:06,,,,2,,,,CC BY-SA 4.0 27989,1,,,5/27/2021 22:58,,1,38,"

I am trying to apply an auto-encoder for dimensionality reduction. I wonder how it will be applied on a large dataset.

I have tried the code below. I have a total of 8 features in my data and I want to reduce them to 3.

from keras.models import Model
from keras.layers import Input, Dense
from keras import regularizers
from sklearn.preprocessing import MinMaxScaler
import pandas as pd
data = pd.read_csv('C:/user/python/HR.csv')
columns_names=data.columns.tolist()
print("Columns names:", columns_names)
print(data.shape)
data.head()
print(data.dtypes)
# Normalise
scaler = MinMaxScaler()
data_scaled = scaler.fit_transform(data)
# Fixed dimensions
input_dim = data.shape[1]  # 8
encoding_dim = 3
# Number of neurons in each Layer [8, 6, 4, 3, ...] of encoders
input_layer = Input(shape=(input_dim, ))
encoder_layer_1 = Dense(6, activation="tanh", activity_regularizer=regularizers.l1(10e-5))(input_layer)
encoder_layer_2 = Dense(4, activation="tanh")(encoder_layer_1)
encoder_layer_3 = Dense(encoding_dim, activation="tanh")(encoder_layer_2)
# Create the encoder model
encoder = Model(inputs=input_layer, outputs=encoder_layer_3)
# Use the model to predict the factors which sum up the information of interest rates.
encoded_data = pd.DataFrame(encoder.predict(data_scaled))
encoded_data.columns = ['factor_1', 'factor_2', 'factor_3']

I have read in this tutorial that, if you have 8 features and your aim is to get 3 components, in order to set up a relationship with PCA, we need to create four layers of 8 (the original number of features), 6, 4, and 3 (the number of components we are looking for) neurons, respectively. How does that make sense?

Now, let's say that I initially have 500 features and I want to reduce them to 20, what should I do?

According to my understanding, I need to reduce the number of neurons from the first to the last layer. So,

  • in the first layer, I have 500 neurons
  • in the second layer, it will be 250
  • in the third layer, it will be 130
  • in the fourth layer, it will be 60
  • in the fifth layer, it will be 20

Is this correct, and why?

And can I get a matrix, like in PCA, at the end to see the components I obtained?

",47435,,2444,,5/28/2021 9:14,5/28/2021 9:14,How do I select the number of neurons for each layer in an auto-encoder for dimensionality reduction?,,0,0,,,,CC BY-SA 4.0 27990,1,,,5/27/2021 23:04,,4,157,"

I have a genetic algorithm which is working fairly well. It's got all the standard operators, including initial random population, crossover ratio, mutation rate, degree of mutation, etc.

This works fairly well, and I have tuned and optimized the hyperparameters as much as possible, including some adaptive variants. The one thing that ruins the results EVERY TIME is when I implement elitism. It does not seem to matter if I include 1 elite, or a certain percentage of elites. I have tried 1% through 10%, tried a decay variable so that elites would only survive a certain number of generations, and numerous other tactics. Every single time I add elitism, the solution gets stuck in a local optimum so deeply that there is no escape.

Most of the literature recommends having elites, but the elites ruin my GA every single time, without fail.

Ideas?

",14854,,2444,,5/27/2021 23:17,5/28/2021 0:37,Does elitism cause premature convergence in genetic algorithms?,,1,0,,,,CC BY-SA 4.0 27991,2,,27482,5/27/2021 23:53,,1,,"

Practically, when optimizing a VAE, you assume that the prior is $p(z)=\mathcal{N}(0,1)$, i.e. the unit Gaussian distribution. However, at test time you sample $z$ from $p(z|x)$, approximated by the encoder model. Why is that?

Let's go back to the start. We have a model $p_{\theta}(x)$ and the data $\{x_1, ..., x_N\}$. Solving the maximum log-likelihood problem, we have \begin{equation} \begin{split} \theta &= argmax_{\theta}\frac{1}{N}\sum_i log p_{\theta}(x_i) \\ &= argmax_{\theta}\frac{1}{N}\sum_i log\left( \int p_{\theta}(x_i|z)p(z)dz \right) \end{split} \end{equation}

which is intractable to calculate. So what to do now?

Here is where Variational Inference comes in: "Use the expected log-likelihood instead." \begin{equation} \begin{split} \theta &= argmax_{\theta}\sum_i E_{z \sim p_{\theta}(z|x_i)}\left[ logp_{\theta}(x_i,z) \right] \end{split} \end{equation}

We approximate $q(z) \approx p_{\theta}(z|x)$. Thus, we unfold $logp(x)$: \begin{equation} \begin{split} logp(x) &= log \int p_{\theta}(x_i|z)p(z)dz \\ &= log \int p_{\theta}(x_i|z)p(z) \frac{q(z)}{q(z)}dz \\ &= log E_{z \sim q(z)}\left[ \frac{p_{\theta}(x|z)p(z)}{q(z)} \right] \\ &\geq E_{z \sim q(z)}\left[ log \frac{p_{\theta}(x|z)p(z)}{q(z)} \right] \\ &= E_{z \sim q(z)}\left[ logp_{\theta}(x|z) + logp(z) \right] - E_{z \sim q(z)}\left[ logq(z)\right] \\ &= E_{z \sim q(z)}\left[ logp_{\theta}(x|z) + logp(z) \right] + H(q) \\ &= ELBO(p,q) \end{split} \end{equation}

where $H(q)$ is the entropy of q. As you highlighted we can also write $$ ELBO(p,q) = logp(x) - D_{KL}\left( q(z) || p(z|x) \right) $$

This means that (a) maximizing $ELBO(p,q)$ w.r.t. to $q$ then KL-Divergence is minimized and (b) maximizing $ELBO(p,q)$ w.r.t to $p$ then the model is improved as the log-likelihood is improved. So this point of yours is true.

However, how can we actually train this model?

The answer is by using Amortized Variational Inference! Practically, we use 2 Neural Networks, $\phi$ and $\theta$, so that we have 2 models: $q_{\phi}(z|x)$ (encoder) and $p_{\theta}(x|z)$ (decoder). Thus, we replace $q(z)$ with $q_{\phi}(z|x)$ and $ELBO(p,q)$ with $ELBO(\theta,\phi)$ .

To fully answer your question, I should reach your first ELBO formula. I will unfold my first equation about ELBO:

\begin{equation} \begin{split} ELBO(\theta,\phi) &= E_{z \sim q_{\phi}(z)}\left[ logp_{\theta}(x|z)\right] + E_{z \sim q_{\phi}(z)}\left[ logp(z) \right] + H\left(q_{\phi}(z|x)\right) \\ &= E_{z \sim q_{\phi}(z)}\left[ logp_{\theta}(x|z)\right] - D_{KL}\left( q_{\phi}(z|x) || p_{\theta}(z) \right) \end{split} \end{equation}

Therefore, using Amortized Variational Inference, we maximize the above objective, which is the same ELBO as before but now parameterized by the two networks; a minimal sketch of the corresponding loss follows.
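For illustration only, here is a minimal PyTorch sketch of the negative ELBO for a Gaussian encoder with a standard-normal prior; the Gaussian-decoder reconstruction term and all variable names are assumptions:

import torch
import torch.nn.functional as F

def negative_elbo(x, x_recon, mu, logvar):
    # -E_q[log p(x|z)]: reconstruction term (Gaussian decoder, up to a constant)
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # D_KL( N(mu, sigma^2) || N(0, I) ) in closed form
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl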

",36055,,36055,,6/22/2021 23:29,6/22/2021 23:29,,,,3,,,,CC BY-SA 4.0 27992,2,,27990,5/28/2021 0:37,,3,,"

There are many ideas for escaping local optima in GAs. One solution is to select the population for the next iteration with probabilities defined by the individuals' scores (fitness-proportional selection). In that case, you still have a chance to select a low-scoring individual, which helps escape local optima; see the sketch below.

Another effective solution is adjusting the mutation rate to escape local optima; for example, you can increase the rate gradually until you find a suitable value.
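A minimal sketch of fitness-proportional (roulette-wheel) selection in Python; the function name and the shift of fitness values to make them positive are illustrative assumptions:

import numpy as np

def roulette_select(population, fitnesses, n):
    # Low-fitness individuals keep a small chance of being chosen,
    # which helps the GA escape local optima.
    f = np.asarray(fitnesses, dtype=float)
    f = f - f.min() + 1e-9                 # shift so all weights are positive
    probs = f / f.sum()
    idx = np.random.choice(len(population), size=n, p=probs)
    return [population[i] for i in idx]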

",4446,,,,,5/28/2021 0:37,,,,3,,,,CC BY-SA 4.0 27993,1,27995,,5/28/2021 0:40,,1,45,"

I read some papers (for example, this) and blogs that spoke about the advantages of distributional Q learning. However, it no longer seems to come up in literature. Did it have any shortcomings that led to its failure? If yes, can someone can talk about it here?

",31755,,2444,,5/28/2021 9:03,5/28/2021 9:03,Why did Distributional Q Learning go out of popularity?,,1,0,,,,CC BY-SA 4.0 27994,2,,16087,5/28/2021 1:27,,0,,"

We cannot approximate $E^2(x)$ by $\frac{1}{N} \sum_i x_i \cdot \frac{1}{N} \sum_j x_j$, because $\operatorname{Var}(x)=E(x^2)-E^2(x)$.

",47461,,36055,,5/28/2021 13:38,5/28/2021 13:38,,,,1,,,,CC BY-SA 4.0 27995,2,,27993,5/28/2021 1:37,,1,,"

Actually, distributional RL is a well-studied field in deep RL. Generally speaking, distributional RL needs more computing resources (roughly 1.1x) because of the quantile head. We can also find new distributional RL literature in NeurIPS 2020 :).

",47461,,,,,5/28/2021 1:37,,,,1,,,,CC BY-SA 4.0 27996,1,28001,,5/28/2021 2:09,,4,970,"

Feature selection is a process of selecting a subset of features that contribute the most.

Feature extraction allows getting new features that are not actually present in the given set of features.

Representation learning is the process of learning a new representation that contributes the most.

I can see no difference between feature extraction and representation learning.

Is feature extraction the same as representation learning? If not, where do they differ? Do they differ at the application level only?

",18758,,18758,,1/14/2022 23:27,1/14/2022 23:28,Where do the feature extraction and representation learning differ?,,1,0,,,,CC BY-SA 4.0 27997,1,,,5/28/2021 2:49,,1,15,"

I love cats, and over the years have noticed that they have recurrent patterns of vocalizations. For example, upon seeing a bird, a cat may start chittering, but the same cat would never chitter at humans. Then there are complex vocalizations, like meow-wow, which I have observed across multiple cats on different continents. At the same time, we have birds and monkeys which have vocabularies of up to 300 (?) words. It seems like cats are communicating something, but humans may be too tone-deaf to understand that.

It seems to me like the task of understanding what a cat is trying to communicate to humans is suitable for some kind of machine learning process.

My question is: has any of these unsupervised models been applied to cat vocalizations? In other words, if a model can draw or generate text, can it generate cat meows? How close are we to understanding what cats are meowing about and translating it into English?

I remember that some work has been done with trying to decode dolphin vocalizations, but as you can imagine, that requires specialized equipment, while a cat model can be tested in the real world with simpler equipment.

",47462,,2444,,6/4/2021 1:48,6/4/2021 1:48,Can unsupervised models learn something from cat vocalizations?,,0,0,,,,CC BY-SA 4.0 28001,2,,27996,5/28/2021 10:15,,5,,"

Feature extraction (FE) is not the same as representation learning (RL), but they are similar and related.

You describe accurately what feature extraction typically refers to, i.e. the process of extracting (new) features from existing ones or raw data (e.g. images). For example, let's say you have a dataset associated with a car. You have only two features in your dataset: distance and velocity. However, from these two, you can extract a third feature, e.g. the acceleration. So, feature extraction can be performed with a fixed algorithm (e.g. PCA for dimensionality reduction or SIFT) or manually.

Representation learning is the collection of all techniques that extract features automatically from the data (i.e. they learn the features or representations, hence the name representation/feature learning). So, for example, a convolutional neural network trained on ImageNet can (and/or needs to) learn general features in order to solve the corresponding classification task. (Chapter 9 of the book Deep Learning by Goodfellow et al. talks more about this topic.) These features are learned from the data (and that's why CNN's are data-driven), and they can later be exploited for transfer learning (TL), i.e. TL is based on the idea that neural networks learn general representations (which is thus a synonym for features) of data that can be exploited to solve other tasks (sometimes known as downstream tasks, especially in the context of self-supervised learning).

Yoshua Bengio et al. defines representation learning as follows

learning representations of the data that make it easier to extract useful information when building classifiers or other predictors

So, RL is a subset of FE, given that RL also extracts features, but RL emphasizes the extraction of features automatically.

",2444,,18758,,1/14/2022 23:28,1/14/2022 23:28,,,,0,,,,CC BY-SA 4.0 28002,1,,,5/28/2021 11:21,,4,1505,"

Obviously, this is somewhat subjective, but what hyper-parameters typically have the most significant impact on an RL agent's ability to learn? For example, the replay buffer size, learning rate, entropy coefficient, etc.

For example, in "normal" ML, the batch size and learning rate are typically the main hyper-parameters that get optimised first.

Specifically, I am using PPO, but this can probably be applied to a lot of other RL algorithms too.

",45240,,2444,,5/29/2021 0:29,5/29/2021 15:08,What are the best hyper-parameters to tune in reinforcement learning?,,2,1,,,,CC BY-SA 4.0 28003,1,28010,,5/28/2021 12:05,,1,49,"

I have the following results I am trying to make sense of. I have attached the loss curves here for reference.

  1. As you can see, the first issue is that the validation loss is lower than the training loss. I think this is due to using a pre-trained model with a high dropout rate (please correct me if I am wrong here).

  2. As one can see, the mean_auc score is increasing consistently, and so it seems that the network is indeed learning something and the validation loss is also better behaved relatively.

  3. The training loss is what bugs me the most. It is not at all consistent and varies a lot. This is a naive question, but is this graph giving me any sort of information about the learning rate, etc., or am I in a situation where essentially everything is incorrect?

Any response would be really appreciated.

",47475,,2444,,5/29/2021 17:33,5/29/2021 17:33,"Why is the validation loss less than the training loss, and what can be said about the effect of the learning rate?",,1,0,,,,CC BY-SA 4.0 28004,2,,28002,5/28/2021 13:53,,3,,"

Personally, I would choose the following two as the most important:

  • epsilon: When using an epsilon-greedy policy, epsilon determines how often the agent should explore and how often it should exploit. Balancing exploration and exploitation is crucial for the success of the learning agent. Too little exploration might not teach anything to the agent and too much exploration might just waste your time. (A short sketch of epsilon-greedy action selection follows this list.)
  • learning rate: The learning rate determines how fast do you learn from new states of experience. A learning rate that is too high might not be good in cases when the environment has many states with high probabilities of negative rewards, i.e. many penalizations. This might make your agent move back and forth in the same place in order to avoid getting penalized. Also, a learning rate that is too low might make your agent learn very slowly and depending on your epsilon, the agent might enter a phase of exploitation with very little knowledge of an optimal policy.
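For illustration only, a minimal sketch of epsilon-greedy action selection over a vector of Q-values; the function name and the use of a simple Q-value array are assumptions:

import numpy as np

def epsilon_greedy(q_values, epsilon):
    # with probability epsilon explore (random action), otherwise exploit
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))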
",36447,,36447,,5/28/2021 14:42,5/28/2021 14:42,,,,0,,,,CC BY-SA 4.0 28006,1,,,5/28/2021 15:48,,3,1511,"

I have a custom environment for stock trading where an episode can be as long as 2000-3000 steps. I've run several experiments with the TD3 and SAC algorithms, and the average reward per episode flattens after a few episodes. I believe the average reward per episode should improve further, so I wondered whether my training episodes are too long. Is there a recommended upper limit on episode length?

",32517,,2444,,5/28/2021 17:00,5/28/2021 17:00,Optimal episode length in reinforcement learning,,0,2,,,,CC BY-SA 4.0 28007,1,28008,,5/28/2021 17:37,,0,162,"

How do I estimate the number of parameters in a CNN for object detection? I know that there are some well-known architectures that were trained on a lot of data (AlexNet, ResNet, VGG, GoogLeNet). But they were trained, for example, to classify 1000 classes, or used as backbones in algorithms like YOLO to localize 80 classes of objects. Now let's say that I want to classify only 5 classes, or I want to perform object detection and I am interested only in cars and people. I want to detect/classify this small number of objects, so the network must learn only the features of cars and people (instead of learning the features of hundreds of objects).

So my intuition is that I can use a smaller network with fewer parameters; correct me if I am wrong. My second intuition is that the number of layers should not have a big impact: you shouldn't decrease the number of layers only because you have fewer classes, because the network learns more and more sophisticated features in deeper layers, and it wouldn't be able to detect advanced features of cars (or other objects) if it doesn't have enough layers.

Recently, I tried to use CenterNet (https://arxiv.org/abs/1904.07850) to detect digits on 64x64 grayscale images, and I succeeded with a fairly simple 900k-parameter convnet. Then I tried to use a slightly modified GoogLeNet to detect cars using 224x224, 448x448, and 512x512 images. I trained it on 450 images. After a lot of trial and error, I still cannot train a good model. GoogLeNet is quite a small network compared to other well-known architectures, but I heard that it's very good: it was carefully designed to be very powerful despite being small (7M parameters).

So, to be clear, my question is about the relationship between the number of classes and the number of layers and parameters.

",,user40943,,,,5/28/2021 18:05,Number of classes vs number of parameters/layers?,,1,0,,,,CC BY-SA 4.0 28008,2,,28007,5/28/2021 18:05,,0,,"

You are correct about your first assumption, but not about your second assumption.

More layers does not always mean better pattern detection. The analogy that in the deeper layers the network learns more complex features is an oversimplified one. It is true to some extent, although it is not enough to explain very complex architectures like GoogLeNet.

Moreover, using more layers than necessary increases the risk of vanishing gradients. If the network is too deep, i.e. has too many layers, the gradients become vanishingly small during backpropagation, so the earliest layers will not receive any significant updates during training.

",36447,,,,,5/28/2021 18:05,,,,2,,,,CC BY-SA 4.0 28010,2,,28003,5/29/2021 6:44,,3,,"

This is very difficult to tell with the information provided, but the phenomenon is something that I have encountered many times before. Sometimes this is not a bad thing, here are some possible considerations/explanations:

  1. Data from the training set could be identical or leaking in to the validation set.
  2. Using a high dropout rate can cause this as well as other generalization techniques.
  3. The training set may contain outliers, contributing to higher loss and difficulty in learning.

Also, note that there is not much of an issue with the training loss bouncing around, as long as it decreases over time. You may want to try different optimizers as well to see if that changes this behavior.

",36131,,,,,5/29/2021 6:44,,,,0,,,,CC BY-SA 4.0 28011,1,28012,,5/29/2021 7:11,,5,372,"

What does the term "embedding" actually mean?

An embedding is a vector, but is that vector a representation of a word or its meaning? Literature loosely uses the word for both purposes. Which one is actually correct?

Or is there anything like: A word is its meaning itself?

",18758,,2444,,5/29/2021 12:45,5/29/2021 13:01,Is an embedding a representation of a word or its meaning?,,2,0,,,,CC BY-SA 4.0 28012,2,,28011,5/29/2021 9:50,,5,,"

An embedding is a representation of a word that can be used as a proxy for some of its linguistic properties.

The 'human' representation of a word, a sequence of letters and other symbols, is not related at all to its meaning or use in actual text. It only serves as a look-up key into our cognitive language processing facility (however that actually works) which enables us to understand the meaning in the context of its usage. However, a computer system does not have such a facility.

In order to convert the character string into something more usable for language processing, embeddings are created. These are typically vectors describing other words that surround a word in question (as the meaning of a word depends on its context). There are obvious problems with ambiguities (eg is bank the side of a river or a place where you deposit money?), but there are probably ways around that.

So the embedding (a vector) represents the usage of a word, which strongly correlates with its meaning. Because words as character sequences are useless for most sub-symbolic processing, the terms embedding and word are possibly used interchangeably.

",2193,,,,,5/29/2021 9:50,,,,0,,,,CC BY-SA 4.0 28013,1,28015,,5/29/2021 10:52,,0,77,"

I am trying to formulate a problem that aims to prolong the lifetime of the simulation, like the CartPole problem. I am aware that there are two types of return:

  • finite horizon undiscounted return (used for episodic problems)

$G = \sum_{t=0}^T R_t$

  • infinite horizon discounted return (used for non-episodic problems).

$G = \sum_{t=0}^\infty \gamma^t R_t$

However, I'm confused about whether CartPole is an episodic task. Ideally, the simulation lasts forever; this is my final objective (prolonging the lifetime). But it still has some termination states. Should I introduce the termination states and use them with a discounted return like:

$G = \sum_{t=0}^T \gamma^t R_t$

",46753,,,,,5/29/2021 14:52,How to formulate discounted return in cartpole?,,1,0,,,,CC BY-SA 4.0 28014,2,,28011,5/29/2021 13:01,,2,,"

Although we have had multiple similar questions (see here, here and here) and it seems to me that you focused on word embeddings (probably because you were not aware of the application of embeddings to other contexts), in addition to what is stated in the other answer, it's important to note that the concept of an embedding does not just apply to words. For example, there are also code embeddings (see e.g. code2vec) and graph embeddings (see e.g. this), and there are probably other examples. The linked posts contain answers that explain what an embedding and embedding space generally are, so you may want to read them.

",2444,,,,,5/29/2021 13:01,,,,0,,,,CC BY-SA 4.0 28015,2,,28013,5/29/2021 14:52,,0,,"

It's not true that a finite-horizon MDP can't use a discount factor. Any discount factor $\gamma \leq 1$ is fine for a finite horizon.

Whether cartpole is finite or infinite horizon depends on the environment implementation. The default environment in openai gym uses $T=200$ as the horizon. This is customizable, and most papers I see which use cartpole as a test environment use $T=1000$ instead. The task is considered "solved" when the agent can reach $T$ timesteps consistently (e.g., the last 100 episodes have all reached $T$ timesteps).

If you want to treat cartpole as an infinite horizon task, that's fine too. Clearly you need $\gamma < 1$ here, and you can just set $T$ to some arbitrarily large number.
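As a quick illustration (a minimal sketch in plain Python, with nothing environment-specific assumed), the finite-horizon discounted return from the question can be computed from a list of per-step rewards like this:

def discounted_return(rewards, gamma=0.99):
    # G = sum_{t=0}^{T} gamma^t * R_t, accumulated backwards from the terminal step
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0] * 200))   # an episode that survived the default T=200 horizon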

",47080,,,,,5/29/2021 14:52,,,,0,,,,CC BY-SA 4.0 28016,2,,28002,5/29/2021 15:08,,2,,"

You should read this study, https://arxiv.org/abs/2006.05990, which empirically investigates this question, specifically for on-policy, continuous-action-space DRL.

It suggests that the discount factor and learning rate are the two most important hyperparameters to tune, followed by the width of the policy/value networks.

That study also reports that it's very important to normalize the observations, and initialize the policy so that the initial actions are zero mean with a very small variance.

",47080,,,,,5/29/2021 15:08,,,,0,,,,CC BY-SA 4.0 28018,1,,,5/29/2021 20:57,,0,42,"

Suppose I have a loss function that is a polynomial whose variables are the weights of a network I wish to tune. Now, we want to find the minimum of the loss function, so basically the argmin.

In ML, we simply use SGD with some random initialization. But consider this: we take $n$ random combinations of weights and plot a visual graph (not to be confused with the computation graph) in which we find the local minima (basically, any point surrounded by larger point values would be a minimum). We store the weights (the values of the variables in the polynomial) used for each point in the graph in a data structure.

Theoretically, if $n$ is big enough to be descriptive while still being computationally efficient, we can simply take the weights of one of these minima as the initialization of the network and then perform SGD from there to (hopefully) converge to a global minimum.

This method would be considerably faster since the initialization is better, and we don't need a very large $n$, just a decent enough estimate. SGD would finally be used with a low learning rate to give the final push, and we would be done more easily and quickly.

So why don't we do this instead of using random initialization? Is there a theoretical basis on which this can't work? (A sketch of the procedure I have in mind is below.)
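For concreteness, here is a minimal sketch of the procedure described above, on a hypothetical two-weight polynomial loss (the loss function and all numbers are made up for illustration):

import numpy as np

def loss(w):
    # hypothetical polynomial loss over two weights
    return (w[0] ** 2 - 1) ** 2 + 0.5 * (w[1] - 3) ** 2

rng = np.random.default_rng(0)
candidates = rng.uniform(-5, 5, size=(1000, 2))             # n random weight combinations
w = candidates[np.argmin([loss(c) for c in candidates])]    # best sample as the initial point

for _ in range(200):                                        # plain gradient descent from that point
    eps = 1e-5
    grad = np.array([(loss(w + eps * e) - loss(w - eps * e)) / (2 * eps) for e in np.eye(2)])
    w = w - 0.01 * grad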

",36322,,,,,5/29/2021 20:57,Why don't we use this intialization with SGD rather than random?,,0,4,,,,CC BY-SA 4.0 28019,2,,23234,5/30/2021 8:32,,0,,"

What you're describing are "Ensemble Models" -- where multiple models are trained in parallel, and then combined at inference time to squeeze out better performance.

This article gives a decent overview: https://towardsdatascience.com/ensemble-models-5a62d4f4cb0c

A single algorithm may not make the perfect prediction for a given dataset. Machine learning algorithms have their limitations and producing a model with high accuracy is challenging. If we build and combine multiple models, the overall accuracy could get boosted.

And they're also covered in Stanford's 2017 deep learning computer vision course, lecture 7: https://youtu.be/_JB0AO7QxSA?t=3098.

Rather than having just one model, we'll train 10 different models independently from different initial random restarts. At test time, we'll run our data through all of the 10 models and average the predictions.

I recommend you check out this course, since it also delves quite a bit into convolutional networks.
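As a minimal sketch of how an ensemble is typically combined at inference time (assuming scikit-learn-style classifiers with a predict_proba method; the models themselves are hypothetical here):

import numpy as np

def ensemble_predict(models, x):
    # stack the per-class probabilities of the independently trained models
    probs = np.stack([m.predict_proba(x) for m in models])   # shape: (n_models, n_samples, n_classes)
    # average the probabilities and pick the most likely class
    return probs.mean(axis=0).argmax(axis=1)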

",41227,,,,,5/30/2021 8:32,,,,0,,,,CC BY-SA 4.0 28022,2,,2729,5/30/2021 16:08,,0,,"

I find it easy to get a quick intuition of the truth value of logical statements involving negations by converting them wherever possible.

So assume by way of contradiction that $\textit{f}_1 \iff \textit{f}_2$, then the two-way contrapositive $\neg \textit{f}_1 \iff \neg \textit{f}_2$ also holds, hence $A \lor B \iff A \land B$ (De Morgan's law). Since $A \lor B \implies A \land B$ is easily confirmed false (by plugging in A=True and B=False) this is a contradiction. $\Box$

",16752,,,,,5/30/2021 16:08,,,,0,,,,CC BY-SA 4.0 28023,1,28024,,5/30/2021 18:03,,2,289,"

I have read it said that the "stable training" of a deep learning model is important. What is meant by "stable training" of a deep learning model?

",16521,,2444,,5/30/2021 20:37,5/30/2021 20:37,"What is meant by ""stable training"" of a deep learning model?",,1,0,,,,CC BY-SA 4.0 28024,2,,28023,5/30/2021 20:18,,1,,"

I can say "Stable Learning" of a supervised machine learner is as follows:

A stable learning algorithm is one for which the prediction does not change much when the training data is modified slightly.

You can follow this link to learn more about how we can measure stability in the context of computational learning theory.

",4446,,,,,5/30/2021 20:18,,,,0,,,,CC BY-SA 4.0 28025,1,28026,,5/30/2021 20:31,,2,219,"

When reading about NLP, I saw it said that "input embeddings" are a main element of encoder-decoder learning frameworks for sequence modelling. What is an "input embedding" in the context of NLP?

",16521,,,,,5/30/2021 21:03,"What is an ""input embedding"" in the context of NLP?",,1,0,,,,CC BY-SA 4.0 28026,2,,28025,5/30/2021 21:03,,0,,"

An embedding is a vector that semantically represents an object/input, which, in the context of NLP, can be a character, word, or sentence, depending on the task. The main property of embeddings is that they are close/similar to each other (for some notion of similarity, e.g. the cosine similarity) if the corresponding original objects/inputs also have similar meanings or are contextually related. So, for example, the words "boy" and "man" are expected to be mapped to vectors/embeddings that are close to each other. There are different ways to create these embeddings. An example is word2vec. Note that the notion of an embedding also applies to other contexts, e.g. geometric deep learning. You can read more about this topic in chapter 6 of this book.
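For a concrete (if minimal) sketch of what an "input embedding" looks like in code, assuming PyTorch and a toy vocabulary:

import torch

vocab = {"<pad>": 0, "boy": 1, "man": 2, "car": 3}
emb = torch.nn.Embedding(num_embeddings=len(vocab), embedding_dim=8)   # weights are learned during training

token_ids = torch.tensor([[vocab["boy"], vocab["man"]]])   # a toy "sentence" of 2 tokens
vectors = emb(token_ids)                                   # shape (1, 2, 8): the input embeddings fed to the encoder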

",2444,,,,,5/30/2021 21:03,,,,2,,,,CC BY-SA 4.0 28028,1,,,5/31/2021 6:44,,1,186,"

The paper here in section 2.1 Coarse-to-fine prediction:

To increase the field of view presented to the CNN and reduce the redundancy among neighboring voxels, each image is downsampled by a factor of 2. The resulting prediction maps are then resampled back to the original resolution using nearest-neighbor interpolation.

What does it actually mean to downsample by a factor of 2?

If I have an image size of $256 \times 256 \times 170$, and if I downsample it by a factor of 2, then will it result in an image of size $128 \times 128 \times 85$?

Similarly, would upsampling/resampling be the opposite interpolation method, getting back to the original size of $256 \times 256 \times 170$?

",47542,,16521,,5/31/2021 21:46,10/24/2022 1:08,What does 'downsampling' and 'upsampling' mean in coarse-to-fine segmentation?,,1,0,,,,CC BY-SA 4.0 28029,2,,28028,5/31/2021 7:57,,0,,"

What does it actually mean to downsample by a factor of 2?

If I have an image size of 256x256x170 and if I downsample it by a factor of 2, it will result in an image of size 128x128x85?

Yes, that is correct.

Similarly, would upsampling/resampling be the opposite interpolation method, getting back to the original size of 256x256x170?

Yes, correct result again. Resampling is a more general term and describes resizing an image (or other grid-based signal) using an automatic rule which could be enlarging or reducing the size.

What resampling does not mean in digital signal processing is going back to the original source and taking a second sample. That might be a reasonable interpretation of the word when coming across it for the first time, but it is not correct here.

Just saying that a signal is resampled does not give enough information to say exactly what was done numerically to the signal, because it does not specify the rule for what happens with the lost (in the case of downsampling) or missing (in the case of upsampling) information. There are a few different ways to perform the resize, and different kinds of interpolation. It looks like the paper does give more information here, but you may need to read more or inspect the authors' code to be certain - for instance, "nearest-neighbour interpolation" usually means that each new voxel simply takes the value of the closest voxel in the original grid (rather than averaging several neighbouring values, as linear interpolation would), but there are a few different ways to manage that process.
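To make the factor-of-2 example concrete, here is a minimal NumPy sketch (naive striding for downsampling and voxel repetition for nearest-neighbour upsampling; real pipelines may low-pass filter before downsampling):

import numpy as np

img = np.random.rand(256, 256, 170)

down = img[::2, ::2, ::2]        # downsample by a factor of 2 -> shape (128, 128, 85)

# nearest-neighbour upsampling back to (256, 256, 170): every voxel is simply repeated
up = down.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)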

",1847,,1847,,5/31/2021 8:03,5/31/2021 8:03,,,,5,,,,CC BY-SA 4.0 28031,1,,,5/31/2021 10:46,,0,142,"

I have a transformer-encoder-only architecture, which has the following structure:

      Input
        |
        v
     ResNet(-50)
        |
        v
fully-connected (on embedding dimension)
        |
        v
positional-encoding
        |
        v
transformer encoder
        |
        v
Linear layer to alphabet.

I am trying to visualize the self-attention of the encoder layer to check how each input of the attention layer attends to the other inputs. (E.g. https://github.com/jessevig/bertviz)

The difficulty I encounter is how to visualize these attention activations in terms of the original input to the ResNet, rather than its output, in order to make my model visually interpretable.

Do you have any ideas or suggestions?

",42868,,2444,,6/1/2021 9:04,6/1/2021 9:04,Visualizing encoder-attention after ResNet in terms of ResNet input,,0,2,,,,CC BY-SA 4.0 28032,1,,,5/31/2021 12:07,,0,158,"

Positional encoding (PE) is an essential part of the self-attention layers in transformer architectures: without adding it in some way (fixed or learnable) to the input embeddings, the model ultimately has no notion of order, is permutation-equivariant, and a given token attends to far-away and local tokens identically.

The convolution operation with a local filter, say of size $3, 5$ for 1D convolutions or $3 \times 3, 5 \times 5$ for 2D convolutions, has some notion of locality by construction. However, within this neighborhood, all pixels are treated in the same way.

However, it may be important to know that a given pixel is at the center of the filter's receptive field, whereas another is close to its boundary. For small $3 \times 3$ filters this is probably not an issue, but for larger ones an injection of PE could be useful.

Has this question been investigated in the literature? Are there any architectures with PE + convolutions?

",38846,,2444,,11/30/2021 15:11,11/30/2021 15:11,Has positional encoding been used in convolutional layers?,,0,4,,,,CC BY-SA 4.0 28035,1,,,5/31/2021 14:27,,1,21,"

I am new to AI, and I am a bit lost about finding the relevant materials that define the face detection problem formally/mathematically.

Can anyone help me formally define face detection, or at least point me towards papers that define it formally?

",40411,,2444,,6/6/2021 2:17,6/6/2021 2:17,What are the most relevant resources that define the face detection problem formally?,,0,0,,,,CC BY-SA 4.0 28036,1,,,5/31/2021 15:06,,2,151,"

I've been reading several papers and reviews about Graph Neural Networks, and I still feel a bit confused about the difference between the spectral and spatial approaches, and also about whether the spatial approaches have somehow 'overcome' the spectral ones. Here is some of my understanding:

Graph Neural Networks take inspiration from the convolution operation between two signals in a Euclidean domain, as a way to combine the features on the nodes, as happens for Convolutional Neural Networks. To do this, a notion of convolution is required on the graph domain. Given two signals $\mathbf{x}, \mathbf{y} \in \mathbb{R}^N$, then

$$\mathbf{x} \, *_G \, \mathbf{y} := U(U^T\mathbf{x} \, \odot \, U^T\mathbf{y})$$

namely, we multiply the graph Fourier transforms of the two signals element-wise (a convolution in the spectral domain) and then take everything back using the inverse graph Fourier transform. If we choose a filter $\mathbf{g}_\theta = diag(\theta_1, \dots, \theta_N)$ parametrized by some $\theta \in \mathbb{R}^N$, then the convolution becomes

$$\mathbf{x} \, *_G \, \mathbf{g_\theta} = U g_\theta U^T \mathbf{x}$$

The approach above presents several limitations in terms of non-localization of filters (which depend on the entire graph) and scalability issues in the presence of perturbations. So the authors of ChebNet proposed the following approximation:

$$\bf{x} \, *_G \bf{g_\theta} = \sum_{i=0}^K \theta_iT_i(\tilde{L})x$$

where $\tilde{L} = 2L/\lambda_{max}-I_n$, $L$ is the Laplacian and $T_i$ are Chebyshev polynomials.

Now the crucial step is that Kipf et al. (2017) have bridged the gap between spectral and spatial approaches by proposing a first-order approximation of the above equation (assuming $\lambda_{max} = 2$ and $\theta = \theta_0 = -\theta_1$):

$$\mathbf{x} \, *_G \, \mathbf{g_\theta} = \theta(I_n + D^{-1/2}AD^{-1/2})\mathbf{x}$$

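(As a sanity check of my understanding, a minimal NumPy sketch of this first-order rule on a toy 3-node path graph, with a single scalar feature per node and a made-up $\theta$:)

import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])                 # toy adjacency matrix of a 3-node path graph
x = np.array([1.0, 2.0, 3.0])                # one scalar feature per node
theta = 0.5                                  # made-up filter parameter

D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
out = theta * (np.eye(3) + D_inv_sqrt @ A @ D_inv_sqrt) @ x
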
Now, from what I've read so far, it seems that since then several improvements have been made to the spatial approach, which defines convolutions on top of a node's graph neighbourhood.

The question is, does it still make sense to focus on spectral approaches?

",47551,,2444,,12/19/2021 12:00,12/19/2021 12:00,Are spectral approaches to Graph Neural Networks still considered?,,0,0,,,,CC BY-SA 4.0 28037,1,,,5/31/2021 15:35,,0,212,"

My autoencoder gives latent vectors with many zero components, like:

[3.0796502  2.9488854  0.9002177  0.         0.         0.
 0.         0.         0.         1.0181859  0.         0.68507403
 0.         0.6128702  0.         0.         0.         0.
 0.         1.763725   0.         0.         0.         1.0947669
 0.         1.5330162  0.         0.         0.         0.
 0.         1.7434856  0.         0.         1.8942142  2.0379465
 0.         0.         1.2500542  0.         0.         0.
 0.         0.         0.         2.7917862  0.         0.
 2.2105153  0.         0.         0.         1.5798858  0.
 0.         3.7405093  0.8692952  0.01490922 0.         0.
 2.8320081  0.         0.         0.        ]

Certain components are always zero, others only sometimes. Why might this be happening? How can I figure this out?

",47013,,2444,,5/31/2021 16:06,5/31/2021 16:06,Why would an auto-encoder produce latent vectors with many zeros?,,0,2,,,,CC BY-SA 4.0 28038,1,28064,,5/31/2021 16:36,,0,61,"

Assume we have two vectors containing random samples (maybe audio data?). Their distribution can be approximated by a normal distribution, so we can calculate their mean and standard deviation.

  • I am looking for a way to "fit" the second vector's samples, in a way that their mean and standard deviation correspond to the first vector's mean and standard deviation.

  • Also, I am looking for a way to do this by "moving the second vector's samples as little as possible". This is because an easy way to solve this problem would be to replace the second vector's data with new random samples that fit the first vector's parameters. That solution is easy, but not interesting.

Questions

  • Is this kind of problem related to machine learning in general? If yes, how?

  • Is there a way to perform this kind of operation with some kind of neural network? If yes, how could it be modelled?

",47554,,2444,,12/12/2021 21:19,12/12/2021 21:19,Fitting a Gaussian distribution into another distribution,,1,2,,,,CC BY-SA 4.0 28039,1,,,5/31/2021 16:38,,0,39,"

I'm training a complex model for motion prediction using a VAE; however, the KL divergence shows very strange behavior.

A skeleton of the network is the following:

At the end, my network computes the MSE loss of the trajectory and the Kullback-Leibler loss (with a Gaussian prior with mean 0 and std equal to 1), given as:

kld_loss = -0.5*torch.sum(1 + sigma - mu.pow(2) - sigma.exp())
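# assuming "sigma" holds the log-variance, this is the closed-form KL(N(mu, exp(sigma)) || N(0, 1))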

Any idea of the possible causes? Do you need further details?

",32694,,32694,,5/31/2021 19:05,5/31/2021 19:05,Weird KL divergence behaviour,,0,2,,,,CC BY-SA 4.0 28040,2,,8560,5/31/2021 16:50,,3,,"

The batch size can also have a significant impact on your model's performance and the training time. In general, the optimal batch size will be lower than 32 (in April 2018, Yann LeCun even tweeted "Friends don't let friends use mini-batches larger than 32"). A small batch size ensures that each training iteration is very fast, and although a large batch size will give a more precise estimate of the gradients, in practice this does not matter much, since the optimization landscape is quite complex and the direction of the true gradient does not point precisely at the optimum.

",47555,,,,,5/31/2021 16:50,,,,0,,,,CC BY-SA 4.0 28043,1,,,5/31/2021 22:20,,1,70,"

I'm currently working on my dissertation, which is centred around forecasting social conflict events. I'm using data from GDELT (the Global Database of Events, Language, and Tone) to develop my forecasting model. For the sake of conveying the problem and limiting the length of this post, I have simplified the features used in my investigation. These can be summarised as follows:

(please feel free to skip the feature description and jump to "The Question", marked in bold towards the end of this post, if TL;DR)

Temporal Attribute:

  • FractionDate: Date of event [numerical].

Actor Attributes:

  • Actor1Type: The type of actor who performed the action [factor]. (e.g. Government, Rebels, Civilians, etc.)
  • Actor2Type: The type of actor who received the action [factor]. (e.g. Government, Rebels, Civilians, etc.)

Event Action Attributes:

  • EventClass: Verbal cooperation, material cooperation, verbal conflict, and material conflict encoded as 1,2,3,4 respectively [factor].
  • EventImpact: A numeric score from [-10,10] capturing the potential impact that type of event may have on the stability of a country [numerical].

Spatial Attributes:

  • ActionGeoLong: Longitude where the action took place [numerical].
  • ActionGeoLat: Latitude where the action took place [numerical].

The database is updated on a daily schedule and is roughly 50 MB on average for a single day's data. The data is filtered to include only events that took place in a single country, which decreases the file size to about 1-2 MB. These events are then aggregated on a weekly basis.

One notable modeling method to predict spatio-temporal data is by means of ConvLSTM models. These models have been successfully implemented in, for example, predicting precipitation or traffic flow. So the strategy that I have so far is:

  1. Aggregate the spatial data to generate weekly geographical heatmaps, showing the intensity (which, for the sake of simplicity, can be thought of as a weighted product of frequency and EventImpact) of events for each EventClass. That is, you are left with a time series of 4 heatmaps similar to the ones below.

  2. Aggregate the actor data to generate weekly actor "Interaction" matrices [I]. These matrices show the intensity (yet again, this can be thought of as a weighted product of frequency and EventImpact) of interaction between each actor for each EventClass. Actor 1 (performer) is on the rows and Actor 2 (receiver) on the columns; therefore, [I]_{n,m} would mean the intensity of Actor n doing something to Actor m. (Note that these matrices won't be symmetrical: the intensity of actor n doing something to actor m is different from actor m doing something to actor n.) Then you are left with a time series of 4 matrices similar to the ones below:

The two above (geographical heatmaps and interaction matrices) will be the "input" to my model, and it should be able to predict the next week's heatmap and interaction matrix given the history of events. In theory I should be able to construct a ConvLSTM model for the geographical heatmaps or the interaction matrices separately. Therefore, the problem I am faced with is building a sort of ensemble of ConvLSTMs that is able to learn from both input sources simultaneously.

The Question:

Is there a way to construct a ConvLSTM that can learn from two different "types" of input tensors, the first being a sequence of geographical heatmaps (with 4 channels), and the second being a sequence of matrices (also with 4 channels)? If so, how would you implement this in Keras? It is very important that the model considers both sources in order to learn the underlying mechanisms of the system. An example of the model input and output is provided below, together with a rough sketch of the kind of model I have in mind.
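A rough sketch of that two-input model, using the Keras functional API (all shapes, filter sizes, and layer choices here are placeholders, not a worked solution):

from tensorflow import keras
from tensorflow.keras import layers

T, H, W, C = 12, 64, 64, 4                     # weeks of history, heatmap size, 4 EventClass channels (placeholders)
A = 6                                          # number of actor types (placeholder)

heatmaps = keras.Input(shape=(T, H, W, C))     # sequence of geographical heatmaps
matrices = keras.Input(shape=(T, A, A, C))     # sequence of actor interaction matrices

h = layers.ConvLSTM2D(16, 3, padding="same")(heatmaps)   # last hidden state of each branch
m = layers.ConvLSTM2D(16, 3, padding="same")(matrices)

shared = layers.Concatenate()([layers.Flatten()(h), layers.Flatten()(m)])
shared = layers.Dense(256, activation="relu")(shared)

next_heatmap = layers.Reshape((H, W, C))(layers.Dense(H * W * C)(shared))
next_matrix = layers.Reshape((A, A, C))(layers.Dense(A * A * C)(shared))

model = keras.Model([heatmaps, matrices], [next_heatmap, next_matrix])
model.compile(optimizer="adam", loss="mse")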

Thank you for taking the time to read. I would appreciate additional opinion or other applicable modeling methods very much.

",47561,,,,,5/31/2021 22:20,Forecasting of spatio-temporal event data,,0,0,,,,CC BY-SA 4.0 28044,1,,,6/1/2021 3:19,,1,11,"

I have a dataset with financial stock data. Some of the features are shared, for example the daily gold price: while the stock price for each individual stock is different, the gold price is the same for every stock on that day.

When I split 80/10/10 randomly, it's "cheating": while the resulting accuracy is great, the actual real-world live result is bad.

When I split sequentially, i.e. the first 8 years of data in training, the next year in validation, and the last year in testing, the resulting accuracy is bad, and live testing is also bad.

What I want to ask is: should I do a random split between just training and validation on the first 9 years of data, then do testing on the last year of data separately?

Or is the sequential split as good as it's going to get, and I simply can't predict the future?

",46779,,,,,6/1/2021 3:19,Split on dataset with some shared features?,,0,3,,,,CC BY-SA 4.0 28045,1,,,6/1/2021 6:01,,1,45,"

While reading some explanations of why dense word embeddings work better than sparse word embeddings, the following statement has been given in the chapter Vector Semantics and Embeddings, showing a drawback of sparse word embeddings.

Dense vectors may also do a better job of capturing synonymy. For example, in a sparse vector representation, dimensions for synonyms like car and automobile are distinct and unrelated; sparse vectors may thus fail to capture the similarity between a word with car as a neighbor and a word with automobile as a neighbor.

It says that the dimensions of synonyms may be unrelated and distinct. I am facing difficulty in understanding it.

Can anyone provide me a simple example to understand it by taking some simple dimensions which are unrelated and distinct?

You can consider either documents or (context) words as dimensions for the example.

",18758,,2444,,6/4/2021 14:32,6/4/2021 14:32,How do sparse word embeddings fail to capture synonymy?,,0,0,,,,CC BY-SA 4.0 28053,1,,,6/1/2021 14:53,,0,569,"

I have read that, when using VAE-GANs, what happens first is that the VAE's encoder encodes some image into a latent code, which from the GAN's point of view is considered noise, and then the GAN part generates another image from that noise, which from the VAE's point of view is just an encoded image.

Is that encoded representation better suited for the GAN to generate better images or not?

The problem which bugs me is that there are not that many articles about VAE-GANs, especially in the last 2 years.

As a side question, does that mean that VAE-GANs do not have any significant performance benefits over a plain GAN?

",40591,,2444,,10/30/2021 13:43,11/24/2022 16:04,Is there a performace benefits using VAE-GAN instead of just GAN?,,1,1,,,,CC BY-SA 4.0 28054,2,,28053,6/1/2021 18:56,,0,,"

In my experience, it's not a matter of performance benefits; Variational Auto-Encoder GANs are much more useful if you want to have "knobs" to turn to influence the generated output. Since you have a latent layer that represents possibly the mean and the distribution of the data, you can tune to different "positions" in that latent space to influence the output of the GAN.

Without this, the generated output is more difficult to predict and/or you will end up with some outputs that are great and others that are completely meaningless.

",30426,,30426,,6/1/2021 22:57,6/1/2021 22:57,,,,1,,,,CC BY-SA 4.0 28057,1,,,6/2/2021 1:16,,2,33,"

I was recently reading Hinton's GLOM idea How to represent part-whole hierarchies in a neural network, and I am simply unsure about what exactly he means when he says parsing images into "part-whole hierarchies".

Moreover, wouldn't semantic segmentation "parse" the parts from the whole image? So what is different here?

",47593,,2444,,6/6/2021 2:14,6/6/2021 2:14,"What is meant by Hinton when he refers to ""Part-Whole Hierarchies"" in his GLOM framework",,0,0,,,,CC BY-SA 4.0 28059,1,,,6/2/2021 4:57,,5,64,"

I am not a researcher, but I am curious to know what considerations are relevant to take into account during research for the invention of a new neural network model, and what relevant knowledge researchers typically possess in the area.

And an accompanying question: is a background in neuroscience relevant to such an investigation?

",47597,,2444,,7/30/2021 12:19,12/23/2021 10:20,"In the field of Deep Learning research, what considerations do researchers take into account when inventing new neural network models?",,1,0,,,,CC BY-SA 4.0 28060,1,28118,,6/2/2021 8:29,,1,338,"

I haven't been able to find a good discussion specifically comparing the two (only one describing a classification and regression problem). I am training a classifier to learn both age and gender based on genomic data. Every sample has a known age and known gender (20 classes in total).

Currently, I am using a single neural network with a sigmoid activation in the last layer and a binary_crossentropy loss. This works fine. However, I also see people using multi-head neural networks where, for example, a set of shared layers would split into either two additional dense layers or two final classification layers, each with an independent loss (in my case, likely a categorical_ce).

What I am unsure of, though, are the advantages and disadvantages of each (maybe "advantages and disadvantages" are not the right words to use; "the actual differences between the two, and when one might use one over the other" might be more appropriate).

I want to be able to calculate the usual metrics – TP, FP, etc. after training – presumably it would be easier with two heads at the end of the network, as you can work with two independent sets of predictions to calculate these?

",34530,,16521,,6/4/2021 21:13,6/6/2021 13:11,What are pros and cons of using a multi-head neural network versus a single neural network for multi-label classification?,,1,0,,,,CC BY-SA 4.0 28061,2,,23067,6/2/2021 10:58,,1,,"

For LIME, the local model that is trained can be found at lime.lime_base.explain_instance_with_data under the name "easy_model".

",46441,,40434,,6/3/2021 23:25,6/3/2021 23:25,,,,0,,,,CC BY-SA 4.0 28063,2,,28059,6/2/2021 13:03,,4,,"

In my experience researchers typically base their architectures on previously identified successful architectures and principles. That is to say published methods that have been successful in practice on similar tasks to the current one. This can be followed back to very early networks like the Perceptron which took a lot of inspiration from existing successful mathematical techniques.

This is not to say that researchers do not draw from neuroscience or from broader knowledge to influence their decisions. I believe most research is guided by more abstract intuitions derived from broader experience. Some researchers are very strongly influenced by neuroscience and I understand that some try to replicate behaviours seen in the brain. The similar structure and functions of the brain will surely make knowledge of it useful. Some major advances may have been motivated by direct analogy to the brain, although I am not aware if that is the case. But, on balance, having zero knowledge of neuroscience is not an impediment to undertaking successful research into neural architectures. Having zero knowledge of existing successful techniques is a far greater impediment.

Having a good level of mathematical understanding is also very useful. I will take perhaps a slight risk of going beyond my knowledge and suggest that university-level knowledge of mathematics is also more beneficial than knowledge of neuroscience. I think that is true for research so far, maybe it will change in the future. All these comments are based only on my personal experience and not on any formal research. Also, note my bias: I have minimal knowledge of neuroscience myself.

",45018,,18758,,12/23/2021 10:20,12/23/2021 10:20,,,,1,,,,CC BY-SA 4.0 28064,2,,28038,6/2/2021 14:24,,0,,"

This is precisely the optimal transportation problem. If both vectors are defined on the same space, you are trying to minimize the Wasserstein distance (which I think is equivalent to what N. Kiefer suggested). Associated to the Wasserstein distance / optimal transport cost is an optimal transport plan, which tells you how to transport the mass from vector 2 onto vector 1.

Optimal transport has become fairly popular in ML; see the WGAN, for example.
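To make this concrete, a minimal NumPy sketch of the two natural options for the original question, simple moment matching and the 1-D optimal transport map (all numbers are made up):

import numpy as np

rng = np.random.default_rng(0)
v1 = rng.normal(0.0, 1.0, size=10_000)     # target vector
v2 = rng.normal(3.0, 2.0, size=10_000)     # vector to be "fitted"

# Option 1: moment matching -- an affine shift/scale so v2's mean and std match v1's
v2_fit = (v2 - v2.mean()) / v2.std() * v1.std() + v1.mean()

# Option 2: in 1-D the optimal transport map is monotone, so matching sorted samples
# rank-by-rank moves the samples the least (in the squared-distance sense)
order = np.argsort(v2)
v2_ot = np.empty_like(v2)
v2_ot[order] = np.sort(v1)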

",37829,,,,,6/2/2021 14:24,,,,0,,,,CC BY-SA 4.0 28065,1,,,6/2/2021 14:59,,1,39,"

It's often assumed in literature that BERT embeddings are contextual representations of the corresponding word. That is, if the 5th word is "cold", then the 5th BERT embedding is a representation of that word, using context to disambiguate the word (e.g. determine whether it's to do with the illness or temperature).

However, because of the self-attention encoder layers, this embedding can in theory incorporate information from any of the other words in the text. BERT is trained using masked language modelling (MLM), which would encourage each embedding to learn enough to predict the corresponding word. But why wouldn't it contain additional information from other words? In other words, is there any reason to believe the BERT embeddings for different words contain well-separated information?

",47609,,40434,,6/3/2021 10:39,6/3/2021 10:39,Why are BERT embeddings interpreted as representations of the corresponding words?,,0,0,,,,CC BY-SA 4.0 28066,1,28074,,6/2/2021 16:19,,0,226,"

This is a theoretical question. Is it possible to overfit a model on infinite amounts of data?

Let me clarify that there are no duplicates.

Say, we have a generator function that produces data, with the correct classification/regression value, and we can generate infinite amounts of valid data. How long does it take for the model to overfit?

This question arose because I'm training an RNN model for fake news classification, and the MSE loss is almost always 0.000 after seeing only 25% of the training data.

Will it be possible to overfit with one epoch of training on the infinite data generator?

(I'm thinking that what will happen is that the model will either become perfect, or sync into the generator's imperfect randomness and learn nothing.)

",47610,,2444,,6/4/2021 14:46,6/4/2021 14:46,Is it possible to overfit a model on infinite amounts of data?,,1,3,,,,CC BY-SA 4.0 28070,1,28077,,6/3/2021 4:29,,2,179,"

I have been working to design a system that uses multiple machine learning models to make sense of data that is dynamically webscraped. Each AI would handle a specific task, for example:

An AI model would identify text in an image, then attempt to create plain text of what it might be. Once the text is extracted, it would be passed in a stored variable to an AI that can read the text to determine if it is a US city/state.

I tried to look into if others have done this, but didn't find much on it relating to what I was looking for. Does anyone know if there are potential issues with this? Logically, it looks good to me, but I figured I'd ask.

If anyone can put me in the right direction for reading material or further information, I would appreciate it.

",47626,,2444,,6/4/2021 14:22,6/4/2021 14:22,Which AI techniques are there that combine multiple models to make sense of data at different stages?,,1,0,,,,CC BY-SA 4.0 28071,1,28117,,6/3/2021 4:37,,0,138,"

I am trying to understand the problem below, represented as an MDP with four states (PU, PF, RU, and RF) and two actions (A and S).

Let's consider V(RF), the value of the state RF. At time-step $h$, V(RF) = 10. When we go to the previous time-step $h-1$, V(RF) increases to 19.

Why is the value of RF increasing backward, i.e. at time-step $h$, which is the last step, it's 10, but in $h-1$ it's 19?

Also, when I apply the Bellman equation, I am not getting the value of V(RF) at time-step $h-2$, which is 25.08, according to the table.

Below is my solution which I am applying on V(RF):

Lets suppose for RF, I know that
Vh (RF) = max {R(RF,A), R(RF,S)}
        = max {10, 10}
Vh (RF) = 10

    **for h-1**
    Vh-1 (RF) = max R(RF,act) + gamma E (summation state) P(State|RF,act) Vh(State)
              = max {10+0.9(1*0), 10+0.9(0.5(10)+0.5(10))}
              = max (10,19)

    **for h-2**
    Vh-2(RF) = max R(RF,act) + gamma E(summation state) P(State|RF,act) Vh(State h-1)
             = max {19+0.9(1*0), 19+0.9(0.5(10)+0.5(10))}
             = 28.0

So, in the above scenario, the discount factor is 0.9, but I am not sure how we get the third result, V(RF) = 25.08. Where are we using the last part, Vh(State), from the equation?

",47629,,2444,,6/9/2021 8:36,6/9/2021 8:36,"How do we get the value of this state of an MDP, at time-step $h-2$, using dynamic programming?",,1,0,,,,CC BY-SA 4.0 28074,2,,28066,6/3/2021 9:19,,0,,"

The trivial answer is that yes, it is possible. Consider standard Gaussian data and a generator sampling points $1/n$ with $n \rightarrow \infty$. Since you never see points beyond $(0,1]$, you are unlikely to learn the parameters of the distribution.

More generally, and for a supervised problem with data $X,Y$, if your data generator covers a subset of the domain with probability $1-\delta$ (with respect to the distribution of $X,Y$) for a very small $\delta$ and your model $\hat{f}$ fits with error $\epsilon$ here, then your generalization error is $\mathbb{E}_{X Y}[\text{loss}(\hat{f}(X),Y)] \leq \epsilon + \delta \sup_{X Y} |\text{loss}(\hat{f}(X),Y)|$. So if you knew that the loss were bounded and you could make $\delta$ arbitrarily small, you wouldn't overfit, because there wouldn't be any region of the sample space significant enough where you wouldn't be able to generalize to.

In other words, it all boils down to your generator being able to cover a set with high enough probability and your loss being "well behaved".

Note that because you will only have a finite coverage of the domain, e.g. an $\epsilon$-cover, you need some assumption on the growth of the loss for points which are $\epsilon$ apart, e.g. a Lipschitz condition (and also for the learned $\hat{f}$).

",16557,,,,,6/3/2021 9:19,,,,0,,,,CC BY-SA 4.0 28076,1,,,6/3/2021 9:57,,0,48,"

Consider an input to an RNN, $x = \{x_i\}_{1}^{n}$. Assume that the length of each input $x_i$ is $k$.

Now, consider the following diagram from p5 of this pdf

My doubts are:

  1. What should I pass as $h_0$? Is it a zero vector?

  2. Does the RNN update its weight matrices $U, W, V$ after each token of the input $x_i$? Or does it update them after passing all tokens of a particular input $x$?

",18758,,18758,,12/20/2021 7:25,12/20/2021 7:25,Initial Input $h_0$ for RNN and updation of weights,,1,1,,,,CC BY-SA 4.0 28077,2,,28070,6/3/2021 10:57,,2,,"

You have explained a pipeline of AI algorithms for text in images: 1) text detection, 2) OCR, 3) named entity recognition (NER). There are reams of papers on these topics.

Extracting City and Country Name from Text

Papers on Text from Images

Websites on Text from Images

",5763,,5763,,6/3/2021 11:06,6/3/2021 11:06,,,,1,,,,CC BY-SA 4.0 28079,1,,,6/3/2021 14:33,,7,1736,"

I am implementing some "classical" papers in Model Free RL like DQN, Double DQN, and Double DQN with Prioritized Replay.

Through the various models I'm running on CartPole-v1 using the same underlying NN, I am noticing that all of the above 3 exhibit a sudden and severe drop in average reward (with a sudden and significant increase in loss) after achieving peak scores.

After reading online, I can see that this is a recognized problem, but I can't find a suitable explanation. Things I have tried to mitigate it:

  • adapt model architecture
  • tune hyperparams like LR, batch_size, loss function (MSE, Huber)

This problem persists, and I cannot seem to achieve any sustained peak performance.

Useful links I found:

Example:

  • until ~250 episodes in Double DQN with PR (with annealing beta), performance steadily goes up, with both an increase in reward and a decrease in loss
  • after that stage, the performance dips suddenly, with both a decreased average reward and an increased loss, as seen in the output below
Episode: Mean Reward: Mean Loss: Mean Step
  200 : 173.075 : 0.030: 173.075
  400 : 193.690 : 0.011: 193.690
  600 : 168.735 : 0.015: 168.735
  800 : 135.110 : 0.015: 135.110
 1000 : 157.700 : 0.013: 157.700
 1200 :  99.335 : 0.013: 99.335
 1400 :  97.450 : 0.015: 97.450
 1600 : 102.030 : 0.012: 102.030
 1800 : 130.815 : 0.010: 130.815
 1999 :   89.76 : 0.013: 89.76

Questions:

  • what is the theoretical reasoning behind this? Does this fragile nature mean we cannot use the above-mentioned 3 algorithms to solve CartPole-v1?
  • if not, what steps can help mitigate this? Could this be overfitting, and what does this brittle nature indicate?
  • any references to follow up with regarding this "catastrophic drop"?
  • I observe similar behavior in other environments as well; does this mean that the above-mentioned 3 algorithms are insufficient?

Edit: Following @devidduma's answer, I added time-based LR decay to the DDQN+PRB model and kept everything else the same. Here are the numbers; they look better than before in terms of the magnitude of the performance drop.

   10 : 037.27 : 0.5029 : 037.27
   20 : 121.40 : 0.0532 : 121.40
   30 : 139.80 : 0.0181 : 139.80
   40 : 157.40 : 0.0119 : 157.40
   50 : 225.10 : 0.0107 : 225.10 <- decay starts here, factor = 0.001
   60 : 227.90 : 0.0101 : 227.90
   70 : 227.00 : 0.0087 : 227.00
   80 : 154.30 : 0.0064 : 154.30
   90 : 126.90 : 0.0054 : 126.90
   99 : 154.78 : 0.0057 : 154.78

Edit:

  • after further testing, PyTorch's ReduceLROnPlateau seems to work best with the patience=0 parameter; a minimal usage sketch follows below.
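
A minimal usage sketch of that scheduler (the tiny network and the reward numbers here are placeholders, just to show where scheduler.step goes):

import torch

net = torch.nn.Linear(4, 2)                                  # placeholder network standing in for the Q-network
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="max", factor=0.5, patience=0)           # mode="max" because we track reward

for episode in range(100):
    mean_reward = float(episode % 10)                        # stand-in for the measured mean episode reward
    scheduler.step(mean_reward)                              # lr is reduced as soon as the reward stops improving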
",47642,,47642,,6/14/2021 15:46,1/17/2023 9:49,"Deep Q-Learning ""catastrophic drop"" reasons?",,1,0,,,,CC BY-SA 4.0 28080,1,,,6/3/2021 15:46,,0,108,"

I'm trying to build a neural network (NN) for classification using only N-bit integers for both the activations and weights, then I will train it with some heuristic algorithm, based only on the NN evaluation.

Currently, I'm using a non-linear activation function for the hidden units. Because of its probability interpretation, I am forced to use the softmax (or the sigmoid for the 2-class case) for the output layer. However, because of the use of integers, the linear combination of the activations and weights can easily be too large, and this causes a problem for the exponential in the softmax evaluation.

Any solution?

",47643,,2444,,6/5/2021 12:41,6/30/2022 16:04,Which solutions are there to the problem of having too large activations before the softmax (or sigmoid) layer?,,1,1,,,,CC BY-SA 4.0 28082,2,,28080,6/3/2021 17:29,,1,,"

First of all, check out this question. Generally, you don't need to apply softmax and using raw logits leads to better numerical stability.

The numerical issue that you are talking about is well known and is dealt with by the so-called log-sum-exp trick. This is usually already incorporated in standard NN libraries. For example, the Keras CategoricalCrossentropy loss can be configured with from_logits=True to compute it from raw logits.
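For illustration, a minimal NumPy version of that trick applied directly to the softmax (subtracting the maximum is the log-sum-exp idea and leaves the result unchanged):

import numpy as np

def stable_softmax(z):
    z = z - np.max(z)          # exp() now never sees a huge argument
    e = np.exp(z)
    return e / e.sum()

print(stable_softmax(np.array([10_000., 10_001., 9_999.])))   # works even for very large logits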

",20538,,,,,6/3/2021 17:29,,,,0,,,,CC BY-SA 4.0 28083,1,,,6/3/2021 18:19,,0,43,"

My knowledge of GANs is relatively basic at the moment but I seem to remember reading somewhere that GANs that generate images from a text prompt, when they fail to understand some of the text/words render those text/words in the image itself instead of interpreting and rendering what they refer to - this is apparently a known bug or failure.

Can somebody confirm that this is true or false? And if possible provide a link to writing about it that will allow me to verify the details e.g. if its specific to a type of GAN, version etc. Many thanks in advance.

",47649,,,,,6/3/2021 18:19,Text to image GANs and failure,,0,2,,,,CC BY-SA 4.0 28087,2,,28079,6/3/2021 23:00,,7,,"

This is a case of overfitting the Q function leading to compounding errors when selecting actions.

  1. You have been training your policy for too long on the same data distribution.
  2. Overfitting Q functions will then lead to data distribution mismatches more often in action selection and compounding errors will happen earlier than before.

You should probably train for 400 to 600 episodes and then stop training the policy. Consider the following slide on compounding errors:

Whenever a wrong action is selected because of an overfitted Q-value function, the agent cannot generalize well on how to recover from that mistake. Eventually, compounding errors increase quadratically in time.

It will only get worse for your agent once the wrong action is picked.

In temporal-difference learning methods like TD(0), SARSA, or Q-learning, finite-state and finite-action MDPs converge to the optimal action-value function if the following two conditions hold:

  1. The sequence of policies $\pi$ is Greedy in the Limit of Exploration (GLIE)
  2. The learning rates $\alpha_t$ satisfy the Robbins-Monro conditions such that:
  • $ \sum^{\infty}_{t=1} \alpha_t = \infty $
  • $ \sum^{\infty}_{t=1} \alpha^2_t < \infty $

From this you can infer that we should use a decaying learning rate in order to satisfy the Robbins-Monro conditions. There are three types of decaying learning rates:

  • time-based decay
  • step-decay
  • exponential decay

In the limit of exploration, they guarantee convergence to the optimal policy, for any Temporal Difference learning based algorithm. I would suggest you use a time-based decaying learning rate, which is the default choice in Keras when you set the decay parameter.

$ lr = lr_0 / (1 + decay\_factor \cdot iteration) $

One iteration here means one epoch. You probably train your neural network every step taken by the agent, so one iteration means one epoch, one step taken by the agent.

In Keras you can set the decay like: model.compile(loss='mse', optimizer=Adam(lr=self.learning_rate, decay=decay_rate)). I would suggest a value of $0.001$; that should be a good starting point.

",36447,,36447,,1/17/2023 9:49,1/17/2023 9:49,,,,4,,,,CC BY-SA 4.0 28088,2,,28076,6/4/2021 3:12,,1,,"

I am answering this question based on classical backpropagation through time (BPTT) only.

What should I pass as $h_0$? is it a zero vector?

Yes. We know that $h_{t-1}$ is the footprint of the first $t-1$ tokens of the input sequence. At the first time step, we need to pass $h_0$ along with the token $x_1$. Since the footprint is not formed yet, we pass a zero vector. Check Figure 9.4 of your pdf.
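As a minimal illustration (assuming PyTorch), frameworks follow the same convention: if you do not pass $h_0$ explicitly, a zero vector is used.

import torch

rnn = torch.nn.RNN(input_size=8, hidden_size=16, batch_first=True)
x = torch.randn(1, 5, 8)                  # one sequence of 5 tokens, each of dimension 8
h0 = torch.zeros(1, 1, 16)                # (num_layers, batch, hidden_size): the zero initial state
out, hn = rnn(x, h0)                      # equivalent to rnn(x), where h0 defaults to zeros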


Does RNN updates its weight matrices $U,W,V$ after each token of input $x_i$ ?

No, the RNN does not update its weight matrices $U, W, V$ after each token of the input $x_i$. The RNN forward pass generates the output sequence $y = \{y_i\}_{1}^{n}$ for a given input $x$, and the weight matrices $U, V, W$ are shared across all time steps of the forward pass.


Or updates after passing all tokens of a particular input x?

Yes, all the weight matrices get updated only after generating the complete $y$.

But since classical BPTT may take a lot of time and the gradients may gradually vanish, truncated backpropagation through time is generally used, in which the weights are updated after generating only a partial output.

",18758,,18758,,6/4/2021 3:30,6/4/2021 3:30,,,,0,,,,CC BY-SA 4.0 28089,1,28097,,6/4/2021 3:27,,1,143,"

Word embedding refers to the techniques in which a word is represented by a vector. There are also integer encoding and one-hot encoding, which I will collectively call categorical encoding.

I see no fundamental difference between categorical encoding and word embedding. They may differ at the application level.

Is it true that categorical encoding is a type of word embedding? And are the different names solely due to the task to which the technique is applied?

",18758,,2444,,6/6/2021 0:17,6/6/2021 0:17,Is categorical encoding a type of word embedding?,,1,0,,,,CC BY-SA 4.0 28091,2,,26870,6/4/2021 6:30,,0,,"

With existing frameworks such as PyTorch or TensorFlow, you can easily implement this functionality by keeping some of the intermediate computations inside the forward or call method and passing them as an input to a later layer. For example:

x[i] = layer[i - 1](x[i - 1])
...
x[j] = layer[j - 1](x[j - 1] + x[i])          # ResNet-like skip connection
# or:
x[j] = layer[j - 1](concat(x[j - 1], x[i]))   # DenseNet-like skip connection

However, if you are asking whether there is a more principled way, or a built-in constructor for this functionality, as far as I know, it is not implemented in these frameworks.

In case you want a computational graph with an arbitrary DAG structure, you will need to create some bookkeeping inside your NN class in order to know where to keep activations to pass further as skip connections, and where to sum/concatenate them with the output of some other layer.
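For example, a minimal PyTorch sketch of keeping an intermediate activation and re-using it later as a skip connection:

import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """Two linear layers with a ResNet-style additive skip connection."""
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        h = torch.relu(self.fc1(x))          # intermediate activation kept in a local variable...
        return torch.relu(self.fc2(h) + x)   # ...and re-used here as the skip connection

block = SkipBlock(32)
y = block(torch.randn(4, 32))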

",38846,,,,,6/4/2021 6:30,,,,0,,,,CC BY-SA 4.0 28092,1,,,6/4/2021 11:25,,1,21,"

I've been trying to get a simple regression experiment going with a neural network and I would like some help interpreting what is going wrong.

My goal is to see what level of regression accuracy I can achieve with a feed forward neural network. I have N pairs of inputs, x, and outputs, y.

As an example:

X is made up of serial integers starting at 0: e.g. [0 1 2 3 ... N]
Y is made up of pairs of random floats between 0 and 1: e.g. [[.263 .548] [.157 .014] [.988 .478] ... Nth [.356 .245]].

So my neural network's structure has 1 input neuron and 2 output neurons and some hidden layers in between whose properties are part of this experiment.

These are the questions I seek to answer:

  1. Can this neural network with some configuration of hidden layers map these inputs and outputs with perfect accuracy?
  2. If not, what is the best accuracy that can be achieved with a reasonably sized network?
  3. Is there a limit on the value of N such that the accuracy of the mapping deteriorates past an accuracy threshold t for each value a and b of an output pair [a, b], of +- z where z is fairly small? In other words, can the prediction stay within a - t and a + t, and the same for b?

Here is my model and some relevant functions as it stands:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Custom Loss
def absoluteDifference(y_true, y_pred):
  return abs(y_pred - y_true)

model = Sequential()
model.add(Dense(1, activation='relu', input_dim=1, kernel_initializer='he_uniform'))
model.add(Dense(10, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(magnitudes))

model.compile(loss=absoluteDifference, optimizer='adam', metrics=['accuracy'])

The results of my experiments are confusing. Training always seems to bottom out at some accuracy threshold and never progresses. When I compare the outputs of the network to the y value pairs, they are either wildly different from y, or the predictions seem to flatten to one value:

Here are some actual results:

Results 1: Flattening:
Y:     
[[0.2890625  0.43554688]
 [0.1171875  0.02734375]
 [0.44921875 0.11328125]
 [0.04296875 0.25585938]
 [0.4921875  0.42578125]
 [0.09960938 0.04101562]
 [0.265625   0.05273438]
 [0.421875   0.26757812]
 [0.40625    0.0859375 ]
 [0.25976562 0.1328125 ]]

Predictions:
[[0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]
 [0.27030337 0.11708295]]

Results 2: Different:
Y:
[[0.3515625  0.47851562]
 [0.16796875 0.28320312]
 [0.140625   0.453125  ]
 [0.21679688 0.44726562]
 [0.21484375 0.0859375 ]
 [0.47265625 0.15429688]
 [0.37304688 0.30078125]
 [0.06054688 0.04492188]
 [0.49609375 0.41992188]
 [0.4453125  0.40820312]]

Predictions:
[[0.18319808 0.33377975]
 [0.16718353 0.33032405]
 [0.19876595 0.32554907]
 [0.23596771 0.3200497 ]
 [0.2726316  0.31566542]
 [0.30836326 0.31124872]
 [0.34609824 0.30569595]
 [0.38202912 0.30291694]
 [0.4184485  0.2967524 ]
 [0.45500547 0.29231113]]

I would have expected Y and Predictions to match after training. What am I doing wrong here?

",47660,,,,,6/4/2021 11:25,Neural Network Regression Experiment Going Wrong,,0,0,,,,CC BY-SA 4.0 28093,1,28094,,6/4/2021 13:16,,2,128,"

From my readings, I have been taught that the state-action value depends on the policy being followed. That seems logical because the expected return from actual actions will be different depending on which actions follow it.

On page 58 of Sutton & Barto's book, we have

So, how is it possible that Q-learning can learn a state-action value without taking into account the policy followed thereafter (i.e. the policy followed after having taken action $a$ in the state $s$)?

",33566,,2444,,6/4/2021 15:01,6/4/2021 16:43,How is it possible that Q-learning can learn a state-action value without taking into account the policy followed thereafter?,,2,0,,,,CC BY-SA 4.0 28094,2,,28093,6/4/2021 13:53,,2,,"

Q-learning can learn about the greedy policy (the policy that we define as $\pi(s) = \arg\max_a Q(s, a)$) whilst following some arbitrary exploratory policy because Q-learning is an off-policy algorithm.

In Q-learning, we are updating our values of $Q(s, a)$ using a bootstrapped value from one time step in the future. This means that we don't need to worry about any importance sampling type re-weighting of the action chosen by the exploratory policy that we use for action selection because of how $Q(\cdot, \cdot)$ is defined: $$Q(s, a) = \mathbb{E}_\pi \left[ G_t | S_t = s, A_t = a \right] \; ;$$ where $G_t$ is the (discounted) future returns defined in Sutton and Barto. Now, because we have defined $Q(\cdot, \cdot)$ such that we condition on knowing action $A_t$, it really doesn't matter which distribution this action came from, because as mentioned we have conditioned on knowing it. This allows us to make updates in Q-learning for any state-action pair using the target from the optimal policy despite us not using this for action selection in the environment.

If we were to do some kind of $n$-step Q-learning with an update target looking something like $$R(s_t, a_t) + R(s_{t+1}, a_{t+1}) + ... + R(s_{t+n}, a_{t+n}) + \max_{a'} Q(s_{t+n+1}, a')$$ then we would need to use importance sampling to re-weight the trajectory to account for the actions $a_{t+i}$ for $i \geq 1$, since these are actions that the optimal policy may not have taken (they were sampled from the exploratory policy), and the Q-function assumes that the trajectory (i.e. the future actions) is generated under the policy associated with the Q-function.

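A minimal tabular sketch may make the off-policy point clearer: the action actually taken in the next state by the exploratory policy never appears in the update target (all sizes and hyperparameters below are arbitrary).

import numpy as np

n_states, n_actions, alpha, gamma = 10, 4, 0.1, 0.99
Q = np.zeros((n_states, n_actions))

def q_update(s, a, r, s_next, done):
    # bootstrap with the greedy action in s_next, regardless of what the behaviour policy does there
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])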
",36821,,36821,,6/4/2021 15:46,6/4/2021 15:46,,,,5,,,,CC BY-SA 4.0 28095,1,,,6/4/2021 14:01,,3,288,"

Are there references or links to examples of loss functions ("distance metrics") that could be used to minimize the distance between two sets for a neural network? More precisely, this distance metric must depend on the whole set in its calculation, and not only on a single point, as the Euclidean distance does.

It is known that the Hausdorff distance is used to measure the distance between two sets, and it is widely used for image comparison, but it additionally depends on a single point in its calculation. In my case, I can't depend on a single point for the distance metric; I must consider the whole set when comparing it with the other set. Are there any recommendations?

",46494,,,,,6/4/2021 14:01,Loss function to minimize the distance between sets,,0,2,,,,CC BY-SA 4.0 28096,2,,27932,6/4/2021 14:02,,2,,"

Think of the network as a feature extractor followed by a policy head and a value function head. The feature extractor compresses the inputs into a lower dimensional feature vector that we hypothesize will be useful to both the policy and the value function. Then you train the policy and value function with those learned features as inputs (which is in theory much easier than training both with higher dimensional inputs). This way you save a bunch of parameters and hopefully accelerate training.

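A minimal PyTorch sketch of this shared-trunk layout (the sizes and activations are arbitrary placeholders):

import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.Tanh())  # shared feature extractor
        self.policy_head = nn.Linear(hidden, n_actions)                    # logits over actions
        self.value_head = nn.Linear(hidden, 1)                             # state-value estimate

    def forward(self, obs):
        features = self.trunk(obs)             # computed once, used by both heads
        return self.policy_head(features), self.value_head(features)

net = ActorCritic(obs_dim=4, n_actions=2)
logits, value = net(torch.randn(1, 4))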
",37829,,,,,6/4/2021 14:02,,,,0,,,,CC BY-SA 4.0 28097,2,,28089,6/4/2021 14:10,,2,,"

One-hot encoding is different than the concept of a word embedding, although both approaches use vectors to represent the objects (e.g. words).

A one-hot vector contains one element that is 1 and all other elements are 0. So, for example, the vector $[0, 0, 1, 0]$ is a one-hot vector, while the vector $[0, 2, 0.2, 0]$ is not. (Given that the sum of all elements in the one-hot vector is equal to 1 and all elements are in the range $[0, 1]$, a one-hot vector is a probability vector, although this may be irrelevant.) To represent your objects as one-hot vectors, you first need to know how many objects you have (in the context of NLP, this is often the vocabulary size). Let's say you have $N$ distinct elements (e.g. words), then, in order for two objects not to be mapped to the same one-hot vector, you need to have one-hot vectors of $N$ dimensions. So, if $N$ is very large, this can be a disadvantage of one-hot encoding, although, in principle, you only need to store the index of the $1$ for each one-hot vector.

As opposed to one-hot vectors, which do not encode the meaning of the objects they represent, word embeddings semantically represent the objects, i.e. similar objects (for some notion of similarity) are typically mapped to similar word embeddings. Here's a picture that illustrates this concept (taken from the book that I cite below).

Moreover, word embeddings can easily be smaller than one-hot vectors and do not necessarily contain only 0s and 1s. More importantly, word embeddings are derived from data (often learned).

A similar argument can be said for integer encoding, i.e. they are not learned from data and they do not carry any meaning. Chapter 6 of this book discusses word embeddings more in detail.

To conclude, the common definition of a word embedding usually implies that word embeddings semantically represent objects and are derived/learned from data, so, generally, one-hot encoding is not a form of word embedding, but it's a way to embed objects into a vector space.

",2444,,2444,,6/4/2021 14:15,6/4/2021 14:15,,,,0,,,,CC BY-SA 4.0 28098,2,,28093,6/4/2021 16:43,,0,,"

The action-value function DOES take into account the policy being followed - that's precisely what the notation $\mathbb{E}_\pi$ is for. Specifically, $\mathbb{E}_\pi$ is a shorthand for \begin{align*} \mathbb{E}_{a_i \sim~ \pi(a_i \,|\, s_i), \, (r_{i+1}, s_{i+1}) \sim p(r_{i+1}, s_{i+1} \, |\, s_{i}, \, a_{i}), \, \forall i} \end{align*} where $p$ represents the environment's joint distribution of reward/transitions. This means that the expectation is with respect to the actions, states, and rewards you see under the policy $\pi$. If you want to make this more concrete, we can write out the expectation like this \begin{align*} & \mathbb{E}_\pi\left[\sum_{k=0}^{\infty}\gamma^k r_{k+t+1} | s_t, a_t \right] \\ :=& \int_{(r_{t+1}, s_{t+1})}p(r_{t+1}, s_{t+1})\bigg[r_{t+1} + \int_{a_{t+1}}\pi(a_{t+1})\int_{(r_{t+2}, s_{t+2})}p(r_{t+2}, s_{t+2})\bigg[r_{t+2} + \ldots \end{align*} (Really, you should be integrating over dummy variables, and I've omitted the conditional expectations, e.g. $p(r_{t+1}, s_{t+1})$ should be $p(r_{t+1}, s_{t+1} | s_t, a_t)$). Of course you can just replace the integrals with sums if you want discrete actions and/or states. So you can see just writing $\mathbb{E}_\pi$ sweeps a lot of the notation and true meaning under the rug, and I assume that is the source of the confusion.

",47080,,,,,6/4/2021 16:43,,,,4,,,,CC BY-SA 4.0 28099,1,28103,,6/4/2021 18:43,,0,46,"

My understanding is that normal recurrent neural networks (RNNs) are not good at keeping past information from different time scales. Furthermore, my understanding is that Gated RNNs, such as Long Short-Term Memory, model the keeping and forgetting mechanisms explicitly with sigmoid activations, namely gates. What is it about sigmoid activations in particular that allows for the keeping and forgetting of past information from different time scales?

",16521,,16521,,6/4/2021 19:00,6/4/2021 21:18,What is it about sigmoid activations in particular that allows for the keeping and forgetting of past information from different time scales?,,1,0,,,,CC BY-SA 4.0 28100,1,28233,,6/4/2021 20:19,,0,224,"

I am relatively new to Python but I taught myself enough to code a two-player board game that is similar to chess. It has a simple Tkinter UI. Now I am dipping into machine learning, and I want to write another program to play itself in this game repeatedly and "naturally" learn strategies for playing the game.

Can anyone give advice on what I might be able to use for this? Is Tensorflow a good option? Is there a Python library well suited for this that I could adapt and train? I am partially through the buildingai.elementsofai.com course, but I am still very new at ML / AI.

",47671,,,,,6/13/2021 18:36,Best way to use/learn ML for board-game reinforcement learning,,2,0,,,,CC BY-SA 4.0 28102,2,,28100,6/4/2021 20:53,,0,,"

There are many approaches - the first one to try would be a rule-based one with some amount of randomness. The ML/AI approach is some variation of reinforcement learning: define your game as an environment (see, for instance, openai-gym) and train an agent in it. "Some variation" might be Deep Q-Learning or A3C. A rough sketch of the environment idea is shown below.
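
Just as a hedged sketch (the board size, action count and reward logic are placeholders you would replace with your game's actual rules):

import gym
import numpy as np
from gym import spaces

class BoardGameEnv(gym.Env):
    def __init__(self, board_size=8, n_actions=64):
        super().__init__()
        self.observation_space = spaces.Box(0, 1, shape=(board_size, board_size), dtype=np.float32)
        self.action_space = spaces.Discrete(n_actions)

    def reset(self):
        self.board = np.zeros(self.observation_space.shape, dtype=np.float32)
        return self.board

    def step(self, action):
        # apply the chosen move here, then compute the reward and check for game over
        reward, done = 0.0, False
        return self.board, reward, done, {}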

",41728,,,,,6/4/2021 20:53,,,,0,,,,CC BY-SA 4.0 28103,2,,28099,6/4/2021 21:18,,1,,"

It is not the sigmoid in particular. LSTMs and other memory-based recurrent networks are based on the idea of keeping an internal state that acts as a "canvas" in which the model can decide what to write (and thus keep in memory) and what to erase (and thus what to forget).

Observe the top horizontal line in the image below. The line represents the "internal state" $C$. Such state is not used as output for the outside world; it is only passed from the cell to itself at different time steps. This is where the old information is "stored".

The sigmoid is a convenient means to write information in $C$, as the output of a sigmoid ranges between $[0,1]$. For example: think of computing point-wise multiplication between $C$ and some other generic vector $v$. If all values of $v$ are in range $[0,1]$, we are essentially masking $C$, i.e. selectively erasing information from it.

LSTMs try to learn the values of such a vector $v$. To make sure that it is in the desired range, sigmoid activations are used.
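
As a tiny numerical illustration (the numbers are arbitrary):

import numpy as np

C = np.array([2.0, -1.0, 0.5])                         # internal state
gate = 1 / (1 + np.exp(-np.array([5.0, -5.0, 0.0])))   # sigmoid gate, roughly [1, 0, 0.5]
print(gate * C)                                        # roughly [2.0, -0.007, 0.25]: keep, forget, half-keep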

For more information, I recommend consulting Goodfellow's book Deep Learning, that has a section dedicated to recurrent nets and sequence modeling in general.

",47673,,,,,6/4/2021 21:18,,,,2,,,,CC BY-SA 4.0 28104,1,,,6/4/2021 22:48,,-1,52,"

I am a little confused about how, on the one hand, you can find online papers that describe complex machine learning formulas in a mathematical/probabilistic way and, on the other hand, easy tutorials that teach you how to use frameworks to create neural networks without mentioning the maths behind them.

What is not clear to me is: what is the connection between these two worlds? What are the "parameters" that make you understand, e.g., how many layers to code, what kind of perceptrons to use, etc.?

To give an example:

Let's take this formula, which Wikipedia Italy describes as "the standard learning algorithm":

And suppose that the size of w and x is 4, and that g(x) and f(x) are, for example, linear functions. What next? Where do I start coding a neural network that solves this problem?

It would seem more logical to me to code this "directly" without defining perceptrons, convolution, layers etc.

",47554,,47554,,6/5/2021 12:52,11/9/2021 10:02,how to go from mathematical problem to neural network (and back)?,,1,1,,,,CC BY-SA 4.0 28105,1,,,6/4/2021 23:49,,1,96,"

An example is the halting problem, which shows that a computer cannot in general decide, even by exhaustion, whether a computation will ever finish, but which humans avoid trivially by becoming exhausted.

Humans typically give up what seems like a lost cause after a certain point, whereas a computer will just keep chugging along.

  • Do flaws have utility?

Can inability be a strength and will AGI require such limitations to achieve human-level intelligence? Humans are simply not capable of infinite loops, except arguably in cases of mental illness. Are there other problems similar to the halting problem where weakness is a benefit?

",1671,,2444,,10/16/2021 22:58,10/16/2021 22:58,Might AGI need to be flawed?,,0,9,,,,CC BY-SA 4.0 28106,1,28109,,6/5/2021 4:03,,1,2645,"

I'm aware that the ground-truth of the example at the top left-hand corner of the image below is "zero"

However, I am confused about the meaning of the terms ground truth and ground-truth labels. What is the difference between them?

",45689,,2444,,12/7/2021 16:41,12/7/2021 16:41,"What is the difference between ""ground truth"" and ""ground-truth labels""?",,2,0,,,,CC BY-SA 4.0 28107,2,,1987,6/5/2021 7:18,,0,,"

Maybe you need to reset all settings, and select x squared and y squared, with only 1 hidden layer with 5 neurons.

",47681,,,,,6/5/2021 7:18,,,,0,,,,CC BY-SA 4.0 28108,2,,28106,6/5/2021 12:47,,2,,"

Ground Truth

'Ground truth' is that data or information that you have that is 'true' or assumed to be true. That means that you have high or perfect knowledge of what it is. For example, in your image of numbers, you know that the first row are zeros, the second row are ones, the third are twos, and so on. You have 10 rows of data, each row is of a different class or category. Each class has 16 samples. Ground truth data is used to train machine learning or deep learning models. The example you provided is from the Modified National Institute of Standards and Technology (MNIST) database which is commonly used for building image classifiers for handwritten digits.

Ground Truth Labels

The 'ground-truth labels' are the names you choose to give them. You may choose to label the classes as '0', '1', '2', etc., or as 'zero', 'one', 'two', etc. Maybe you think in Greek. If so label them as 'μηδέν', 'ένα', 'δύο', etc.

Reference MNIST database

",5763,,5763,,6/5/2021 13:00,6/5/2021 13:00,,,,0,,,,CC BY-SA 4.0 28109,2,,28106,6/5/2021 13:00,,1,,"

These two terms could easily refer to the same thing, depending on the context. For example, a lazy person could easily say something like this

We compute the loss/error between the prediction (of the model) and the ground truth.

Here, the ground-truth refers to the "officially correct" label (categorical or numerical) for a given input with which you compute the prediction. So, in this case, ground-truth would be a synonym for a ground-truth label.

However, in general, ground-truth refers to anything, not just labels, that is correct or true (hence the name), so it could be used more generally. For instance, you could say something like this

We assume that the ground-truth underlying probability distribution from which the data is sampled is a Gaussian.

However, in this case, you could also leave out the ground-truth part, as it's more or less implied by the fact that you're assuming something.

So, the difference between the two is that "ground-truth" can be used more generally to refer to anything that is "true".

",2444,,,,,6/5/2021 13:00,,,,1,,,,CC BY-SA 4.0 28111,2,,28104,6/5/2021 14:45,,0,,"

Basic feed forward neural nets (MLPs) are essentially just computing sequences of matrix multiplications (with nonlinear activations in between), so this is in fact easy to code "directly" like you mentioned. The more difficult part is computing the gradients with respect to the parameter matrices (usually with backpropagation). However there really isn't anything that fancy behind the basic neural network model, it really is just sequences of simple blocks.
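
For instance, the forward pass of a tiny two-layer MLP is just this (a sketch with random weights, ignoring training and the gradient computation):

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                            # input vector
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)     # hidden layer parameters
W2, b2 = rng.normal(size=(1, 8)), np.zeros(1)     # output layer parameters

h = np.maximum(0, W1 @ x + b1)                    # matrix multiply + ReLU nonlinearity
y = W2 @ h + b2                                   # final linear layer
print(y)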

You certainly want at least one hidden layer usually, because without it you'll just have a generalized linear model. Neural networks are useful for creating arbitrarily nonlinear models, which can be achieved by adding more layers or more neurons per layer.

If you want some clarification about how to code a neural net, I recommend taking the classic Andrew Ng ML course on Coursera.

With regard to the amount of layers, number of neurons, etc, these are hyperparameters -- there is no known way to determine the correct values for them without experimentation.

",37829,,,,,6/5/2021 14:45,,,,0,,,,CC BY-SA 4.0 28112,1,,,6/5/2021 16:23,,1,17,"

I have a dataset with about 85 columns. Out of the 85 columns, 70+ are categorical. My goal is to identify the outliers in this dataset through clustering methods as I do not have a target column.

What is the best way to approach this? Is it advisable to convert all the 70+ columns to dummies in pandas and use a clustering algorithm like DBScan?

",47686,,2444,,6/6/2021 2:24,6/6/2021 2:24,What is the best clustering method to detect anomalies for data with mostly categorical data?,,0,0,,,,CC BY-SA 4.0 28113,1,,,6/5/2021 16:32,,0,286,"

[LONG POST!!] I am working on a DNN model that works as an improviser to generate music sequences. The idea of generating music is based on taking a sequence of music nodes (their index representation) and generating sequences that are distinctive with more context and coherent structure as well as capturing syntactic and structural information from the original sequences. Therefore I am dealing with a time series dataset. Similar work was reported in "Attentional Networks for music generation" but in our case, we have a different model architecture and different dataset. It has been known that Transformer (attention) suffers in multivariate time series dataset (Source: Attention for time series forecasting and classification). But given these problems were reported two years ago, the SOTA should be better by now. For that reason, my target is to use the attention mechanism in a way to overcome these challenges.

Recently, I have been using the multi-headed attention layer from TF and testing with head sizes between 128 and 3074, head numbers from 1 to 10, and dropout from 0.1 to 0.5. Based on the results, there was no noticeable improvement in model performance; it seems that the multi-headed attention layer didn't contribute anything during training.

Therefore, after carefully reading the literature, I found that autoregressive attention is the best option for this type of problem. Basically, by making the attention autoregressive, it will compute the attention over the previous (decoder) outputs in such a way as to avoid using future information to make current predictions (to preserve the notion of causality). So the attention has to be designed so that it is autoregressive at each time step, for example, by using previously generated sequences as extra input while generating the next symbol.

In "Autoregressive Attention for Parallel Sequence Modeling" paper they introduced the autoregressive attention mechanism in order to maintain the causality in the decoder. I didn't understand what they mean in Section 3.3 which describe the implementation of autoregressive attention. My problem is in the autoregressive implementation, in the paper they stated that autoregressive mechanics was implemented using the masking technique which changes all of the elements in the upper-right triangle and the diagonal to −∞ 3 to ensure that all the scores that would introduce future information into the attention calculation are equal to 0 after the softmax. I was hoping to see how it was implemented in the code to get a better idea of how it works.

Here is how the attention is implemented in tensorflow:

import tensorflow_addons as tfa  # provides the MultiHeadAttention layer used below

def multiHeadedAttentionLayer(cell_input):
    # cell_state is reset on every call, so the layer effectively performs
    # self-attention: query = cell_input, key/value = cell_input
    cell_state = None

    if cell_state is None:
        cell_state = cell_input

    mha = tfa.layers.MultiHeadAttention(head_size=128, num_heads=5, dropout=0.5)
    cell_output = mha([cell_input, cell_state])
    cell_state = cell_input  # has no effect outside this function

    return cell_output

Then the function is called in the model architecture with the rest of the layers (below is a section of the model architecture only):

x = MaxPooling1D(pool_size=2)(x) # previous layer
x = multiHeadedAttentionLayer(x) # attention layer
x = LSTM(lstmneurons, kernel_regularizer=regularizers.l2(kreg3_rate), dropout=dropout3_rate, recurrent_dropout=dropout4_rate)(x) # following layer
x = BatchNormalization()(x) # following layer

etc....

Based on my intuition the autoregression should take the output results and feed them back to the input at every time step, so my questions are:

Why do we need the masking technique?

How to implement the masking technique in this case?

Is there code for the autoregressive attention that I can have a look at for reference?

Is my current intuition about autoregressive attention correct as shown in the diagram?

",47329,,,,,6/5/2021 16:32,How do autoregressive attention mechanism work in multi-headed attention?,,0,2,,,,CC BY-SA 4.0 28114,2,,27675,6/5/2021 23:12,,1,,"

While it won't work as you've possibly imagined it, you might find that implementing it as an autoencoder will allow you to train on one class and then identify things that are "not that."

With an autoencoder, the network works to build a latent, significantly lower dimensionality, representation of $x$. Rather than generating a $\hat{y}$ prediction, you are really generating $\hat{x}$. As a result, the loss function is measuring how well the output of the network matches the original input.

To apply this to your problem, after training the autoencoder you might either measure the binary cross-entropy loss of $(x, \hat{x})$ or the KL loss. If you graph output loss of items within the class vs items that are not in the class you will usually find that a very clear linear boundary can be defined to distinguish things that are not of the class you're interested in.
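
For instance, a rough sketch of this procedure (the model size, stand-in data and threshold below are placeholders, not tuned values) could look like this in Keras:

import numpy as np
import tensorflow as tf

# Toy autoencoder; train it only on samples from the single known class.
autoencoder = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu", input_shape=(32,)),  # latent bottleneck
    tf.keras.layers.Dense(32),                                       # reconstruction of x
])
autoencoder.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(256, 32).astype("float32")   # stand-in for in-class data
autoencoder.fit(x_train, x_train, epochs=5, verbose=0)

# Score new samples by reconstruction error; a large error suggests "not that" class.
x_new = np.random.rand(10, 32).astype("float32")
errors = np.mean((x_new - autoencoder.predict(x_new)) ** 2, axis=1)
train_errors = np.mean((x_train - autoencoder.predict(x_train)) ** 2, axis=1)
is_outside_class = errors > np.percentile(train_errors, 99)   # crude threshold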

",30426,,,,,6/5/2021 23:12,,,,0,,,,CC BY-SA 4.0 28117,2,,28071,6/6/2021 2:26,,1,,"

Wow, that's a really confusing example, if I were you I would check out some other RL resources. I wouldn't consider h being the last step and h-1 being the previous step. In terms of steps of iterations of the dynamic programming algorithm, h is actually the first step, h-1 the next step and so on. Viewing it in these terms it makes sense that the Value of RF increases from 10 to 19, because after the first step of dynamic programming the state RF incorporates some of the value from RU.

Here is the correct calculation for h-2.

$$10 + 0.9(0.5\times19+0.5\times14.5) = 25.08$$

You are doing a couple of things incorrectly in your calculation:

  • Firstly, you are mistakenly assigning the reward a value of 19. The reward should be 10. Note that the reward and Value are two different quantities. As we iterate through the dynamic programming algorithm, our current approximation of the values will change but reward will always remain the same (it is the number as indicated in the bubbles in the diagram). It just so happens that the Value and reward are equal on the first step (h).
  • You are using the Values for states RU and RF (=10) from step h to calculate the values step h-2 which is incorrect. You should be using the values from step h-1 which are 14.5 and 19 respectively.

Using this understanding, the calculation for the next step h-3 would be (notice that I am now using values from step h-2).

$$10 + 0.9\times(0.5\times25.08+0.5\times16.53) = 28.72$$

",46543,,46543,,6/6/2021 2:32,6/6/2021 2:32,,,,0,,,,CC BY-SA 4.0 28118,2,,28060,6/6/2021 2:33,,2,,"

If I understood things correctly: you have a task in which you need to estimate two values, gender and age. Your question revolves around the difference between networks which share layers for both tasks, and whether the shared layers should be followed by independent linear layers.

Firstly, using shared layers in the networks of two related tasks may be useful to create more general latent representations in the network hidden layers. It can also speed up training: the shared layer will learn useful features more quickly than if there were two separate networks for each task. Some examples which demonstrate the potential benefit of shared networks can be found in the papers for two RL algorithms, A3C and PPG (PPG adds some extra tricks to the shared layers):
http://proceedings.mlr.press/v48/mniha16.html
https://arxiv.org/abs/2009.04416v1

Whether the shared layers should be followed by many separate linear layers or a single one, at least for me, isn't something easy to deduce. Intuitively, having a single linear mapping after the shared layers will help prevent over-fitting because the shared layers induce more general features in those layers. While having many separate layers may be useful if there is some complex non linear mapping between the final output and the latent features from the shared layers.

I think the best way to find out is just to experiment with it and see which gives the best results.

A little bit anecdotal, but, an example from my experience:
Shared layers are commonly used in RL for actor-critic algorithms. A network takes an image as input and outputs an action and a value (the output for the actor and the critic, respectively). Generally, a single linear mapping from the shared layers works just fine, even better than more complex networks.

=========== Edit
In pseudo-code (written here as PyTorch-style layers, assuming import torch.nn as nn), these are the networks I had in mind:

# Network 1: one shared layer, a single head whose m + n outputs are later sliced into the two tasks
shared_layer = nn.Linear(input_dim, latent_dim)
output_layer = nn.Linear(latent_dim, m + n)

# Network 2: the same shared layer, but two separate heads
shared_layer = nn.Linear(input_dim, latent_dim)
output1_layer = nn.Linear(latent_dim, m)
output2_layer = nn.Linear(latent_dim, n)

And, in the question, you mention using BCE with Network 1 and changing to CE for one of the outputs of Network 2.
The networks themselves are equivalent to one another. One implementation might be more practical than the other, but they are the same. Depending on the framework you use, you can use either BCE or CE loss in Network 1 and 2. In Network 1, this would mean taking the output of the last layer, slicing the outputs for age and gender into two variables, and applying a loss function to each of them.

That said, I would expect to see a difference between using CE or BCE for age classification. When training with BCE, it's possible that one estimator turns out to be more 'optimistic' or 'pessimistic' and gives overall high/low probabilities (this will depend on factors such as whether there is class imbalance and whether that is taken into account in the training procedure, etc.). And that will mean that, when you take the maximum probability of the age outputs, there will be some bias.
CE seems to me to be a more appropriate choice for age classification. Using CE will not prevent bias if there is class imbalance in your data, but with it, it is more straightforward to handle these issues.

",37510,,37510,,6/6/2021 13:11,6/6/2021 13:11,,,,5,,,,CC BY-SA 4.0 28119,1,,,6/6/2021 8:24,,0,24,"

I have a series of ordered PDF pages which belong to different documents. Let me give you an example:

Pages: 1 2 3 4 5 6

True Pages: 1 2 | 1 2 3 4

So I have six ordered pages, two of which are from document A and the remaining ones from document B. I do not have document labels, so the grouping should be done in an unsupervised way.

What could be a reasonable approach? Using only a CNN to detect border pages shouldn't be enough to discern documents, so I was thinking of something like RNN->CNN or CNN->RNN, but I don't know how it would practically work, because it is the first time I am not using labels in my TF model.

Do you think it would be a reasonable idea?

",47695,,47695,,6/7/2021 7:54,6/7/2021 7:54,Document clustering from ordered pages list,,0,2,,,,CC BY-SA 4.0 28121,2,,26871,6/6/2021 10:08,,2,,"

I assume that, in your case, what you need to do is collate your 3 datasets together - these would form the training dataset - and then leave the testing dataset aside.

During Meta-Training, the code will sample a batch of tasks in each iteration. This batch of tasks will be split into support and query, the algorithm will train on the support and update model parameters, then it will test on the query and then re-update initial theta.

This Meta-Training is a form of 'cross validation'. Now during meta testing, you must ensure that it samples from the testing dataset.

You can use this code as a start. In data_generator.py, it has a place where it splits the images into training and validation. What you need to do is the same thing, but for your structured data: there, you will specify which datasets to sample for training and which dataset to sample for testing.

",43764,,,,,6/6/2021 10:08,,,,0,,,,CC BY-SA 4.0 28124,1,,,6/6/2021 18:30,,1,30,"

How would you design a model which learns the transitions in a given 1-hour DJ mix? To be specific, the model should be able to learn transitions, specifying the time at which they occur and their type (Crossfade, Infinite Loop, and so on). Data annotation would take way too long, since I have 3000+ DJ mixes that are 1 hour long each, as mentioned. It's almost impossible to annotate the transitions in each mix without spending lots of money. Is there a way to do it unsupervised?

",47704,,40434,,6/7/2021 6:40,6/8/2021 23:04,How to learn transition type in a 1-hour extended DJ Mix?,,0,0,,,,CC BY-SA 4.0 28125,2,,22936,6/7/2021 3:19,,2,,"

It is possible for some classes of problems. For instance, WolframAlpha can generate an induction proof to the problem posed in the question.

According to the author of this proof generator, he built a library of pattern-matched proofs to generate the proofs. More details about his approach can be found in his write-up about the problem.

Another alternative (though not induction-based) for automatically verifying these kinds of identities (in particular, hypergeometric identities) is to use algorithms such as Zeilberger's method along with the HYPER algorithm, both described in the excellent book A=B, currently made available for free by one of its co-authors.

",25202,,,,,6/7/2021 3:19,,,,0,,,,CC BY-SA 4.0 28126,1,,,6/7/2021 3:46,,1,74,"

I'm trying to create a neural network that would be able to look at the current price of a crypto asset and classify it as a "BUY", "SELL" or "HOLD". So far, for my input features, I've decided to go with the past 40 opens, closes, highs, lows, turnover, and volumes (240 features + the current price, so 241 total features).

Would it be redundant/not ideal if I had another feature that was the average of the past 40 opens for example? What about the max/min of the past opens?

My thinking was that, with only the raw price data of the past 40 days, the neural network would be able to "detect" and create the optimal features, like the average or max, in the hidden layers. And therefore, having the avg. or the max/min of some existing features would be unnecessary or would perhaps worsen the performance of the model.

Or is there no clear answer and would this be something I'd only be able to figure out by testing against data?

Thanks for your help!!

",47715,,47715,,6/8/2021 15:58,11/1/2022 0:05,"Selecting features for a neural network: is it redundant to have a feature that is an average (or max, or min) of some other features",,1,1,,,,CC BY-SA 4.0 28127,2,,22936,6/7/2021 6:15,,3,,"

There are programming languages that allow you to verify a proof by induction. For example, I used Coq, but I'm sure there are also others.
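
For example, here is a sketch of what such a machine-checked induction proof looks like, written here in Lean (a proof assistant similar to Coq); it proves that $0 + n = n$ by induction on $n$:

-- Lean 4: a proof by induction on n
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ n ih => rw [Nat.add_succ, ih]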

",47300,,2444,,6/7/2021 21:45,6/7/2021 21:45,,,,0,,,,CC BY-SA 4.0 28128,1,28187,,6/7/2021 10:26,,0,46,"

Currently I'm dealing with an assignment that made us implement the network mentioned in this paper. The network has an architecture similar to this:

As you can see, it uses a unidirectional RNN (in my case an LSTM), which does the many-to-many sequence prediction task while training, giving the LSTM outputs to dense layers with softmax activation. For generating the captions, the network is only given the image at first, and then, using the prediction for the image, it generates a word, which is then fed back to the network along with the generated hidden state; the model does this recursively until a unique stop token is produced. Here's the prediction code:

def predict(self, image, max_len=30):
    # torch and torch.nn.functional (as F) are assumed to be imported at module level
    output = []
    hidden = None
    inputs = self.encoder(image).unsqueeze(1)  # Image features
    for i in range(max_len):  # Recursively feed generated words to the LSTM
        lstm_out, hidden = self.decoder.lstm(inputs, hidden)
        output_vocab = self.decoder.fc(lstm_out)
        output_vocab = F.softmax(output_vocab.squeeze(1), dim=1).detach().cpu().numpy()
        words_indices = output_vocab.argsort(axis=1).squeeze()  # indices sorted by probability
        word = words_indices[-1]  # most probable word
        if word == self.unk_token_index:
            word = words_indices[-2]  # fall back to the second most probable word
        output.append(int(word))
        if word == self.end_token_index:
            break
        inputs = self.decoder.embed(torch.LongTensor([[word]]).to(image.device))
    return output

The problem I'm having right now is that I don't know whether this generation scheme works with BiLSTMs. Right now my training loss is way better for the sequence to sequence prediction task than the UniLSTM, but my generated captions are far worse.

This is a sample caption generated by Bi-LSTM:

This is a sample caption generated by UniLSTM:

My training loss for BiLSTM converges to 10e-3, while for UniLSTM it converges to 0.5. But the problem is that even before overfitting, BiLSTM is only generating gibberish.

",36376,,,,,6/10/2021 17:26,Changing a CNN-LSTM image captioning architecture to use BiLSTMs,,1,0,,,,CC BY-SA 4.0 28129,2,,25228,6/7/2021 14:02,,3,,"

Here is a similar contradiction based answer using basic coordinate geometry.

Is there a proof to explain why $XOR$ cannot be linearly separable?

Let us suppose, if possible, that the $XOR$ function, given by following table, is linearly separable. \begin{array}{|c|c|c|} \hline x& y & x \text{ xor } y\\ \hline 0&0&0\\ \hline 0&1&1\\ \hline 1&0&1\\ \hline 1&1&0\\ \hline \end{array} This ensures the existence of a line $L:ax+by+c=0$ such that the points $(0,0)$ $\&$ $(1,1)$ both lie on the same side of $L$ and the points $(0,1)$ $\&$ $(1,0)$ also lie on same side of $L$ but opposite to that of $(0,0)$ $\&$ $(1,1).$
Also from basic coordinate geometry, we know that if the points $(x_1,y_1)$ $\&$ $(x_2,y_2)$ lie on same side of a line given by $px+qy+r=0$ then, $$(px_1+qy_1+r)\cdot(px_2+qy_2+r)>0$$ and if they are on opposite sides of the line then, $$(px_1+qy_1+r)\cdot(px_2+qy_2+r)<0$$ Since $(0,0)$ and $(1,1)$ lie on same side of $L$, so \begin{equation} c\cdot(a+b+c)>0 \tag{1} \end{equation} And, as $(1,0)$ and $(1,1)$ lie on different sides of $L$, so \begin{equation} (a+c)\cdot(a+b+c)<0. \tag{2} \end{equation} Similarly as $(0,1)$ and $(1,1)$ lie on different sides of $L$, \begin{equation} (b+c)\cdot(a+b+c)<0 \tag{3} \end{equation} On adding equations $(2)$ and $(3)$ we get, $$(a+b+2c)\cdot(a+b+c)<0$$ $$\implies (a+b+c)\cdot(a+b+c)+c\cdot(a+b+c)<0$$ $$\implies (a+b+c)^2+c\cdot(a+b+c)<0$$ Since $(a+b+c)^2\geq0$ for any choice of numbers $a,b,c,$ so $$c\cdot(a+b+c)<-(a+b+c)^2\leq0$$ $$c\cdot(a+b+c)<0$$ which is a contradiction of equation $(1)$, proving the non-existence any such line $L.$

The $XOR$ function is not linearly separable.

",47721,,-1,,10/2/2021 20:58,10/2/2021 20:58,,,,0,,,,CC BY-SA 4.0 28131,1,,,6/7/2021 14:54,,1,749,"

Multi-class classification is simply assigning each data point to one of a finite number of mutually exclusive labels. I am new to the field(s) of AI/ML and I keep hearing people use the term "semantic segmentation."

I want to "translate" this AI/ML jargon into something more familiar to me. The best video I have found so far to explain what it is made me wonder, what is the difference between semantic segmentation and classification?

NOTE: I am specifically not referring to so-called multi-label "classification" which allows a data point to have more than one label at a time. In my experience, that sort of labeling is not classification at all, which is a division into mutually exclusive sets (no overlap).

",47724,,,,,10/30/2022 20:03,"What is the difference (if any) between semantic segmentation and multi-class, mutually exclusive classification?",,2,1,,,,CC BY-SA 4.0 28132,2,,28131,6/7/2021 15:50,,0,,"

Both things are similar, but I think there is a bit of a difference in interpretation.
If what you are solving is a multi-class classification problem in an image, a proper measure of performance of an algorithm would be the accuracy of the prediction for each pixel.
For semantic segmentation, on the other hand, one of the most used measures of performance is the IoU (intersection over union) for each class, which makes sense if your objective is to create a segmentation (a mask) for each class. A small sketch of a per-class IoU computation is shown below.
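
Just a minimal sketch, assuming integer label maps for the prediction and the ground truth:

import numpy as np

def iou_for_class(pred, target, class_id):
    # pred and target are integer label maps of the same shape
    pred_c = (pred == class_id)
    target_c = (target == class_id)
    intersection = np.logical_and(pred_c, target_c).sum()
    union = np.logical_or(pred_c, target_c).sum()
    return intersection / union if union > 0 else float("nan")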

",37510,,,,,6/7/2021 15:50,,,,0,,,,CC BY-SA 4.0 28133,1,,,6/7/2021 16:09,,0,52,"

I am solving a combinatorial optimization problem where I do not have a known global optimum, so the goal is to improve the objective function as much as possible. To do this, I was inspired by the article Reactive Search strategies using Reinforcement Learning, local search algorithms and Variable Neighborhood Search: over several iterations, I apply heuristics to improve the solution, that is to say, at each iteration I must choose a heuristic and apply it to the current solution.

In this article, they have defined the state space as the set of heuristics to apply and the action space is the choice of a heuristic among these heuristics.

Regarding the reward, they gave +1 if the solution is improved and -1 if the solution is not improved.

Honestly, I did not understand how we define the reward (for example, here, -1 and +1), and according to which criteria we choose the reward to use.

",47726,,2444,,6/7/2021 17:39,6/7/2021 18:48,How to choose the reward in reinforcement learning?,,0,2,,6/7/2021 18:47,,CC BY-SA 4.0 28134,1,,,6/7/2021 16:51,,1,39,"

What do we mean by mutual exclusivity of tasks?

This work (E Pan, 21) and this one (M Yin, 20) state that most classification meta-learning algorithms fail for non-mutually exclusive tasks as the model may over-fit to a task, and no model can solve all the tasks at once (respectively).

I had trouble understanding the exact meaning of a "task" in meta classification here. [E Pan, 21] uses "task" synonymously with "new class", while [M Yin, 20] states "...prior work uses a per-task random assignment of image classes to N-way classification labels". However, some priors on few-shot learning [S. Hugo, 17], and [Y Wang, 19] agree with FFLab's, (20) description of "task" which I found more clear:

The number of classes (N) in the support set defines a task as an N-class classification task or N-way task, and the number of labeled examples in each class (k) corresponds to k-shot, making it an N-way, k-shot learning problem.

Here, the support set $D_s$ is part of the meta-training data $D$, which comprises a support set and a test set $D_t$, i.e. $D = \langle D_s, D_t \rangle$ [Weng, 18].

However, even with a better understanding of what a "task" is, I still couldn't get what constitutes mutually exclusive tasks.

",40671,,2444,,6/7/2021 17:04,6/7/2021 17:04,What's mutual exclusivity in meta-learning?,,0,0,,,,CC BY-SA 4.0 28135,2,,28131,6/7/2021 17:26,,0,,"

Image/object classification (or recognition)

(Multi-class) image/object classification (or recognition) typically refers to the task of assigning one label to an image, so we typically assume that there's only one main object in the image. The multi-class only refers to the fact that we have more than 2 possible classes or labels (if we had only 2, this would be binary classification), but note that this does not mean that we have more than one main object in each image. So, in this task, we are not interested in labelling each pixel, but to label the whole image, so, in this sense, this is a sparse classification task. An example of a dataset that is used for image classification is MNIST, where there's only one object (a number) per image. Here's a picture that shows 3 MNIST images, each of them has only one associated label (below), which corresponds to the number in the image.

Semantic segmentation

Semantic segmentation is the task of classifying each pixel in an image (or at least groups of pixels), so that objects of different classes have their pixels labelled differently. Instance segmentation is a similar task, but we additionally want to differentiate between different objects of the same class, so we assume that there could be more than one object in the image and there could even be more objects of the same type/label. Given that we label pixels, this is a dense classification task.

Here's an example of an image that has been segmented, i.e. pixels associated with the same object (e.g. the umbrella) have the same label (color).

Object detection

So, in this way, semantic (or instance) segmentation is more similar to object detection, which is both a classification and regression task, because we want both to classify one or more objects in the image, but we also want to draw a bounding box around them (and this is often solved as a regression problem). The reason why we draw a bounding box around each object is that, as opposed to image/object classification, there can be more than one object in the image, so we need a way to identify the locations of the objects. As opposed to semantic/instance segmentation, this is also a sparse classification task.

Here's an image to which object detection has been applied.

",2444,,,,,6/7/2021 17:26,,,,5,,,,CC BY-SA 4.0 28136,2,,27957,6/7/2021 19:31,,1,,"

Having a sound understanding of language processing will help you understand all its concepts. This summarises the must-reads for NLP.

",47729,,,,,6/7/2021 19:31,,,,0,,,,CC BY-SA 4.0 28138,1,,,6/7/2021 22:33,,2,1710,"

I'm building a model for facial expression recognition, and I want to use transfer learning. From what I understand, there are different steps to do it. The first is the feature extraction and the second is fine-tuning. I want to understand more about these two stages, and the difference between them. Must we use them simultaneously in the same training?

",47316,,2444,,6/8/2021 2:14,1/12/2022 15:26,What is the difference between feature extraction and fine-tuning in transfer learning?,,2,2,,,,CC BY-SA 4.0 28139,1,28145,,6/7/2021 23:34,,0,33,"

I use PyTorch for training a simple neural net for a regression task on a dataset with 12 numerical features + target (target is the 13th column) + 2 categorical features

Before training, I execute

from sklearn.preprocessing import StandardScaler

# numeric_columns = numeric_columns[:-1]
scaler = StandardScaler()
scaler.fit(df_train[numeric_columns])

Also, in my custom torch.util.data.Dataset I scale the data using my scaler object. After each epoch, I evaluate the RMSE("reversed scaled" prediction, non-scaled target), like the following:

y_pred = (y_pred * self.scaler.scale_[13]) + self.scaler.mean_[13] 
loss += self.criterion(y_pred , y_true).item()

The RMSE if I don't scale the target (the first comment would be uncommented and the y_pred row would be commented) is around 0.95 (I tried multiple hyperparameters). The RMSE if I scale the target is 1.7.

The target has mean 3.3 and standard deviation of 2.

What am I doing wrong? I thought scaling the target is a must when dealing with neural networks.

",44456,,44456,,6/7/2021 23:41,6/8/2021 10:29,Why do I have better RMSE when I don't scale the target?,,1,0,,6/9/2021 14:24,,CC BY-SA 4.0 28140,2,,28138,6/7/2021 23:34,,0,,"

Typically, in transfer learning, you have 2-3 stages

  1. Pre-training: pre-train some base model $M_\text{base}$ on some "general" dataset $A$; note that you may not necessarily need to train $M_\text{base}$, but it may already be available e.g. on the web. During this phase, we extract (general) features or learn representations of the data, which can "bootstrap" the learning task with your specific dataset

  2. Training: You replace the last layers of $M_\text{base}$ (i.e. the classifier/regression part) with new layers to solve your task, then you might freeze the initial layers (e.g. the convolutional layers) that are assumed to contain the general extracted features that can also be useful for your task: let's call this model $M_\text{main}$; at this point, you train this partially frozen model $M_\text{main}$ with your dataset $B$.

  3. Fine-tuning: after training, you could unfreeze some of the frozen layers in $M_\text{main}$, especially the ones closest to your new classifier, then train again

In all 3 stages, one could say that we're extracting features (because we're learning weights), but some people, I guess, will refer to the pre-training phase as the feature extraction phase. I think I've seen people call the training stage also the fine-tuning stage (and the previous version of this answer actually was referring to the training phase as the fine-tuning phase), but, in the end, these terms could be used inconsistently anyway, so the important thing is that you understand what's going on and keep context into account.
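
As a rough Keras sketch of the training and fine-tuning stages (the base model, head size, number of classes and learning rates here are placeholders, not recommendations):

import tensorflow as tf

# Stage 2 (training): new head on top of a frozen pre-trained base
base = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg",
                                         input_shape=(224, 224, 3))
base.trainable = False                       # freeze the pre-trained layers
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(7, activation="softmax"),   # e.g. 7 expression classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(train_ds, epochs=10)   # train_ds is your own dataset

# Stage 3 (fine-tuning): unfreeze (part of) the base, train again with a small learning rate
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy")
# model.fit(train_ds, epochs=5)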

You can find more information about this topic here. Note that there may be other more sophisticated or simply different approaches to transfer learning.

",2444,,2444,,1/12/2022 15:26,1/12/2022 15:26,,,,4,,,,CC BY-SA 4.0 28141,1,28144,,6/8/2021 4:17,,2,252,"

I have read most of Sutton and Barto's introductory text on reinforcement learning. I thought I would try to apply some of the RL algorithms in the book to a previous assignment I had done on Sokoban, in which you are in a maze-like grid environment, trying to stack three snowballs into a snowman on a predefined location on the grid.

The basic algorithms (MC control, Q-learning, or Dyna-Q) seemed to all be based on solving whichever specific maze the agent was trained on. For example, the transition probabilities of going from coordinate (1,2) to (1,3) would be different for different mazes (since in one maze, we could have an obstacle at (1,3)). An agent that calculates its rewards based on one maze using these algorithms doesn't seem like it would know what to do given a totally different maze. It would have to retrain: 1) either take real life actions to relearn from scratch how to navigate a maze, or 2) be given the model of the maze, either exact or approximate (which seems infeasible in a real life setting) so that planning without taking actions is possible.

When I started learning RL, I thought that it would be more generalizable. This leads me to the question: Is this problem covered in multi-task RL? How would you categorize the various areas of RL in terms of the general problem that it is looking to solve?

",45562,,,,,6/8/2021 13:29,What are the various problems RL is trying to solve?,,1,0,,,,CC BY-SA 4.0 28143,1,28151,,6/8/2021 6:53,,0,102,"

I have used the stable-baseline3 implementation of the SAC algorithm to train policies in a custom gym environment. So far the results look promising. However, I would like to test the robustness of the results. What are common ways to test robustness? So far, I have considered testing different seeds. Which other tests are recommended?

",45210,,,,,6/8/2021 15:40,How to test the robustness of an agent in a custom reinforcement learning environment?,,1,0,,,,CC BY-SA 4.0 28144,2,,28141,6/8/2021 7:45,,2,,"

The basic algorithms (MC control, Q-learning, or Dyna-Q) seemed to all be based on solving whichever specific maze the agent was trained on.

All RL algorithms are based on creating solutions to a defined state and action space. If you limit your state space representation and training to a single maze, then that is what will be learned. This is no different from other machine learning approaches - they learn the traits of a population by being shown samples from that population (not just one example). They also need to be built for the range of input parameters that you need them to solve.

In the case of RL, and your maze solver, that means the state representation needs to cover all possible mazes, not just a location in a single maze (there are ways to internalise some of the representation to the learning process such as using RNNs, but that is not relevant to the main answer here).

The toy environments in Sutton & Barto are often trivial to solve using non-RL approaches. They are not demonstrations of what RL can do, instead they have been chosen to explain how a particular issue related to learning works. Sutton & Barto does include a chapter on more interesting and advanced uses of RL - that is chapter 16 "Applications and Case Studies" in the second edition.

When I started learning RL, I thought that it would be more generalizable.

It is, but without some kind of pre-training to support generalisation from a low number of examples, you have to:

  • Model the general problem

  • Train the agent on the general problem

RL agents trained from scratch on new problems can seem very inefficient compared to learning by living creatures that RL roughly parallels. However, RL is not a model for general intelligence, but for learning through trial and error. Most examples start from no knowledge, not even basic priors for a maze such as the grid layout or the generalisable knowledge of movement and location.

If you do provide a more general problem definition and training examples, and use a function approximator that can generalise internally (such as a neural network), then an agent can learn to solve problems in a more general sense and may also generate internal representations that (approximately) match up to common factors in the general problem.

",1847,,1847,,6/8/2021 13:29,6/8/2021 13:29,,,,0,,,,CC BY-SA 4.0 28145,2,,28139,6/8/2021 10:29,,0,,"

The score should be better with scaling, or at least not worse. Check the indexing: the 13th column has index 12, because indexing is zero-based. Also, if you want to leave out the 2 last categorical columns, it should be columns[:-2].

",16940,,,,,6/8/2021 10:29,,,,1,,,,CC BY-SA 4.0 28149,1,,,6/8/2021 14:36,,0,190,"

We want to try and distinguish real voices from (deep)fake voices using the graphs generated by a discrete Fourier transform (generated from .wav audio files). We know for each image whether it is a real or a fake voice, so it's a supervised classification problem. An image would look like this:

We think that real voices generate a graph with clear spikes, whereas fake voices have more noise, resulting in less clear spikes. For this reason, we thought of using a CNN to take such an image as input (with the x and y-axes omitted) and classify it as real or fake. Our concern is that it's actually a graph and not an image of an object, so we're not sure if this would be a good approach. We could also use the arrays generated from the Fourier transform, but we're not sure how we could use those as input, as we want to classify whether it's real or fake, and not predict y for each x.

",47747,,47747,,6/9/2021 9:29,11/1/2022 14:02,Can you use a graph as input for a neural network?,,1,0,,,,CC BY-SA 4.0 28150,1,28153,,6/8/2021 15:22,,3,1781,"

In Sutton & Barto's Reinforcement Learning: An Introduction, page 54, the authors define the terminal state as following:

Each episode ends in a special state called the terminal state

But the authors also say:

the episodes can all be considered to end in the same terminal state, with different rewards for the different outcomes. Tasks with episodes of this kind are called episodic tasks.

I believe there is also a fundamental difference between a terminal state, nonterminal states and plain, normal states:

In episodic tasks we sometimes need to distinguish the set of all nonterminal states, denoted S, from the set of all states plus the terminal state, denoted S+.

In the first quote, it appears as if the terminal state is just a term to describe the final state of an episode, but, from the second quote, I understand that the terminal state is the same no matter the outcome of the episode. If we consider the game of chess, what would we consider as a terminal state? Would it be any state that, if reached, ends the game (checkmate), no matter the result (win, loss)? But then how can we describe a state that leads to a draw? If we call a state that leads to a draw a nonterminal state, since we can play an "infinite" number of turns without reaching a win or a loss, and hence without reaching the terminal state, aren't we implicitly supposing that reaching a draw isn't an outcome for which we should assign a reward (e.g. 0)? And if we call a state that leads to a draw a terminal state, then what would be the difference between a normal state and a nonterminal state?

",44965,,2444,,6/10/2021 9:56,6/10/2021 9:56,"What is the difference between terminal state, nonterminal states and normal states?",,1,0,,,,CC BY-SA 4.0 28151,2,,28143,6/8/2021 15:40,,1,,"

This depends on your definition of robust.
Robust to what exactly?

Testing different random seeds will test the robustness of the algorithm on stochasticity of the environment and the algorithm's optimization procedure.
Trying different hyperparameters would test the robustness of the algorithm to hyperparameter changes.

Some RL benchmarks have their own definition of robustness:
The L2RPN benchmark (https://competitions.codalab.org/competitions/25426) defines robustness as a policy that is able to respond properly when unexpected events or adversarial attacks happen. In benchmarks such as Atari or Procgen, which have multiple tasks, a robust algorithm is one that can solve all the tasks.

If you mean robust, as in, the algorithm does indeed learn some inherent pattern of how to solve the task, as opposed to superficially memorizing sequences of actions, you could add noise to the observations (e.g. simple Gaussian noise), try to add some sort of adversarial attack, or try the sticky-action idea used in Atari benchmarks.

",37510,,,,,6/8/2021 15:40,,,,0,,,,CC BY-SA 4.0 28152,2,,28149,6/8/2021 15:50,,1,,"

There is no problem with using the data in the form of an array to classify whether the audio belongs to a real or a fake voice. Just use a 1D convolutional neural network with downsampling or some global pooling operation, such that in the final layer the temporal extent of the signal has length 1. This would be the logit for binary classification; a minimal sketch is shown below.
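
A minimal Keras sketch of that idea (the input length and layer sizes are placeholders, not tuned values):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, kernel_size=9, activation="relu", input_shape=(1024, 1)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, kernel_size=9, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),   # collapses the temporal axis to length 1
    tf.keras.layers.Dense(1),                   # single logit: real vs fake
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))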

However, as far as I understand, you discard the phase after the Fourier transform, but it can be useful for the prediction. Probably, a better approach would be to use a mel spectrogram (https://en.wikipedia.org/wiki/Mel-frequency_cepstrum) for this problem.

",38846,,,,,6/8/2021 15:50,,,,2,,,,CC BY-SA 4.0 28153,2,,28150,6/8/2021 16:43,,3,,"

Terminal state is always the same in the sense that it represents the same thing, that the episode is over. They don’t need to be the exact same state; for instance you could have an $n$ by $n$ grid world where the top right and bottom left states are terminal as when you reach these your agent dies. These are both terminal but not the same state.

For chess it would be any state that when reached the game ends (regardless of win/draw/lose). The difference between these terminal states is what reward you will receive for reaching it.

Finally, normal states are non-terminal states, so there is no difference.

",36821,,,,,6/8/2021 16:43,,,,1,,,,CC BY-SA 4.0 28154,1,,,6/8/2021 16:47,,1,823,"

I am a 3rd-year math major, who is interested in computer science, particularly algorithms and competitive programming (did some olympiads in high school, ACM ICPC in university, etc.), and I have been meaning to get into AI.

I have all the prerequisites to get started, but the problem is that I really, really hate statistics. I took a course on it last year and found it to be very dry.

I've heard people say that AI is mostly statistics and I am very concerned if it's true. I can tolerate some amount of stats, but, if the field literally revolves around it, I will not be able to do it.

So, exactly how much statistics is involved in AI? Are there fields of AI which use it less than others?

",47754,,2444,,12/20/2021 22:14,12/20/2021 22:14,How much statistics is involved in AI?,,3,3,,,,CC BY-SA 4.0 28155,1,,,6/8/2021 17:04,,0,86,"

I am creating a sentiment analysis model using Naive Bayes. When I test the model, I get an average accuracy of 65%; however, the sensitivity of the model is much higher, 90%.

So, I am wondering if there are methods for fixing this; or, since the sensitivity is very high, would it be ok to move forward with the model?

",47757,,2444,,7/10/2021 22:09,12/8/2021 0:06,Is it ok to have an accuracy of 65% and a sensitivity of 90% with Naive Bayes for sentiment analysis?,,1,0,,,,CC BY-SA 4.0 28156,1,28183,,6/8/2021 17:05,,0,445,"

I am using MobileNetV3 from TF keras for doing transfer learning; I removed the last layer, added two dense layers, and trained for 20 epochs.

  1. How many dense layers should I add after the MobileNet and How dense should they be?

  2. How many epochs should I train for?

  3. Validation loss and validation accuracy have a strange pattern, is that normal?

Is there anything I am missing?

",33808,,2193,,6/9/2021 8:45,7/6/2022 13:26,Validation accuracy very low with transfer learning,,1,2,,,,CC BY-SA 4.0 28157,2,,28126,6/8/2021 20:27,,0,,"

Or is there no clear answer and would this be something I'd only be able to figure out by testing against data?

That is the general rule you should always consider when looking at feature engineering (which is what you are proposing), as well as for many architecture choices.

It is very hard to tell in advance what a change to a machine learning system will do for whichever metrics you are interested in. You may have some experience that applies, or find similar experiments online that you can take inspiration from. But you will want to test everything, and should take care to use good practice when evaluating different options - e.g. a cross validation dataset (sometimes called a development dataset).

Would it be redundant/not ideal if I had another feature that was the average of the past 40 opens for example? What about the max/min of the past opens?

One aspect of multi-layer neural networks, is that they can in theory learn useful internal features in the hidden layers from raw data. These internal features are unlikely to be exact copies of mean values or min/max values, or anything else you would construct manually. However, they can be similar enough in end result that manually derived features that you think of will not make much difference.

So you would think that derived features would not be useful in neural networks. In practice, though, they can be, because the convergence process to find the best internal features is not perfect. Smart feature engineering can improve the performance of a neural network on classification or regression supervised learning tasks. Sometimes you can find "golden" engineered features that relate really well to your target variable, and that boost results significantly.
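
For example, with price data like yours, a few candidate derived features over the past 40 opens can be computed in pandas (the column name and toy values below are assumptions about your data):

import pandas as pd

df = pd.DataFrame({"open": [1.0, 1.2, 0.9, 1.1] * 30})   # toy stand-in for your price data

df["open_mean_40"] = df["open"].rolling(40).mean()        # average of the past 40 opens
df["open_max_40"] = df["open"].rolling(40).max()
df["open_min_40"] = df["open"].rolling(40).min()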

A couple of things to bear in mind:

  • A "scattergun" approach of trying a large number of derived features might seem attractive, but there is a risk of overfitting the training data. If you try enough times you may find something that works purely by chance but only for the training data set.

  • Nonlinear combinations that make conceptual sense given the problem domain can be worth looking at. For instance if you want to predict house prices, and your raw data was house width and depth, then floor area width * depth might be a useful feature.

Feature engineering is still something of an artform. Automated systems using the scattergun approach with filtering are competitive, but domain insight can still win.

If you have vast amounts of data and the CPU time cost is not an issue, you may want to forgo feature engineering due to the theoretical redundancy. It seems possible to make a giant neural network using latest features such as skip connections and batch normalisation, feed it raw (but normalised) data, and press "go" to get a state-of-the-art result. From that perspective, feature engineering is for when you don't have "big data" or deep pockets for heavy processing - for many of us that still means feature engineering is a standard approach on every project.

",1847,,1847,,6/8/2021 21:30,6/8/2021 21:30,,,,0,,,,CC BY-SA 4.0 28163,1,,,6/9/2021 5:24,,0,127,"

I have gone through some theoretical introductions of RNN and LSTM, which do not contain any code, and they describe in fair detail what the cells do, how they apply operations like forget, sigmoid, etc.

Then I am trying to implement them with tensorflow, and even after reading the documentation, I am unable to connect the layers' API with my theoretical understanding of the operations. For example, take the following simple code:

import tensorflow as tf # tensorflow 2.5.0
inputs=tf.random.normal(shape=(32, 10, 8))
lstm = tf.keras.layers.LSTM(units=4, return_sequences=True, return_state=True)
outputs=lstm(inputs) # Call the layer, gives a list of three tensors
lstm.trainable_weights # Gives a list of three tensors 

So what exactly is the layer doing here based on the input it receives and the weights that were initialised randomly?

If I am to implement the layer's operation myself, how do I do that?

The Google and Keras documentation contain a lot of example code, but not really explanations of the internal mathematical operations. So any help in this area, or any reference that explains the mathematical operations (not in general, but what's happening in the Tensorflow layer) would be greatly appreciated.

I have the exact same question regarding RNN and GRU layers too.

",47776,,16521,,6/10/2021 9:31,6/10/2021 9:31,"What do RNN, LSTM, and GRU layers do in Tensorflow?",,0,3,,,,CC BY-SA 4.0 28164,2,,28154,6/9/2021 8:42,,2,,"

I work in NLP, and use very little statistics. Actually, almost nothing I do can be classed as 'serious' statistics.

So yes, AI is a wide area, and in my company there is a group that does machine learning, so they probably use a lot more of it than I do. Previously I worked in conversational AI. Again, very little to no statistics at all.

I would contest the view that AI is intrinsically data-driven. That's one aspect of it. However, while I look at actual data (texts) to derive algorithms for their analysis, I don't need to use any statistical concepts for that. And even evaluation of the results is just counting and comparing.

There are statistical algorithms in NLP, but they are not usually very complex or hard to understand even without a lot of stats knowledge.

",2193,,,,,6/9/2021 8:42,,,,1,,,,CC BY-SA 4.0 28166,1,,,6/9/2021 11:16,,0,381,"

While studying word embeddings in natural language processing, I encountered the following statement on page 327 of the textbook Natural Language Processing by Jacob Eisenstein

Distributional semantics are computed from context statistics. Distributed semantics are a related but distinct idea: that meaning can be represented by numerical vectors rather than symbolic structures.

The dissimilarity between them is that distributed semantics represent the meaning of a word by a vector of numbers. Distributional semantics represent the meaning of a word by symbolic structure (inferred from paragraph).

I can say, in distributed semantics, the word cat can be represented by the vector $[23, 43,21,16]$ (for example).

Similarly, please, give me a small example of how the meaning of a word is represented by symbolic structure (which should not be necessarily correct).

What is meant by symbolic structure here?

",18758,,2444,,6/10/2021 9:50,6/11/2021 11:13,What is the exact difference between distributional semantics and distributed semantics?,,2,0,,,,CC BY-SA 4.0 28167,1,,,6/9/2021 11:33,,1,31,"

I know the perceptron is a linear classifier that separates linearly separable binary-class data, such as iris setosa vs. iris versicolor via their sepal's length and width.

I'd just like to know: if I have 2 groups of photos, one of dogs and the other of cats, is it possible to train a perceptron to tell whether a picture shows a dog or a cat?

",45689,,45689,,6/9/2021 12:04,6/9/2021 12:04,Is it possible to train a perceptron to tell if a picture is a dog or cat?,,0,5,,,,CC BY-SA 4.0 28168,2,,28154,6/9/2021 11:59,,4,,"

Many people without a formal/solid background in statistics (e.g. without knowing exactly what the central limit theorem (CLT) states) are doing research on machine learning, which is a very big and fundamental subfield of AI that has a big overlap with statistics, or using machine learning to solve problems.

So, in my view, you don't need to learn everything about statistics to do research on some AI topic, including machine learning, but you need to have an understanding of the basics (at least a full introductory college-level course on statistics and probability theory), and the more you know the better.

More specifically, if you don't know what the CLT or the law of large numbers state, you will not have a full understanding of many things that are going on. At the same time, you will find a lot of research papers (published in ML conferences and journals) that do not even mention hypothesis testing, but it's important to have an idea of what a sample, sample mean, sample variance, likelihood, maximum likelihood estimation (MLE) or Bayes' theorem are. In fact, MLE is widely used in machine learning, but not many people using/doing ML would probably be able to explain precisely what the likelihood function is.

Finally, in my opinion, having a formal/solid (not necessarily extensive) background in statistics should be a prerequisite for doing research in machine learning (you need to really know what the likelihood function is!), which some people call applied/computational statistics or glorified statistics for some reason, but not necessarily for using machine learning to solve some problem. Moreover, there are other areas of AI that do not make use of statistics, but ML is probably the most important area of AI. So, if you hate statistics, you may not like AI, and particularly ML, but maybe you will change your opinion about statistics once you understand what e.g. neural networks are capable of doing or not.

",2444,,2444,,6/9/2021 16:10,6/9/2021 16:10,,,,3,,,,CC BY-SA 4.0 28169,2,,28155,6/9/2021 12:05,,1,,"

I could get perfect sensitivity for positive sentiment if I always predict positive sentiment, but my accuracy could be around 50%, depending on the distribution of positive sentiment in the data. The sensitivity and accuracy scores alone are not enough to tell you if your model is any good; you will need some goal that you are trying to achieve, e.g. 70% accuracy.

",34473,,,,,6/9/2021 12:05,,,,1,,,,CC BY-SA 4.0 28170,2,,28166,6/9/2021 12:45,,1,,"

I can't really make much sense of Eisenstein's distinction between distributional and distributed. And I think in your question you actually mix up the two terms as well, as distributed semantics involve symbolic structures, whereas distributional semantics are numerical vectors according to his definition. EDIT: actually, he seems to mix it up himself there?! Very unclear paragraph there.

I can only imagine that the symbolic structures he refers to here are semantic networks and the like, as in

(is-a feline mammal)
(is-a lion feline)
(has-a feline tail)

Here the meaning of lion, as a feline mammal with a tail, is defined through a symbolic structure, and not in reference to the context of usage. Why this should be distributed, I can only guess: the meaning components are split over a set of statements, which build up a larger structure perhaps?

It could, of course, be the case that this is covered elsewhere in the book — I haven't had the time to look through all of it.

UPDATE: Thinking more about this, perhaps he means that distributional semantics are representations where each word is a straight co-occurrence vector, i.e. a vector as long as the vocabulary of words used to define contexts, while distributed semantics are similar, but use a different vector which is created through processing the contexts (and could thus be smaller)?
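To make that guess concrete, here is a toy sketch (the counts are entirely made up): the raw co-occurrence row would be the "distributional" representation, and a smaller vector derived from it (here via SVD) would be the "distributed" one.

import numpy as np

context_words = ["eat", "fur", "purr", "bank", "money"]
# rows: "cat", "loan"; columns: raw co-occurrence counts with the context words
counts = np.array([[10., 8., 6., 0., 0.],
                   [0., 0., 0., 12., 9.]])
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
dense = U[:, :2] * S[:2]      # processed, lower-dimensional vectors
print(counts[0])              # co-occurrence vector, as long as the context vocabulary
print(dense[0])               # smaller vector derived from the contexts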

",2193,,2193,,6/9/2021 12:50,6/9/2021 12:50,,,,0,,,,CC BY-SA 4.0 28173,1,,,6/9/2021 22:30,,0,66,"

I’m working on an object detection CNN, and having some issues with non max suppression. When I have a small box inside a large box, NMS is not rejecting the smaller, incorrect box, because its IoU is small (large union, small intersection). How is this scenario typically dealt with? When using out-of-the-box pretrained models for object detection I don’t seem to get boxes completely inside other boxes. Example here: green is ground truth, blue is prediction. The center box has a tiny blue box inside that’s not getting rejected by NMS.

",47814,,,,,11/2/2022 3:03,How to reject boxes inside each other with Non Max Suppression,,1,0,,,,CC BY-SA 4.0 28174,2,,28173,6/9/2021 22:35,,0,,"

IOU makes sense for determining accuracy against ground truth, but for non max suppression have you tried intersection over minimum size?
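For instance, here is a minimal sketch of that score; the (x1, y1, x2, y2) box format is an assumption on my part:

def intersection_over_minimum(box_a, box_b):
    """Overlap area divided by the area of the smaller box."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    min_area = min(area_a, area_b)
    return inter / min_area if min_area > 0 else 0.0

# a small box fully inside a large box gives 1.0, so it would be suppressed,
# even though its IoU with the large box is small
print(intersection_over_minimum((0, 0, 100, 100), (40, 40, 50, 50)))  # 1.0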

",45018,,,,,6/9/2021 22:35,,,,1,,,,CC BY-SA 4.0 28176,2,,28154,6/10/2021 2:46,,0,,"

Thanks for the extra details. There are good answers already, but I'll give just a bit more information since your requirements are a little more specific now.

Since you mentioned Research Engineer only, I'm going to assume you are not really interested in a plain engineering role.

I can say for a specific Research Engineer role I am aware of at a world class industrial AI lab, their minimum requirements include "calculus, linear algebra, and statistics at least to a first year degree level". So it sounds like you already have this required level if you were to apply today.

On the other hand, I would be cautious regarding what you found dry about your stats course. If thoughts such as "this portion of the dataset has somewhat less representation in the results and this other portion has somewhat more, I wonder why that is?" sound very dull to you, you may not like it. Most current AI is based on large sets of data. I am referring to deep learning / neural networks here rather than previous methods, but that is where a lot of the hype / major breakthroughs are at the moment. In computer vision, which you mentioned, for example, current methods typically input a large dataset of images to create the AI system, then test it on a large dataset of images. If they are images of road signs, you might find that increasing the proportion of one type of road sign makes the system worse for another type of road sign. Identifying relationships like that is an important part of the research function. The more towards the research side rather than the engineering side you are, the more you will need to be able to analyse things like this yourself.

Despite all that, I found stats to be one of the drier maths subjects, yet I very much like AI (and stats in AI).

",45018,,,,,6/10/2021 2:46,,,,0,,,,CC BY-SA 4.0 28178,1,,,6/10/2021 6:36,,2,88,"

The word continuous in mathematics is a property of either a set or a function that says that the underlying object has no discontinuity in the range mentioned. If the object is a set, then $[-1,1]$ is a continuous one while $\{-1, +1\}$ is not. Similarly, a function is said to be continuous if the actual value and the limiting value at every point in the domain are equal.

Now, coming to CBOW. I read the following statement from p:334 of Natural Language Processing by Jacob Eisenstein

Thus, CBOW is a bag-of-words model, because the order of the context words does not matter; it is continuous, because rather than conditioning on the words themselves, we condition on a continuous vector constructed from the word embeddings.

What is meant by continuous in this case? Does continuous vector stand for a vector of real numbers?

",18758,,2444,,6/10/2021 9:46,6/10/2021 16:58,"What is the meaning of ""continuous"" in a continuous bag-of-words model?",,1,0,,,,CC BY-SA 4.0 28179,1,,,6/10/2021 9:31,,0,55,"

While I was doing an object detection project, I encountered the problem of getting FALSE POSITIVES and FALSE NEGATIVES. After days of research on StackOverflow, I figured out that I need to collect more negative images or background images. I decided to document this process so other people could easily solve this issue, and the result of the documentation is this. After training the model with negative/background images, my FP/FN rates were normalized, so that in video frames I started getting fewer FPs. All of us machine learning developers get experience by getting our hands dirty - this is clear to all of us. But I haven't seen (probably missed) any video tutorials or examples in books showing how to collect background images and why we need them at all.

So here is the question: every experienced ML engineer knows what FPs/FNs are and how to prevent them. But why is this topic rarely covered in popular object detection tutorials and books? Or am I missing something?

",47828,,2444,,7/1/2021 20:57,7/1/2021 20:57,Why the collection of background/negative image dataset is not taught in object detection tutorials and books?,,0,2,,,,CC BY-SA 4.0 28183,2,,28156,6/10/2021 12:24,,1,,"

These two steps solved my problem:

  1. I found that I forgot to freeze the pre-trained model by setting trainable = False.
  2. It seems that I failed to load the weights when I got the model from keras.applications, even though the documentation mentions:

Weights are downloaded automatically when instantiating a model. They are stored at ~/.keras/models/.

So I got the model from TensorFlow Hub instead, which worked correctly.
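For reference, this is roughly the pattern for the two fixes (pre-trained weights plus a frozen backbone); the specific backbone, input size and head here are just placeholders for illustration:

import tensorflow as tf

base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                         input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the pre-trained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(10, activation="softmax"),  # task-specific head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])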

",33808,,33808,,7/6/2022 13:26,7/6/2022 13:26,,,,0,,,,CC BY-SA 4.0 28186,2,,28178,6/10/2021 16:58,,1,,"

A bag-of-words-model (BOW) is usually used to represent a text: you throw all the words together (as if in a bag), without keeping track of their sequence. This is a gross simplification over a text, as word sequencing plays an important role in creating the meaning of a text. But on the positive side it's easier to handle, eg in information retrieval tasks, where you might not need the precise meaning anyway.

So the BOW is discrete and symbolic, as it represents each of its elements by a set of words that are contained in it. Nothing numeric in there. You'd calculate the similarity of two items by comparing the two sets, how big is their intersection, and the difference between the two.

A CBOW is a slight modification: instead of the words, we use vector representations of them; and instead of having $n$ vectors for the $n$ surrounding words, they're all added up (formula 14.14). It's still a BOW, as the set of words used to represent an element is now the set of words surrounding it within a certain distance ($h$). What makes it continuous is the switch from a set of words (i.e. symbols) to a vector.

He contrasts this with a recurrent neural network, where words are represented by a state vector which gets updated after every new word, going back to the very beginning of the text. This would give different representations for the same word occurring in the same localised context, whereas the CBOW would return the same representation.

For example, for $h$ being 1 (to keep it simple):

when a word has a meaning, then a word has a purpose.

Now imagine we're interested in the encoding of word: in the recurrent case the first one is when + a + word, whereas the second one is when + a + word + has + a + meaning + , + then + a + word — the sequences here represent the updated state of the network after the respective words have been added.

In the CBOW case, both occurrences of word are encoded by a + word + has (the word plus/minus one word either side, as $h$ is 1). So they will be identical.
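As a toy sketch (random vectors, $h$ = 1), the CBOW context representation is literally just the sum of the embeddings in that window:

import numpy as np

rng = np.random.default_rng(0)
vocab = ["when", "a", "word", "has", "meaning", ",", "then", "purpose", "."]
emb = {w: rng.normal(size=4) for w in vocab}   # made-up 4-dimensional embeddings

tokens = "when a word has a meaning , then a word has a purpose .".split()
i = tokens.index("word")                       # first occurrence of "word"
h = 1
window = tokens[max(0, i - h): i + h + 1]      # ["a", "word", "has"]
context = sum(emb[w] for w in window)          # one continuous vector
print(window, context)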

To answer your question, continuous here is in contrast to discrete or symbolic, and indeed refers to a numerical vector.

",2193,,,,,6/10/2021 16:58,,,,0,,,,CC BY-SA 4.0 28187,2,,28128,6/10/2021 17:26,,0,,"

So after doing a bit of research, I finally found out why the model is not working at all when I change the LSTM to Bi-LSTM.

The task being learned is Next Word Prediction for each cell of the LSTM. When you have a uni-directional LSTM, this is inherently a tough task: the model must learn good representations that help it generate the next word with enough confidence.

What happens when you change the model to a Bi-LSTM is that, if you concatenate the forward and backward values of each cell together, you now have the information about the very next word you wanted to predict via the backward route.

To alleviate this issue, Wang et al. propose to do prediction on the forward and backward routes separately during training and, when generating, to see which route has more confidence in its generated caption.

",36376,,,,,6/10/2021 17:26,,,,0,,,,CC BY-SA 4.0 28190,1,,,6/11/2021 5:46,,0,164,"

Word2Vec model does not use any neural network. It uses logistic regression only.

Consider the following paragraph from p:18 of Vector Semantics and Embeddings

We’ll see how to do neural networks in the next chapter, but word2vec is a much simpler model than the neural network language model, in two ways. First,word2vec simplifies the task (making it binary classification instead of word prediction). Second, word2vec simplifies the architecture (training a logistic regression classifier instead of a multi-layer neural network with hidden layers that demand more sophisticated training algorithms). The intuition of skip-gram is:

  1. Treat the target word and a neighboring context word as positive examples.

  2. Randomly sample other words in the lexicon to get negative samples.

  3. Use logistic regression to train a classifier to distinguish those two cases.

  4. Use the learned weights as the embeddings.

But why is it called a neural model then? Is there any version of Word2Vec that uses a neural network?

",18758,,18758,,6/27/2021 2:48,6/27/2021 2:48,Why Word2Vec is called a neural model if no neural network is used in it?,,0,9,,,,CC BY-SA 4.0 28193,1,,,6/11/2021 9:27,,0,25,"

I am comparing my coded TD3 (Twin-Delayed DDPG) and the same TD3 (same hyperparameters) but with Priority Replay Buffer instead of a normal Replay Buffer.

From what I have read, PER (Prioritized Experience Replay, i.e. the priority replay buffer) aims to improve sample efficiency. But how do I measure or quantify sample efficiency for these two? Is it whichever gets the highest average reward in a given number of episodes? Does it have something to do with the batch size?

",33902,,33902,,6/11/2021 11:02,6/11/2021 11:02,How do I quantify the difference in sample efficiency for two almost similar methods?,,0,2,,,,CC BY-SA 4.0 28196,2,,28166,6/11/2021 10:28,,0,,"

I am writing the answer according to my current understanding

Distributional semantics are computed from context statistics.

It is clear from the statement that the embedding of a word, in case of distributional semantics, is computed from the context statistics i.e., based on the contexts in which the word occurs.

Distributed semantics are a related but distinct idea: that meaning can be represented by numerical vectors rather than symbolic structures.

This means that embeddings obtained from distributed semantics may not be obtained from symbolic structures. Now, we need to understand what is meant by symbolic structure in this case. It may be the same as the context of a word. We can understand it from the following definition of symbol structure

A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (symbol structure). Thus a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being next to another).

So, it can be understood that distributed semantics are embeddings that are not necessarily obtained through context statistics alone, as in the case of distributional semantics. For example, there are distributed representations beyond distributional statistics, in which embeddings are calculated from the internal structure of words and not from the context in which the word occurs (p 341). One can understand it from the following excerpt from the same page

How can word-internal structure be incorporated into word representations? One approach is to construct word representations from embeddings of the characters or morphemes.

Thus, to be concise, the embedding for the word cat is obtained only from the context statistics of cat in the case of distributional semantics, while in the case of distributed semantics, the embedding of the word millicuries can be calculated from the embeddings of the morphemes $milli, curie, s$ rather than from the context statistics of the word millicuries, since it is a rare word which is unlikely to have reliable context information available.

",18758,,2193,,6/11/2021 11:13,6/11/2021 11:13,,,,3,,,,CC BY-SA 4.0 28197,1,28200,,6/11/2021 10:52,,2,497,"

I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.3 Choice of Activation and Loss Functions says the following:

The choice of activation function is a critical part of neural network design. In the case of the perceptron, the choice of the sign activation function is motivated by the fact that a binary class label needs to be predicted. However, it is possible to have other types of situations where different target variables may be predicted. For example, if the target variable to be predicted is real, then it makes sense to use the identity activation function, and the resulting algorithm is the same as least-squares regression. If it is desirable to predict a probability of a binary class, it makes sense to use a sigmoid function for activating the output node, so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, of the dependent variable is $1$.

I've read about sigmoid functions, but it isn't clear to me how they make it so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, of the dependent variable is $1$. So how do sigmoid functions make it so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, of the dependent variable is $1$?

EDIT: I am specifically asking about the probability that the value is $1$ (that is, how sigmoid functions specifically check for this).

",16521,,16521,,6/11/2021 11:18,6/13/2021 12:26,"How do sigmoid functions make it so that the prediction $\hat{y}$ indicates the probability that the observed value, $y$, is $1$?",,1,2,,,,CC BY-SA 4.0 28200,2,,28197,6/11/2021 13:01,,2,,"

I am specifically asking about the probability that the value is 1 (that is, how sigmoid functions specifically check for this).

They don't in general. In the quoted text, there is an explicit constraint that means this can be the case:

If it is desirable to predict a probability of a binary class

(emphasis mine). This means that the target value $y \in \{0,1\}$. Which in turn means that training data labels will all be $0$ or $1$.

At this point, if you pass in some calculated weighted sum of features to a sigmoid function, it will output a real value between $0.0$ and $1.0$ that can be interpreted as a probability. But without more going on, it is not linked in any way to the probability of the label being $1$.

To complete things, there is also an implicit constraint (not referred in your quoted text) that you will use a training approach that will cause this weighted sum of features to drive the sigmoid to represent a meaningful probability based on the source data. The most important part of this is to use a cost function which is minimised when $\hat{y} = \mathbb{P} \{y = 1 \mid \mathbf{x}\}$ where $\mathbf{x}$ is the input features associated with a label. The most usual cost function here would be based on binary cross-entropy loss:

$$C = -\frac{1}{N}\sum_{i=1}^{N}y_i\text{log}(\hat{y}_i) + (1-y_i)\text{log}(1 -\hat{y}_i)$$

You can show using calculus that this term is minimised when $\hat{y}$ is directly related to the frequency of $y = 1$ in the training data. Adding features $\mathbf{x}$ makes this more complex, but as a first pass to understanding what this is doing, you can simply use a list of $N$ times $0$ or $1$ outputs with no input data, and show that the sum above is minimised by a fixed $\hat{y}$ equal to the proportion of $y = 1$ – the rationale is that you would use a fixed $\hat{y}$ as your guess for each value if you had no input data about any example to go on.
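Here is a quick numerical check of that claim (a toy sketch; any $N$ and $M$ will do):

import numpy as np

N, M = 200, 60
y = np.array([1] * M + [0] * (N - M))
candidates = np.linspace(0.001, 0.999, 999)
loss = [-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)) for p in candidates]
print(candidates[int(np.argmin(loss))], M / N)   # both approximately 0.3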

One important detail of the related maths is that the activation function does not affect the target of the convergence. However, it may still affect the speed and stability of the convergence. The combination of binary cross-entropy loss and sigmoid activation is popular because the gradient calculation is simple and relatively stable.


Here is the maths worked through for a population of $N$ examples $y_i$ with no feature data (so no $\mathbf{x}_i$), $M$ of which have $y_i = 1$ and $N - M$ of which have $y_i=0$. We will guess a fixed value of $\hat{y}$ so that it minimises the mean cross-entropy loss:

$$\text{argmin}_{\hat{y}} C = \text{argmin}_{\hat{y}}-\frac{1}{N}\sum_{i=1}^{N}y_i\text{log}(\hat{y}) + (1-y_i)\text{log}(1 -\hat{y})$$

The factor of $\frac{1}{N}$ can be removed as it doesn't affect what value of $\hat{y}$ causes the minimum. In addition, we have $M$ values where $y = 1$ and $N - M$ values where $y = 0$, so we can simplify the sum.

$$= \text{argmin}_{\hat{y}}-M\text{log}(\hat{y}) - (N-M)\text{log}(1 -\hat{y})$$

To find the stationary point, take the derivative and set equal to zero (I reversed signs here since $-0 = 0$).

$$\frac{d}{d\hat{y}}[ M\text{log}(\hat{y}) + (N-M)\text{log}(1 -\hat{y})] = 0$$

$$\frac{M}{\hat{y}} - \frac{N-M}{1-\hat{y}} = 0$$

$$(1-\hat{y})M - \hat{y}(N-M) = 0$$

$$M = \hat{y}N$$

$$\hat{y} = \frac{M}{N}$$

This of course is the same as the probability that a randomly selected $i \in (1,N)$ will have $y_i = 1$.

",1847,,16521,,6/13/2021 12:26,6/13/2021 12:26,,,,1,,,,CC BY-SA 4.0 28201,1,,,6/11/2021 13:32,,0,203,"

I'm having trouble understanding how bias is added to the feature extraction convolution. I've seen people either refer to the bias as a single number that changes per filter or the whole matrix that is the size of the output. Here is what I mean:

  • $I$ is the input single-channel image.
  • $F$ is the filter.
  • $b$ is the bias.
  • "Izhod" means "output".

Which is actually the correct bias used in CNN?

",47855,,2444,,6/12/2021 2:12,6/12/2021 2:12,How is the bias added after the convolution in a CNN?,,0,2,,,,CC BY-SA 4.0 28202,1,28260,,6/11/2021 15:11,,3,115,"

I'd like to prove this "second form" of Bellman's equation: $v(s) = \mathbb{E}[R_{t + 1} + \gamma v(S_{t+1}) \mid S_{t} = s]$ starting from Bellman's equation: $v(s) = \mathbb{E}[G_{t} \mid S_{t} = s]$ where the return $G_{t}$ is defined as follows: $G_{t} = \sum_{k=0}^{\infty}{\gamma^{k}R_{t+k+1}}$.

I tried to use the linearity of the expectation as follows: $v(s) = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \mathbb{E}[\sum_{k = 1}^{\infty}{\gamma^{k}R_{t+k+1}} \mid S_{t} = s]$

Which gives us: $v(s) = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \gamma\mathbb{E}[\sum_{k = 0}^{\infty}{\gamma^{k}R_{(t + 1) + k + 1}} \mid S_{t} = s] = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \gamma\mathbb{E}[G_{t + 1} \mid S_{t} = s]$

I also tried to develop the second formula: $v(s) = \mathbb{E}[R_{t+1} \mid S_{t} = s] + \gamma\mathbb{E}[v(S_{t+1}) \mid S_{t} = s]$ and I'm tempted to say that $\mathbb{E}[G_{t+1} \mid S_{t} = s] = \mathbb{E}[v(S_{t+1}) \mid S_{t} = s]$ but that would only be right in the case that both follow conditions are verified:

  1. We have the value function of a particular state $s^\prime$ inside the expectation of the second term (something like $\mathbb{E}[v(s^\prime) \mid S_{t} = s]$ which would directly give $v(s^\prime)$ since it's a scalar) and not $v(S_{t+1})$.
  2. We have $\mathbb{E}[G_{t+1} \mid S_{\textbf{t+1}} = s^\prime]$ in the second term.

I'm probably not understanding something correctly especially what $v(S_{t+1})$ would mean (that wasn't covered in the material I'm following but for me it would be just a function that maps the possible states at time step $t+1$ to the expected return starting from that step at that time step).

",44965,,16521,,6/15/2021 11:15,6/15/2021 11:15,How to prove the second form of Bellman's equation?,,1,7,,,,CC BY-SA 4.0 28207,1,28241,,6/11/2021 23:55,,0,121,"

What is the difference between the definition of "accuracy" in machine learning and federated learning?

In particular, how is the accuracy calculated in the following paper:

Cai, Lingshuang, et al. "Dynamic Sample Selection for Federated Learning with Heterogeneous Data in Fog Computing." ICC 2020-2020 IEEE International Conference on Communications (ICC). IEEE, 2020.

",45605,,45605,,6/19/2021 3:48,6/19/2021 12:29,"What is the difference between the definition of ""accuracy"" in machine learning and federated learning?",,1,2,,,,CC BY-SA 4.0 28209,1,28213,,6/12/2021 10:52,,1,48,"

I'm thinking of implementing the "Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation" paper. In this paper, the authors used a custom object detector for entity detection (e.g. key, rope, ladder, etc.), but they did not give any information about this custom detector. Can you please give me a suggestion on how to implement this object detector?

",47884,,36821,,6/12/2021 13:38,6/13/2021 21:33,How to detect entities in Montezuma's Revenge environment,,1,2,,,,CC BY-SA 4.0 28210,1,,,6/12/2021 16:29,,1,26,"

For example, I have a massive amount of data, but I have limited computational resources and time to train on the full data. Another case: I have huge amounts of 360-degree images, and I need to train on full-size images (without cropping them down), but I have limited computational power (GPU, RAM, etc.). What can I do in those cases?

",45720,,,,,7/12/2021 19:00,What to do when you have massive amount of data but you don't have enough computation power for training a machine learning model?,,1,0,,,,CC BY-SA 4.0 28212,2,,28210,6/12/2021 17:10,,1,,"

It's hard to answer this question without knowing what your goal is, but if your data is extensive, high quality, especially if it is labelled, and no similar dataset is publicly available, then publishing it freely with some kind of challenge could be very helpful if that's an option. Many organisations have the opposite problem: available computing resources but lack of data. If your data are new and interesting, I could imagine researchers wanting to use it. If people get good results, they may publish them, good for AI generally, and presumably also useful to you. If you do this of course you would need to publicise it a bit so people become aware of it.

",45018,,,,,6/12/2021 17:10,,,,0,,,,CC BY-SA 4.0 28213,2,,28209,6/12/2021 17:21,,0,,"

The quote from the paper is:

In this work, we built a custom object detector that provides plausible object candidates.

And in their related submission to NeurIPS:

In this work, we built a custom pipeline to provide plausible object candidates. Note that the agent is still required to learn which of these candidates are worth pursuing as goals.

I think that this detects and identifies "screen locations of interest" to create parameterised sub-goals - e.g. one goal might be to make objects A and B coincide. It is also not clear whether static objects such as rope and ladders are included, or the detector is more fine-tuned to "active" entities such as the player, keys, doors, enemies etc.

This also gives a clue:

The internal critic is defined in the space of entity1, relation, entity2, where relation is a function over configurations of the entities. In our experiments, the agent learns to choose entity2. For instance, the agent is deemed to have completed a goal (and only then receives a reward) if the agent entity reaches another entity such as the door

It implies that the entity detector does identify the player-controlled entity and sets this as entity1 for all the top level goals. All the high level goals are stated in terms of "player coincides with B" or "player avoids C".

Can you please give me a suggestion on how to Implement this object detector?

The implication is that they created something simple and fast that would specifically detect important objects in the game, so that it would find and identify all the objects that could reasonably be used in sub-goals. Objects in old Atari games are visually distinct and simple, so it is likely to be something quite basic. The reference code linked by João Schapke implies perhaps template matching, because there are suitable images in a "template" directory, but I was unable to find the object detector functions from scanning the code quickly.
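If you want to reproduce something in that spirit, a minimal template-matching sketch could look like the following; the sprite file names and the threshold are assumptions for illustration, not the paper's actual code:

import cv2
import numpy as np

def detect_entities(frame_gray, templates, threshold=0.8):
    """Map entity name -> list of (x, y) top-left corners of matches."""
    detections = {}
    for name, template in templates.items():
        # normalised cross-correlation between frame and sprite template
        scores = cv2.matchTemplate(frame_gray, template, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(scores >= threshold)
        detections[name] = list(zip(xs.tolist(), ys.tolist()))
    return detections

# hypothetical sprite images cropped from game frames
templates = {name: cv2.imread("templates/%s.png" % name, cv2.IMREAD_GRAYSCALE)
             for name in ["agent", "key", "door", "skull", "ladder"]}
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
print(detect_entities(frame, templates))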

It is also important to note that this custom detector was only used to build a selection of sub-goals and detect whether any had been achieved. It was not used as input to the neural networks (which had to learn their own representations of the same objects in order to make decisions), and did not set internal rewards for achieving those sub-goals. The top level policy selected amongst the sub-goals based on processing the video frames - this selection decided the lower level policy's reward. The algorithm used the detector only to assess whether a sub-goal had actually happened, at which point the reward for achieving it would be granted to the lower level policy. Whilst the upper level policy was granted rewards as scored by the original game.

",1847,,1847,,6/13/2021 21:33,6/13/2021 21:33,,,,1,,,,CC BY-SA 4.0 28215,1,,,6/12/2021 19:30,,1,48,"

I'm new to AI/ML and I want to research and learn about techniques that could help me to solve this complex task. Any hint would be appreciated.

Let me explain it with an example:

Let's look at two columns PUR.SUPPLY.MTL_REQ_HDR_ID and MTL.PO_REQUISITION_HEADERS_TAB.ID. It is likely they are related (one is the FK and the other one is PK).

As a human I was able to do it by doing the following:

  1. I decoded abbreviations,
  2. I identified context (MTL module),
  3. I identified the subject (requisition header),
  4. I identified ID keyword,
  5. I identified irrelevant information (TAB postfix),
  6. I matched words that are not in the exact order,
  7. I estimated which elements/words do not have to match/occur,

I would like to match millions of columns relatively quickly (seconds). I would like the algorithm to learn:

  1. what words are likely context,
  2. what are irrelevant,
  3. what are subjects,
  4. ideally learn some patterns (prefixes, postfixes, name formats, etc),
  5. based on user responses - approve/reject match,
  6. build dictionary of abbreviations,
  7. estimate probability...

I know this is a complex task, but maybe you know a technique, tool, library, article, example ... anything that could be helpful? Any help would be appreciated.

Thanks.

",47894,,,,,6/12/2021 19:30,Text matching: fuzzy names matching with learning,,0,4,,,,CC BY-SA 4.0 28217,1,,,6/13/2021 0:04,,3,155,"

Term frequency and inverse document frequency are well-known terms in information retrieval.

I am presenting the definitions for both from p:12,13 of Vector Semantics and Embeddings

On term frequency

Term frequency is the frequency of the word $t$ in the document $d$. We can just use the raw count as the term frequency:

$$tf_{t, d} = \text{count}(t, d)$$

More commonly we squash the raw frequency a bit, by using the $\log_{10}$ of the frequency instead. The intuition is that a word appearing 100 times in a document doesn’t make that word 100 times more likely to be relevant to the meaning of the document.

On inverse document frequency

The $\text{idf}$ is defined using the fraction $\dfrac{N}{df_t}$, where $N$ is the total number of documents in the collection, and $\text{df}_t$ is the number of documents in which term $t$ occurs.......

Because of the large number of documents in many collections, this measure too is usually squashed with a log function. The resulting definition for inverse document frequency ($\text{idf}$) is thus

$$\text{idf}_t = \log_{10} \left(\dfrac{N}{df_t} \right)$$

If we observe the bolded portion of the quotes, it is evident that the $\log$ function is used commonly. It is not used only in these two definitions; it appears across many definitions in the literature, for example: entropy, mutual information, log-likelihood. So, I don't think squashing is the only purpose behind using the $\log$ function.

Is there any reason for selecting the logarithm function for squashing? Are there any advantages of $\log$ compared to other available squashing functions?

",18758,,2444,,6/15/2021 11:12,6/16/2021 16:06,Why do we commonly use the $\log$ to squash frequencies?,,2,0,,,,CC BY-SA 4.0 28220,1,,,6/13/2021 4:01,,0,58,"

Suppose $X$ is a random variable taking $k$ values.

$$Val(X) = \{x_1, x_2, x_3, \cdots, x_k\} $$

Then what is the following expression of $N(X)$ called in literature if exists? What does it signify?

$$ N(X) = \prod \limits_{i = 1}^{k} p(x_i)^{p(x_i)}$$

I am using the notation $N(X)$ for the sake of my convenience only.


Background: I am asking this question because of the definition of entropy I encountered. Entropy is calculated as follows.

$$ H(X) = - \sum\limits_{i = 1}^{k} p(x_i) \log p(x_i) $$

If I further solve $H(X)$ as follows, I will get $H(X)$ in terms of $N(X)$.

$$ H(X) = - \sum\limits_{i = 1}^{k} p(x_i) \log p(x_i) = - \sum\limits_{i = 1}^{k} \log p(x_i)^{p(x_i)} $$ $$\implies H(X)= - \log \prod \limits_{i = 1}^{k} p(x_i)^{p(x_i)} = - \log N(X)$$

Entropy is used to characterize the unpredictability of a random variable.


A logarithm is generally applied to many quantities in AI in order to bring them into a desirable range where overflow and underflow won't happen. Hence I am thinking that $\dfrac{1}{N(X)}$ is the actual quantity one has to measure (the entropy?), and that $N(X)$ can be treated as a kind of reciprocal of entropy. So, is $N(X)$ a quantity that quantifies the predictability of a random variable?

$$N(X) = \dfrac{1}{2^{H(X)}} = \dfrac{1}{2^{entropy}}$$

So, I am wondering whether there is any quantity that $N(X)$ quantify.

",18758,,2444,,12/23/2021 12:53,12/23/2021 13:03,What does the product of probabilities raised to own powers used for entropy calculation quantify?,,1,1,,,,CC BY-SA 4.0 28221,1,,,6/13/2021 11:06,,0,19,"

This code accesses the first 3 examples in the iris data set,

from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data[:3])

and gives

[[5.1 3.5 1.4 0.2]
 [4.9 3.  1.4 0.2]
 [4.7 3.2 1.3 0.2]]

To denote the first example, $x_1$, should I use a column vector like

$$\begin{bmatrix} 5.1\\3.5\\1.4\\0.2 \end{bmatrix}$$

or a row vector like the following?

$$[5.1 \ 3.5 \ 1.4 \ 0.2]$$

Andrew Ng suggests putting examples in columns

while typical relational databases put examples in rows.

I'd just like to know the pros and cons of different notations so that I can decide which one I would follow.
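For what it's worth, scikit-learn itself puts examples in rows, so continuing the snippet at the top of the question:

print(iris.data.shape)   # (150, 4): one row per example, one column per feature
print(iris.data[:3].T)   # transpose if you prefer the columns-as-examples convention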

",45689,,45689,,6/18/2021 9:34,6/18/2021 9:34,To denote a training example should I use row vector or column vector?,,0,2,,,,CC BY-SA 4.0 28222,1,,,6/13/2021 11:18,,1,34,"

I want to create a reinforcement learning environment, designed for win tunnel simulations, where for each iteration a deep convolutional model could receive the 3D vector/scalar fields from the past simulation and output a better shape that maximizes the reward function (e.g. minimize drag, maximize lift, etc.)

The observation and action space for the neural network is the same, the inputs of the model will be 3D arrays representing velocity field, pressure field, etc. and the output will be a 3D array (created using Conv3DTranspose) with values [0, 1] which represents the mesh. I'm thinking that the architecture of the model could be something similar to an auto-encoder. My plan is to use the algorithm of Marching Cubes in order to create the mesh from those points and openFoam for the CFD simulations.

This is a small diagram showing the workflow

The goal will be to have multiple trained models specialized in optimizing a particular reward function, like minimizing drag or maximizing lift, for any object/shape given as input. What are your thoughts on this? Do you think it makes sense?

",47903,,,,,6/13/2021 11:18,CFD Reinforcement Learning Topology optimization wind tunnel,,0,0,,,,CC BY-SA 4.0 28224,1,,,6/13/2021 12:16,,1,12,"

I am confused about the way LSTM networks work when forecasting with a horizon that is not finite, i.e. when I am looking for a prediction arbitrarily far into the future. In physical terms, I would call it the evolution of the system.

Suppose I have a time series $y(t)$ (output) I want to forecast, and some external inputs $u_1(t), u_2(t),\cdots u_N(t)$ on which the series $y(t)$ depends.

It's common to use the lagged value of the output $y(t)$ as input for the network, so that schematically I have something like this (for simplicity, let's consider just lag 1 for the output and no lag for the external inputs): $$ [y(t-1), u_1(t), u_2(t),\cdots u_N(t)] \to y(t) $$ With the network set up this way, when one wants to do a recursive forecast, it is forced to use the predicted value at the previous step as input for the next step. This error propagation makes the long-term forecast behave badly.

Now, here is my confusion. I think of an RNN as a kind of (simple) implementation of a state-space model, where I have the inputs, my output, and one or more state variables responsible for the memory of the system. These variables are hidden and not observed.

So now the question: if there is this kind of variable that already takes into account previous states of the system, why would I need to use the lagged output value as an input to my network/model?

If I got rid of this, would my long-term forecast be better, since I would no longer expect the error of the forecasted output to propagate? (I guess there would anyway be an error propagating in the internal state.)

Thanks !

",47907,,,,,6/13/2021 12:16,LSTM Forecast Evolution,,0,0,,,,CC BY-SA 4.0 28226,1,,,6/13/2021 14:13,,1,92,"

I have structured data and image data to solve a regression problem. One sample of structured data can be related to N images.

If I use only structured data, I get decent performance, but not enough to properly solve the problem. I want to use related images to the structured data to improve performance.

My approach was to create 3 neural networks. The first one for the image input, the second one for structured input, and the third one to combine both image and structured networks and output the final result.

The main problem is how to properly combine one sample of structured data with N images. All the images are already saved as bottleneck features from one of the Keras Applications models. I combined the structured data with each corresponding image (duplicating the structured sample for each corresponding image) and got a very good result. But investigation showed that the validation dataset contained structured samples that also appeared in training, only combined with different images. So the network just memorized the dataset very well (on 110k samples), giving great synthetic results and bad generalization on real-world data. After I fixed the validation and training datasets (so that they do not share the same structured samples), the neural net showed its real performance, which is bad.

So my question is: what is the state-of-the-art way to combine one sample of structured data with N images? Of course, the structured data and images are logically connected. Train 2 neural networks separately and then combine their outputs in a third network? Or train all three networks at once? Or maybe train the images with a CNN and then combine the CNN output with the structured data using some gradient boosting algorithm?

",29886,,2444,,6/14/2021 1:08,6/14/2021 1:08,What is the best way to train neural network with imbalanced mixed data (images and structured data)?,,0,0,,,,CC BY-SA 4.0 28227,1,,,6/13/2021 14:18,,1,18,"

How could we prevent poisoning attacks in adversarial machine learning?

I read about it from this link and other sources. As per my understanding, poisoning could be done after the ML algorithm has been built, or while building up the model with test data. One example I read: while a car is driving, a small image pasted on a wall could make it turn left, because the car's AI algorithm misclassifies it.

But to poison the test data, the attacker needs access to the internal software before the model is built, so that the resulting model is corrupted. How could an attacker do that? That seems impossible. Or perhaps it applies in cases where the ML model is being built dynamically.

Just poured my thoughts out. I am interested in knowing thoughts about the above, and, specifically, what are the ways in which poisoning could be prevented?

",47912,,2444,,6/13/2021 19:49,6/13/2021 19:49,How could poisoning attacks be prevented in adversarial Machine Learning,,0,0,,,,CC BY-SA 4.0 28228,1,,,6/13/2021 14:25,,1,18,"

I came across the following definition of Backdoor attack (in this paper):

These attacks are accomplished in two steps. First, special patterns are embedded in the targeted model during the training phase, which is typically achieved by poisoning training data.

How could the training data be poisoned? Isn't the training data local to the software developer who is developing the ML algorithm? And won't they train the model on their local machine (it could be a company too) before releasing the software?

",47913,,2444,,6/14/2021 0:37,6/14/2021 0:37,How could an attacker poision the training data?,,0,0,,,,CC BY-SA 4.0 28229,1,,,6/13/2021 14:37,,1,20,"

Suppose we have a dataset $S = (x_1, \dots x_n)$ drawn i.i.d from distribution $D$, a learning algorithm $A$ and error function $err$. The performance of $A$ is therefore defined by the error/confidence pair $(\alpha, \beta)$, that is $$P(err(A(S)) \geq \alpha) \leq \beta$$ where the randomness is taken on $S$. Usually, by solving this inequality, we can get some constraints between $\alpha$ and $\beta$, in the form that $\alpha \geq f(\beta, n)$. My understanding is that if we treat $\beta$ as a constant, then we have the high probability error bound in terms of $n$. Is that correct?

Another question is: what if the function $f$ we get is not uniform across all $\beta$, for example \begin{equation} \alpha \geq \begin{cases} f_1(n, \beta) \quad \beta \geq 0.5 \\ f_2(n, \beta) \quad \beta< 0.5 \end{cases} \end{equation} In this case, how do we derive the high probability error bound?

",47911,,2444,,7/1/2021 21:04,7/1/2021 21:04,Characterize the high probability bound for learning algorithm,,0,0,,,,CC BY-SA 4.0 28230,1,,,6/13/2021 15:34,,1,46,"

I'm trying to complete a captcha, and here is what it looks like:

Between captchas the calligraphy of the letters is the same, but the letters may be resized and rotated. And the background noise (the small dots and lines over and around the letters) will be different. Any Hangul letter may appear.

Edit 1: I can generate any number of new captchas with an answer for each of them. But to be clear, the answers that are generated are for entire captchas, that is, multiple Hangul letters arranged in a specific order as the answer for each captcha, not for individual letters.

  1. What type of machine learning is best for this problem?
  2. How do I extract good data from the image above for this problem?

Update 1: Unfortunately no one has given any suggestions for how to solve this yet. My idea at the moment is to mimic the model in this paper: https://www.ics.uci.edu/~xhx/publications/HHR.pdf

",47914,,47914,,7/15/2021 12:49,7/15/2021 12:49,Identifying rotating and resizing letters with background noise,,0,2,,,,CC BY-SA 4.0 28232,1,,,6/13/2021 18:24,,1,140,"

What is the best method to generate German paraphrases? The state-of-the-art are seq2seq transformer models, like T5, but they only work for English sentences. I found the multilingual MT5 model, but how do you fine-tune this for German?

",47915,,2444,,6/14/2021 0:43,6/14/2021 0:43,What is the best way to generate German paraphrases?,,0,3,,,,CC BY-SA 4.0 28233,2,,28100,6/13/2021 18:36,,0,,"

If it's a 2-player game it goes a little deeper into RL if you want both sides to be RL algorithms. I recommend reading about game theory and what is a Nash Equilibrium to start. For algorithms hiivemdptoolbox has openai-gym compatibility as well as Q-Learning. You will need to add code to make 2 learners play each other. I would also recommend adding Dyna-Q to the Q-Learner as it will probably speed up the learning.

",47898,,,,,6/13/2021 18:36,,,,1,,,,CC BY-SA 4.0 28234,2,,16510,6/13/2021 21:28,,1,,"

It seems to me that the article is approaching it from the perspective of the base classifier. For example, if the base classifier is a Decision Tree with a max depth of 1 (or with other severely limiting factors), it will underfit. In general, boosting adds a classifier of the same structure and trains it on the data the previous classifier got incorrect, which leads to a more general model; hence "less underfitting".

",47898,,,,,6/13/2021 21:28,,,,0,,,,CC BY-SA 4.0 28235,1,28274,,6/13/2021 23:36,,2,292,"

I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.3 Choice of Activation and Loss Functions presents the following figure:

$\overline{X}$ is the features, $\overline{W}$ is the weights, and $\phi$ is the activation function.

So this is a perceptron (which is a form of artificial neuron).

But where does the so-called 'loss' / 'loss function' fit into this? This is something that I've been unable to reconcile.


EDIT

The way the loss function was introduced in the textbook seemed to imply that it was part of the architecture of the perceptron / artificial neuron, but, according to hanugm's answer, it is external and instead used to update the weights of the neuron. So it seems that I misunderstood what was written in the textbook.

In my question above, I pretty much assumed that the loss function was part of the architecture of the perceptron / artificial neuron, and then asked how it fit into the architecture, since I couldn't see any indication of it in the figure.

Is the loss / loss function part of the architecture of a perceptron / artificial neuron? I cannot see any indication of a loss / loss function in figure 1.7, so I'm confused about this. If not, then how does the loss / loss function relate to the architecture of the perceptron / artificial neuron?

",16521,,16521,,6/16/2021 9:51,6/16/2021 13:09,Where does the so-called 'loss' / 'loss function' fit into the idea of a perceptron / artificial neuron (as presented in the figure)?,,3,5,,,,CC BY-SA 4.0 28236,1,,,6/14/2021 0:22,,1,314,"

Consider the following formulation for pointwise mutual information (PMI):

$$\text{PMI}(w, c) = \dfrac{p(w, c)}{p(w)p(c)}$$

Suppose there are $W$ words with $C$ context words. Then one can write in terms of frequency that

$$\text{PMI}(w, c) = \dfrac{\sum\limits_{i = 1}^{W} \sum\limits_{j = 1}^{C} f_{ij} }{\sum\limits_{i = 1}^{W}f_i \sum\limits_{j = 1}^{C} f_j} $$

I am going to calculate $\text{PMI}(w, c)$ for two different words and contexts based on the following table. The table is taken from fig 6.10 of this book.

I calculated PMI for all pairs and tabulated below.

$$\begin{array}{|c|c|c|} \hline & \text{computer} & \text{data} & \text{result} & \text{pie} & \text{sugar} \\ \hline \text{cherry} & 8.2 \times 10^{-7} & 2.9 \times 10^{-6} & 3.9 \times 10^{-5} & 1.7 \times 10^{-3} & 8.4 \times 10^{-4} \\ \hline \text{strawberry} & 0 & 0 & 2.6 \times 10^{-5} & 1.4 \times 10^{-3} & 3.8 \times 10^{-3} \\ \hline \text{digital} & 9.6 \times 10^{-5} & 8.6 \times 10^{-5} & 5.2 \times 10^{-5} & 2.8 \times 10^{-6} & 1.9 \times 10^{-5} \\ \hline \text{information} & 8.6 \times 10^{-5} & 9.1 \times 10^{-5} & 1.03 \times 10^{-4} & 1.2 \times 10^{-6}& 2.7 \times 10^{-5}\\ \hline \end{array}$$

Based on the above values, we can also notice the following fact:

PMI has the problem of being biased toward infrequent events; very rare words tend to have very high PMI values.

However, it's unclear to me how this apparent behaviour is related to the mathematical formulation of the PMI above.

How do we understand the fact quoted above from the fractional form of PMI given by the equations above?

",18758,,18758,,6/14/2021 3:38,11/6/2022 11:00,How do very rare words tend to have very high PMI values?,,1,3,,,,CC BY-SA 4.0 28238,2,,21203,6/14/2021 3:47,,1,,"

ANNs can do the trick. Check out sklearn's ANN example with the Digits dataset, which consists of 28x28 pixel input data.
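For example, here is a minimal sketch with scikit-learn's MLPClassifier; the hyperparameters are arbitrary, and I use the small built-in 8x8 digits set for speed, but the same code works on 28x28 MNIST loaded via fetch_openml("mnist_784"):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))   # typically well above 0.9 on this small set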

",47898,,2444,,7/14/2021 10:56,7/14/2021 10:56,,,,0,,,,CC BY-SA 4.0 28241,2,,28207,6/14/2021 7:09,,0,,"

The paper in question is called Dynamic Sample Selection for Federated Learning with Heterogeneous Data in Fog Computing, by L. Cai, D. Lin, J. Zhang, and S. Yu. As the title suggests, the paper focuses on a machine learning methodology called federated learning.

Downloading the paper in question and doing a search for all occurrences of the word 'accuracy', I get 6 results. Going through these 6 results, I don't see anything that details the methodology used to assess 'accuracy'; the authors seem to assume that the reader already knows how 'accuracy' is being measured. However, it seems to me that this in itself might tell us something: if the paper is exploring the federated learning methodology, and if the authors do not detail the methodology used to assess 'accuracy', and instead write as if the reader is assumed to know how 'accuracy' is being measured, then it is likely that the methodology used to assess 'accuracy' is that which is typically used in federated learning, including the earlier and more fundamental papers on federated learning, which the authors are likely (and reasonably) assuming that readers of their paper are already familiar with and have already studied. Furthermore, it seems to me that, if we want to maximise the probability that the ('other' federated learning) paper is using the same methodology for 'accuracy' as the authors (of this paper), then, rather than searching through papers that the authors haven't referenced, we should search one of the papers that they have.

Given what I said above, of the 6 occurrences of the word 'accuracy' in the paper, the second one, in section I. Introduction, seems to me to be the most promising:

With the growing use of internet devices, federated learning becomes increasingly popular in machine learning, which is typically to train models using decentralized data residing on end devices. The learning task does not use the data required for the aggregation model training to perform centralized calculations, but rather distributes the computation of machine learning to the distributed computing of participation parties’ databases. It can solve the “data island” problem in the loT, and can ensure the data privacy of devices. Federated learning is a viable and emerging framework which pushes AI frontiers to the network edge and trains machine learning models for fog computing [17]. It provides several benefits for the edge big data processing, including energy efficiency, privacy protection, and communication cost. However, the data is distributed across millions of devices in a highly uneven manner 2, 34, [9]. In addition, these devices have higher latency, lower throughput connections, and can only be used intermittently for training. McMahan et al. 2 proposed a Federated Averaging Algorithm based on the optimization of synchronized stochastic gradient descent (SGD), which averaged the gradients updates by all users (or random parts) as the update parameters of the central model. It can effectively reduce communication rounds compared with SGD. Moreover, some performance optimization methods like structured updates and sketched updates aimed to improve communication efficiency and test accuracy in federated learning [9].

(Emphasis mine.)

Reference 9 is for the paper Federated Learning: Strategies for Improving Communication Efficiency by J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon. Downloading the paper in question and doing a search for all occurrences of the word 'accuracy', I get 13 results. Going through these 13 results, the only one that seems relevant is from section 4. Experiments, under 4.2 LSTM Next-Word Prediction on Reddit Data:

We constructed the dataset for simulating Federated Learning based on the data containing publicly available posts/comments on Reddit (Google BigQuery), as described by Al-Rfou et al. (2016). Critically for our purposes, each post in the database is keyed by an author, so we can group the data by these keys, making the assumption of one client device per author. Some authors have a very large number of posts, but in each round of FedAvg we process at most 32 000 tokens per user. We omit authors with fewer than 1600 tokens, since there is constant overhead per client in the simulation, and users with little data don’t contribute much to training. This leaves a dataset of 763 430 users, with an average of 24 791 tokens per user. For evaluation, we use a relatively small test set of 75 122 tokens formed from random held-out posts.

Based on this data, we train a LSTM next word prediction model. The model is trained to predict the next word given the current word and a state vector passed from the previous time step. The model works as follows: word $s_t$ is mapped to an embedding vector $e_t \in \mathbb{R}^{96}$, by looking up the word in a dictionary of 10 017 words (tokens). $e_t$ is then composed with the state emitted by the model in the previous time step $s_{t-1} \in \mathbb{R}^{256}$ to emit a new state vector $s_t$ and an “output embedding” $o_t \in \mathbb{R}^{96}$. The output embedding is scored against the embedding of each item in the vocabulary via inner product, before being normalized via softmax to compute a probability distribution over the vocabulary. Like other standard language models, we treat every input sequence as beginning with an implicit “BOS” (beginning of sequence) token and ending with an implicit “EOS” (end of sequence) token. Unlike standard LSTM language models, our model uses the same learned embedding for both the embedding and softmax layers. This reduces the size of the model by about 40% for a small decrease in model quality, an advantageous tradeoff for mobile applications. Another change from many standard LSTM RNN approaches is that we train these models to restrict the word embeddings to have a fixed L2 norm of 1.0, a modification found to improve convergence time. In total the model has 1.35M parameters.

In order to reduce the size of the update, we sketch all the model variables except some small variables (such as biases) which consume less than 0.01% of memory. We evaluate using AccuracyTop1, the probability that the word to which the model assigns highest probability is correct. We always count it as a mistake if the true next word is not in the dictionary, even if the model predicts ‘unknown’.

In Figure 4, we run the Federated Averaging algorithm on Reddit data, with various parameters that specify the sketching. In every iteration, we randomly sample 50 users that compute update based on the data available locally, sketch it, and all the updates are averaged. Experiments with sampling 10, 20, and 100 clients in each round provided similar conclusions as the following.

And later in the paper we have the following:

Note the following:

We evaluate using AccuracyTop1, the probability that the word to which the model assigns highest probability is correct.

So it seems that AccuracyTop1, whatever that is, is how 'accuracy' is being calculated. Googling "AccuracyTop1", we get this Stackoverflow question, and this answer to the question seems to explain how it works.
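Based on that answer, the computation appears to be ordinary top-1 accuracy, i.e. something like the following sketch (the arrays here are made up):

import numpy as np

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.3, 0.6],
                  [0.4, 0.5, 0.1]])   # model's probability over the vocabulary
labels = np.array([0, 2, 0])          # true next-word indices
top1_accuracy = float(np.mean(np.argmax(probs, axis=1) == labels))
print(top1_accuracy)                  # 2/3 in this toy case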


Disclaimer: This is all beyond my current knowledge and understanding, and I came to these conclusions purely through critical thinking and investigative research. It's totally possible that everything in my answer is nonsense, so feel free to correct me (or use my answer to find the correct answer).

",16521,,2444,,6/19/2021 12:29,6/19/2021 12:29,,,,0,,,,CC BY-SA 4.0 28242,2,,28236,6/14/2021 7:16,,0,,"

Note: mutual information is typically expressed as a log value (usually $log_2$, as it's related to information), which makes them easier to compare -- you then don't have to worry about large exponential expressions with negative exponents.

The reason for the bias is in the distributional properties of words. A rare word will have a small frequency of occurrence (by definition), so multiplied with a more common word, the denominator will be fairly small. But the numerator will also be very small, as there aren't that many opportunities for the rare word to occur near the more common word.

However, while with two common words the numerator will be much higher (more co-occurrences), the denominator will now — compared to the rare/common instance — be several orders of magnitude larger. Thus the overall value will be smaller.

Rare co-occurrences are overly dependent on chance, as the numerator can easily fluctuate randomly (say, 1 or 2 co-occurrences), and with a comparatively smaller denominator that can make a big difference.

In linguistics, you would generally ignore mutual information values above a certain threshold, as the values are just too unreliable to be meaningful. In fact, when I was still working in academia, mutual information was increasingly replaced by other metrics, such as log-likelihood, which were more robust.
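As a rough illustration of why rare words inflate the score, here is a minimal pointwise mutual information calculation from raw counts; all numbers are made up:

import math

def pmi(cooc, freq_x, freq_y, total):
    """Pointwise mutual information from co-occurrence and marginal counts (log2, i.e. bits)."""
    p_xy = cooc / total
    p_x, p_y = freq_x / total, freq_y / total
    return math.log2(p_xy / (p_x * p_y))

total = 1_000_000
# a rare word co-occurring twice with a common word gets a high, unstable score
print(pmi(cooc=2, freq_x=5, freq_y=20_000, total=total))          # ~4.3
# two common words with many co-occurrences get a much lower score
print(pmi(cooc=150, freq_x=20_000, freq_y=30_000, total=total))   # ~-2.0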

",2193,,,,,6/14/2021 7:16,,,,1,,,,CC BY-SA 4.0 28243,2,,28217,6/14/2021 7:31,,2,,"

It's much easier to deal with logarithms, as the relevant numbers are usually very small or very large. If you have a long exponential expression, it's hard to see the difference, but if you're looking at 4.3 vs 5.6, you can immediately see what's happening. And logarithms are a well-known (and well-understood) way of achieving this compression. You can easily interpret the difference, depending on the base of the logarithm used.

Quite often the $\log_2$ is used when you're dealing with entropy or information, as those are usually expressed in bits.
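A tiny illustration of the compression this gives (purely made-up numbers):

import math

p1, p2 = 2 ** -20, 2 ** -23           # two very small probabilities
print(p1, p2)                          # 9.54e-07 vs 1.19e-07: hard to compare at a glance
print(math.log2(p1), math.log2(p2))    # -20.0 vs -23.0: a difference of exactly 3 bits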

",2193,,,,,6/14/2021 7:31,,,,0,,,,CC BY-SA 4.0 28245,1,,,6/14/2021 8:01,,2,81,"

Generative Adversarial Networks can generate realistic photos of people, such as thispersondoesnotexist.com. I wonder whether one can train an artificial intelligence on a batch of plain solo melodies (no instruments) and ask it to produce a new and similar one.

This article suggests the techniques require a lot of work and are still young:

We have explored and evaluated the generation of music using a Generative Adversarial Network as well as with an alternative method in the form of an N-gram model. Our GAN is able to capture some of the structure of single track music. We have accomplished our goal of identifying structural similarities shared across music compositions. However, the music we created lacks coherent melodies and needs improvement.

What is the state of the art in melody generation?

",47927,,2444,,6/14/2021 10:54,8/23/2021 11:31,What is the state of the art in melody generation?,,1,2,,,,CC BY-SA 4.0 28247,1,28250,,6/14/2021 10:35,,1,74,"

I have two datasets, Dataset 1(D1) and Dataset 2(D2). D1 has around 22000 samples, and D2 has around 8000 samples. What I am doing is that I train a Deep Neural Network model with around three layers on the dataset D1, which has an accuracy of around 84% (test accuracy = 82%).

Now, when I use that model to make predictions on D2 without any fine-tuning or anything, I get an accuracy of around 15% (test accuracy = 12.3%). But when I add three more layers to the pre-trained model, while keeping the three layers of the initial model (trained on D1) frozen, I get around 90% accuracy (test accuracy = 87.6%) on D2.

This tells me that, because the initial model was performing so poorly without any fine-tuning, most of the learning that led to the 90% accuracy was only because of the additional layers, not the layers that were transferred from the model trained on the D1 dataset. Is this a correct inference? And if it is, then is it still valid to call this a Transfer Learning application? Or does it have to have higher accuracy without fine-tuning to be rightly listed as a Transfer Learning problem?

",37797,,37797,,6/14/2021 16:31,6/14/2021 18:46,Would this count as a Transfer Learning approach?,,1,4,,,,CC BY-SA 4.0 28249,1,,,6/14/2021 12:45,,0,135,"

I'm trying to implement the ES-HyperNEAT algorithm using the original paper, as well as the pseudocode provided in the official user page. Occasionally, the algorithm would be unable to generate a network in the substrate. This happens when it finds no valid nodes that could connect a path between the input and output neurons.

I've noticed that this is highly dependent on how the hyperparameters (e.g., variance threshold and band threshold) were tuned.

Is my implementation correct, i.e., is this normal behavior? If so, is there a good way to ensure that a network is always generated (aside from directly connecting the input and output neurons)?

",47910,,,,,12/15/2022 3:01,How to ensure that the ES-HyperNEAT algorithm generates an ANN in the substrate?,,2,0,,,,CC BY-SA 4.0 28250,2,,28247,6/14/2021 13:59,,1,,"

This tells me that because the initial model was performing so poorly without any fine-tuning, most of the learning that led to the 90% accuracy was only because of the additional layers, not the layers that were transferred from the model trained on the D1 dataset. Is this a correct inference?

This is a possibility, but not the only one. If you were re-purposing a classifier for ImageNet classes to a specific new image type, or even the same classes but with the labels in a different order, then this large initial drop in accuracy would be expected.

The transfer learning could be helping in two different measurable ways:

  • The training for the new purpose is faster (fewer epochs required) than if done from scratch with a brand new network.

  • The end accuracy is better than could be achieved with just D2 dataset and a brand new network.

The only way to tell if either of these is the case is to compare results by using just the normal D2 features and a re-initialised copy of the original NN used to learn from D1 (by that I mean adapt your initial training script from D1 to work with D2 - changing the dataset file names and the output layer shape should be all you need to do). Look at the learning curve for this training - if it is significantly slower or ends with worse accuracy, then the transfer learning has made a difference.
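In code, that comparison could look roughly like the sketch below; the data, dimensions and architecture are made-up placeholders to be replaced with your own D2 tensors and your own network:

import torch, torch.nn as nn

# Hypothetical stand-ins for your data: replace with the real D2 features and labels.
X_d2 = torch.randn(8000, 32)
y_d2 = torch.randint(0, 2, (8000,))

def make_base_net():
    # same 3-layer architecture that was trained on D1
    return nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 2))

def train(net, epochs=5):
    opt = torch.optim.Adam(filter(lambda p: p.requires_grad, net.parameters()), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(X_d2), y_d2)
        loss.backward()
        opt.step()
        print(float(loss))   # compare these learning curves between the two settings

# Setting A: baseline, a re-initialised copy of the original architecture trained on D2 only
baseline = make_base_net()
train(baseline)

# Setting B: transfer, frozen D1 layers plus new trainable layers on top
pretrained = make_base_net()           # in practice: load the D1-trained weights here
for p in pretrained.parameters():
    p.requires_grad = False
transfer = nn.Sequential(pretrained[:-1],              # keep the frozen feature layers
                         nn.Linear(64, 32), nn.ReLU(),
                         nn.Linear(32, 2))
train(transfer)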

And if it is, then is it still valid to call this a Transfer Learning application? Or does it has to have more accuracy without fine-tuning to be rightly listed as a Transfer Learning problem.

I am not sure it matters if the results using transfer learning are worse. It is - at least in my opinion - an attempt to use transfer learning.

If transfer learning has not produced better results, then you will maybe have demonstrated that there is not enough overlap between the D1 dataset and problem and the D2 ones to justify using transfer learning in your use case.

This result may also depend on the sizes of datasets D1 and D2, even if your experiment stays in the same problem domain. The number of examples in D1 is ~3 times the size of D2, which is not much of a difference compared to transfer learning done using large pre-trained image or language models.

",1847,,1847,,6/14/2021 18:46,6/14/2021 18:46,,,,2,,,,CC BY-SA 4.0 28252,1,,,6/14/2021 17:36,,0,87,"

Because future AI may produce emergent phenomena, and because these are probably gaps in our current understanding of this, it feels like complex systems may be an increasingly important research field.

Unless there is some kind of commonality of emergent behaviour (or of aspects of complex systems more generally), it seems that future AI systems may behave in ways that are very difficult to predict. Complex-systems research on the brain could potentially help us understand artificial neural networks, but because of the diversity of the latter this may be only a loose similarity. Further, this may or may not hold for other types of AI.

The main question, at a basic educational level, is: how does complex systems research affect AI, and vice versa? Possible subquestions (whatever helps in understanding this topic better): Is there much research at the intersection of AI and complex systems (e.g. by someone like Melanie Mitchell)? In what ways might complex systems research inform AI? Is AI now helping us understand complex systems better?

",37533,,,,,6/17/2021 4:40,How is complex systems research interacting with AI research?,,0,6,,,,CC BY-SA 4.0 28253,1,,,6/14/2021 18:01,,1,44,"

I typically think of teacher forcing as optional when training an RNN. We may either:

  • use the output of time-step $t$ as the input to time-step $t+1$

  • use the $(t+1)$th input as the input to time-step $t+1$

When I actually sat down to write a bidirectional RNN from scratch today I realised it would be impossible to do without 100% teacher forcing, because each time step needs access to the "history" going back to the 0th time-step (forward direction) and going back (or forward - however you want to think of it) to the last time-step (backward direction).

Is that correct?

",16871,,2444,,12/22/2021 9:52,12/22/2021 9:52,Do bi-directional RNNs necessarily use 100% teacher forcing?,,0,0,,,,CC BY-SA 4.0 28255,1,28257,,6/14/2021 19:26,,2,70,"

In Sutton & Barto's Reinforcement Learning: An Introduction page 63 the authors introduce the optimal state value function in the expression of the optimal action-value function as follows: $q_{*}(s,a)=\mathbb{E}[R_{t+1}+\gamma v_{*}(S_{t+1})|S_{t}=s, A_{t}=a], \forall s \in S, \forall a \in A$.

I don't understand what $v_{*}(S_{t+1})$ could possibly mean since $v_{*}$ is a mapping, under the optimal policy $\pi_{*}$, from states to numbers which are expected returns starting from those states and at different time steps.

I believe that the authors use the same notation to denote the state-value function $v$ that verify $v(s)=\mathbb{E}[G_{t}|S_{t}=s], \forall s \in S$ and the random variable $\mathbb{E}[G_{t+1}|S_{t+1}]$ but I'm not sure.

",44965,,44965,,6/14/2021 19:56,6/14/2021 20:48,What does $v(S_{t+1})$ mean in the optimal state-action value function?,,1,0,,,,CC BY-SA 4.0 28256,1,,,6/14/2021 19:45,,1,257,"

In the paper "Soft Actor-Critic Algorithms and Applications", appendix C shows enforcing action bounds using the tanh squashing function which is in (-1, 1). I have action bounds in (0, 1), so can I just modify the tanh output by applying the following transformation: output = 0.5 * (tanh_output + 1). If so, do I need to change logprob formula too?

I have not seen any SAC implementation with different action bounds other than the paper's (-1, 1).

",32517,,,,,6/16/2021 5:34,How to enforce action bounds between 0 & 1 in soft actor-critic algorithm?,,1,0,,,,CC BY-SA 4.0 28257,2,,28255,6/14/2021 20:43,,2,,"

I am not sure if it is standard notation, but Sutton & Barto use the convention that a function of a random variable is a new random variable that maps values of the old variable to values of the new one using the function, without affecting the probability distribution (other than that the function could be many-to-one, hence probabilities may effectively combine, e.g. if there were several states with $v_*(s) = 5$).

Given this convention, $v_*(S_{t+1})$ is a random variable over the optimal state values of the possible states at time step $t+1$. That is, it has the same probability densities, based on the policy and state transition rules, as $S_{t+1}$, but takes the associated value $v_*(s)$ for each possible $S_{t+1}$.

The actual distribution of $v_{*}(S_{t+1})$ will vary depending on the conditions in the context where it is evaluated.

If you resolve the expectations in the first equation, which has conditions on $S_t$ and $A_t$:

$q_{*}(s,a)=\mathbb{E}[R_{t+1}+\gamma v_{*}(S_{t+1})|S_{t}=s, A_{t}=a]$

$\qquad\quad= \sum_{r,s'} p(r,s'|s,a)(r + \gamma v_*(s'))$

. . . which expresses action value $q_*(s,a)$ in terms of the state transition rules, immediate reward function and the state value $v_*(s')$ one half-step ahead. That is, at the next state, but before the next (optimal choice) action is taken.

",1847,,1847,,6/14/2021 20:48,6/14/2021 20:48,,,,7,,,,CC BY-SA 4.0 28258,1,,,6/14/2021 21:04,,1,708,"

My dataset consists of about 40,000 200x200 px grayscale images of centered blobs bathed in noise, with occasional artifacts such as stripes, other blobs of different shapes and sizes, fuzzy speckles and so on in their neighborhood. They are used in a binary classification problem, with emphasis on recall.

I read that taking the FFT of the image and the FFT of the convolutional kernel and multiplying the two produces a similar result to convolution, but at a much lower resource expense. This is probably the most straightforward article I found, if you need a more detailed description (https://medium.com/analytics-vidhya/fast-cnn-substitution-of-convolution-layers-with-fft-layers-a9ed3bfdc99a).

What I want to do, however, is simply feed the FFT of the images to a standard CNN. The reasoning is that maybe it would be easier for the network to catch on to features that it would otherwise miss or tend to weigh less. In other words, FFT as a feature engineering technique.

Would this be an idea worth trying to pursue? If so, any suggestion on which FFT components to extract (Amplitude/Phase, Real/Imaginary)?

",47938,,,,,11/7/2022 17:03,"Feeding CNN FFT of an image, a dumb idea?",,1,2,,,,CC BY-SA 4.0 28260,2,,28202,6/14/2021 23:43,,1,,"

Since my question arose from my incomprehension of $v(S_{t + 1})$ and since I got clarifications on it by Neil Slater, I thought I'd go back to this question and try to answer it again.

So I'm assuming that $v(S_{t + 1})$ is a random variable made by the composition of the state-value function $v$ and the random variable $S_{t + 1}$.

Since $v(s) = \mathbb{E}[G_{t + 1} \mid S_{t + 1} = s]$, the random variable $v(S_{t + 1})$ is $\mathbb{E}[G_{t + 1} \mid S_{t + 1}]$. Using a corollary of the total expectation theorem, we get that $\mathbb{E}[v(S_{t + 1}) \mid S_{t} = s] = \mathbb{E}[G_{t+1} \mid S_{t} = s]$. With this proved, we can conclude using the first developments of the question.
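Spelling that step out (this is the tower property / law of total expectation combined with the Markov property, not written out in the original answer):

$$\mathbb{E}\big[v(S_{t+1}) \mid S_t = s\big] = \mathbb{E}\Big[\mathbb{E}[G_{t+1} \mid S_{t+1}] \;\Big|\; S_t = s\Big] = \mathbb{E}[G_{t+1} \mid S_t = s],$$

where the last equality uses the tower property together with the Markov property, i.e. $\mathbb{E}[G_{t+1} \mid S_{t+1}, S_t] = \mathbb{E}[G_{t+1} \mid S_{t+1}]$.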

",44965,,16521,,6/15/2021 7:56,6/15/2021 7:56,,,,0,,,,CC BY-SA 4.0 28261,1,,,6/15/2021 8:44,,3,415,"

I've read Sutton and Barto's introductory RL book. They define a policy as a mapping from states to probabilities of selecting each possible action. If the agent is following policy $\pi$ at time $t$, then $\pi(a|s)$ is the probability of taking action $A_t = a$ when the current state is $S_t = s$. This definition is in the context of the Markov assumption, which is why the policy depends only on the current state.

When discussing the standard k-armed bandits problem, they write $\pi(a)$ to denote the probability of taking action $a$, since there are no states. However, when designing the agent, clearly, the agent needs to keep track of what the past rewards are for each lever, so either there is a summary statistic of each lever, or the entire history of actions and rewards must be kept.

Is the k-armed bandit problem then a MDP? Why isn't the notation $\pi(a|A_0, R_1, A_1, \ldots, R_T)$ for some sequence $A_0, R_1, A_1, \ldots, R_T$?

",45562,,2444,,6/16/2021 8:52,6/16/2021 8:52,Is the Bandit Problem an MDP?,,1,1,,,,CC BY-SA 4.0 28262,1,,,6/15/2021 9:01,,1,19,"

We're working on object detection in thermal images using neural network with Caffe framework. We use SSD ResNet-10 network available in OpenCV repository as it seems to provide the best performance on Raspberry Pi for our needs (in comparison to MobileNet etc.)

https://github.com/opencv/opencv/blob/master/samples/dnn/face_detector/solver.prototxt

train_net: "train.prototxt"
test_net: "test.prototxt"

test_iter: 2312
test_interval: 5000
test_initialization: true

base_lr: 0.01
display: 10
lr_policy: "multistep"
max_iter: 140000
stepvalue: 80000
stepvalue: 120000
gamma: 0.1
momentum: 0.9
weight_decay: 0.0005
average_loss: 500
iter_size: 1
type: "SGD"

solver_mode: GPU
random_seed: 0
debug_info: false
snapshot: 1000
snapshot_prefix: "snapshot/res10_300x300_ssd"

eval_type: "detection"
ap_version: "11point"

Train batch size is 16. Test batch size is 1.

The training process starts at a loss of 23.4728 and reaches a plateau around a loss of 1.2; the learning rate is decreased at iteration 80000 and the loss falls to 0.89. The decrease then continues very slowly up to iteration 120000, where the loss is around 0.85. Then the LR is decreased again. The process ends at iteration 140000 with a loss around 0.80 and a test evaluation around 0.90.

I noticed that selecting a different optimizer gives different results. I tried Nesterov, Adam (with fixed LR) and SGD with a different base_lr (0.05) and step size (100000). Are there any recommendations I could try other than trial & error and waiting 12 hours to compare the results? Reduce/increase the batch size? More iterations? Different step sizes?

Adam provides the worst test evaluation. SGD with base_lr reduced to 0.05 and step size 100000 seems to provide the best result so far (test eval = 0.94).

",47949,,,,,6/15/2021 9:01,Finetuning solver for Caffe neural network,,0,0,,,,CC BY-SA 4.0 28263,1,,,6/15/2021 10:02,,1,64,"

I am confused about the way LSTM networks work when forecasting with a horizon that is not finite, i.e. when I am searching for a prediction at an arbitrary time in the future. In physical terms, I would call it the evolution of the system.

Suppose I have a time series $y(t)$ (output) I want to forecast, and some external inputs $u_1(t), u_2(t),\cdots u_N(t)$ on which the series $y(t)$ depends.

It's common to use the lagged value of the output $y(t)$ as input for the network, such that I schematically have something like (let's consider for simplicity just lag 1 for the output and no lag for the external input):

$$ [y(t-1), u_1(t), u_2(t),\cdots u_N(t)] \to y(t) $$

With the network set up this way, when one wants to do a recursive forecast, it is forced to use the predicted value at the previous step as the input for the next step. This produces a propagation-of-error effect that makes the long-term forecast behave badly.

Now, my confusion is this: I think of an RNN as a kind of (simple) implementation of a state-space model, where I have the inputs, my output and one or more state variables responsible for the memory of the system. These variables are hidden and not observed.

So, now, the question: if there is this kind of variable already taking into account the previous states of the system, why would I need to use the lagged output value as an input to my network/model?

If I got rid of this, would my long-term forecast be better, since I would no longer expect the propagation of the error of the forecasted output? (I guess there would anyway be an error propagating in the internal state.)

",47904,,2444,,6/16/2021 9:36,6/16/2021 9:36,LSTM Recursive Forecast,,0,1,,,,CC BY-SA 4.0 28264,1,28271,,6/15/2021 10:39,,3,116,"

I am trying to self learn reinforcement learning. At the moment I am focusing on policy and value iteration, and I am finding several problems and doubts.

One of my main doubts comes from the fact that I can't find many diverse examples of how to implement these in Python; instead, I always find only the classical grid world example.

So, my doubt is: are policy and value iteration used only in grid-world-like scenarios, or can they also be used in other contexts?

",47888,,,,,6/15/2021 21:26,Are policy and value iteration used only in grid world like scenarios?,,1,0,,,,CC BY-SA 4.0 28265,1,,,6/15/2021 11:19,,1,74,"

I'm new to neural networks and I try to make a model that is guessing if a point is below or above relative to a function output. The idea is inspired from this video https://youtu.be/DGxIcDjPzac .

What am I doing wrong?

In the gif below I start the training but it seems that is not working. The blue line is the function (y = x + 50) and all the points above it should be green, but aren't. In order to simplify the example and to debug easier, I picked a simple function such that I can use only a perception for the model.

I also made a method backPropagationDebug(...) to display, for the points that are predicted wrongly, all the matrices at each step, but I couldn't find what's wrong.

public void backPropagation(double[][] input, double[][] expected) {
    double[][][] outputs = getOutputs(input);

    double[][] currentOutput = outputs[outputs.length - 1];
    double[][] currentError = Matrix.subtract(expected, currentOutput);

    for (int i = brain.length - 1; i >= 0; i--) {
        final double[][] layer = brain[i];
        final double[][] previousOutput = outputs[i];

        final double[][] layerTranspose = Matrix.transpose(layer);
        final double[][] previousError = Matrix.multiply(layerTranspose, currentError);

        /* FIRST BIT */
        double[][] errorSigmoid = Matrix.copyOf(currentError);

        for (int k = 0; k < errorSigmoid.length; k++) {
            errorSigmoid[k][0] *= - derivativeActivationFunction(currentOutput[k][0]);
        }

        /* SECOND BIT */
        final double[][] slopeMatrix = Matrix.multiply(errorSigmoid, Matrix.transpose(previousOutput));

        /* UPDATE THE WEIGHTS */
        for (int k = 0; k < layer.length; k++) {
            for (int l = 0; l < layer[0].length; l++) {
                layer[k][l] = layer[k][l] - learningRate * slopeMatrix[k][l];
            }
        }

        currentOutput = previousOutput;
        currentError = previousError;
    }
}

The backpropagation steps are inspired by these formulas:

(From: Make Your Own Neural Network By Tariq Rashid)

The code is on github: https://github.com/StamateValentin/Artificial-Intelligence-Playground/tree/7a7446b7faedd7673bc53a62304ff3a5180d77eb

The resources I used are in the README.md file.

",47950,,47950,,6/17/2021 14:06,6/17/2021 14:06,Backpropagation not working as expected,,0,5,,,,CC BY-SA 4.0 28266,2,,28258,6/15/2021 11:20,,0,,"

FFT is in essence a linear transformation of the input image and can be represented by the application of a convolutional filter of the same size as the image on the input.

Provided the convolutional neural network is deep enough, with a sufficient number of parameters and skip connections (in order to have a path of purely linear transformations on the input), the FFT can be represented by the learned filters. If the FFT of the image is relevant for the classification problem, the NN would most probably learn to produce it in a certain way.

For image classification problems - when the goal is to identify an instance of something - local information is crucial, and this problem is better solved in the spatial, not the frequency, domain.

However, in your case it seems like the semantics are rather trivial, and the goal is to get rid of some frequencies. Hence, working in the frequency domain is a sensible option. Possibly, you can combine the spatial and frequency representations in some way.

I think it would be simpler to work with the real and imaginary parts than with the complex magnitude and phase, since you would need to account for the periodicity of the phase in a certain way, and then in the end transform the phase to $e^{i \phi}$.
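A minimal sketch of building such an input with NumPy; whether real/imaginary channels actually help is exactly what would have to be tested, and the image here is just a random stand-in:

import numpy as np

img = np.random.rand(200, 200).astype(np.float32)    # stand-in for one grayscale blob image

f = np.fft.fftshift(np.fft.fft2(img))                 # centre the low frequencies
fft_input = np.stack([f.real, f.imag], axis=-1)       # (200, 200, 2) tensor for the CNN

# alternatively, magnitude (log-scaled, since it spans orders of magnitude) and phase:
alt_input = np.stack([np.log1p(np.abs(f)), np.angle(f)], axis=-1)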

",38846,,,,,6/15/2021 11:20,,,,0,,,,CC BY-SA 4.0 28267,1,,,6/15/2021 13:05,,2,40,"

I created a super simple NN of 1 input, 2 hidden layers of 2 neurons each and 1 output neuron as shown below.

All activations are ReLUs and the neurons don't use the bias term. What I found is that the output graph is a combination of two linear functions (one when the input is negative and another when the input is positive), kind of like this.

I think that, without the bias term, the output will be a linear function (for negative and positive inputs separately) no matter how big the network is. My question is: is this useful at all as an architecture? I assume it might be - if multiple output nodes are available, or is it? Does any of this mean that a bias term is mandatory? Just trying to get my intuition right here...

",42117,,,,,6/15/2021 13:05,Can Neural Networks using ReLU activation work without using the bias term in their neurons?,,0,0,,,,CC BY-SA 4.0 28268,1,,,6/15/2021 13:34,,1,86,"

I'm trying to understand the rationale of the various modifications the authors of the DeepLab models have made to their third version, DeepLabV3. In the paper, the following is written:

ASPP with different atrous rates effectively captures multi-scale information. However, we discover that as the sampling rate becomes larger, the number of valid filter weights (i.e., the weights that are applied to the valid feature region, instead of padded zeros) becomes smaller. This effect is illustrated in Fig. 4 when applying a 3×3 filter to a 65×65 feature map with different atrous rates. In the extreme case where the rate value is close to the feature map size, the 3×3 filter, instead of capturing the whole image context, degenerates to a simple 1×1 filter since only the center filter weight is effective. To overcome this problem and incorporate global context information to the model, we adopt image-level features, similar to [58,95]. Specifically, we apply global average pooling on the last feature map of the model, feed the resulting image-level features to a 1×1 convolution with 256 filters (and batch normalization [38]), and then bilinearly upsample the feature to the desired spatial dimension.

I do not understand how global pooling solves this problem. Is it simply because it does not suffer from the same issue of ASPP (the degeneration of the weights), and serves as an alternative?

From: Chen, L. C., Papandreou, G., Schroff, F., & Adam, H. (2017). Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587.

",42816,,,,,6/15/2021 13:34,DeepLabV3: Why use global average pooling in the ASPP module?,,0,0,,,,CC BY-SA 4.0 28269,2,,28261,6/15/2021 13:42,,1,,"

The bandit problem is an MDP. You can make the same argument about needing data to learn in the stateful MDP setting. The thing is, the data you need (the past rewards in this case) was drawn iid (conditioned on the arm) and is not actually a trajectory. For instance, once you learn an optimal policy, you no longer need to gather data and the sequence of past results doesn't influence your policy.

",37829,,,,,6/15/2021 13:42,,,,2,,,,CC BY-SA 4.0 28271,2,,28264,6/15/2021 15:18,,2,,"

Policy and value iteration both require you to, for each possible transition and each corresponding possible reward at each state, compute a statistic of $r + \gamma V(s')$. In order for this to be tractable, you need for there to be at most finitely many states, actions, possible rewards, and possible transitions at each state. You also need to know the transition model. This is the case in gridworld.

Gridworld is not the only example of an MDP that can be solved with policy or value iteration, but all other examples must have finite (and small enough) state and action spaces. For example, take any MDP with a known model and bounded state and action spaces of fairly low dimension. Then you can approximate the state and action spaces with a finite number of bins, each corresponding to its own "discretized state/action". With smooth enough dynamics and enough bins, you'll be able to solve the MDP with policy/value iteration on the discretized spaces.
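As a minimal illustration of how little value iteration actually needs (a made-up 3-state MDP with random transition and reward tables, not gridworld):

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s']
R = np.random.rand(n_states, n_actions)                                   # expected reward r(s, a)

V = np.zeros(n_states)
for _ in range(1000):
    Q = R + gamma * P @ V              # Q[s, a] = r(s, a) + gamma * sum_s' P[s, a, s'] V[s']
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)
print(V, policy)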

In many interesting RL problems though,

  1. You don't know the transition model, and/or
  2. The state space, action space, and/or reward space are too large

In these cases you wouldn't be able to compute the value function exactly, so you can't really do policy/value iteration. However, in most value based RL algorithms, the policy evaluation / policy improvement steps are approximated using sample transitions and function approximators.

",37829,,37829,,6/15/2021 21:26,6/15/2021 21:26,,,,2,,,,CC BY-SA 4.0 28272,1,,,6/15/2021 15:53,,4,341,"

There are several images related to convolutional networks on the Internet, an example of which I have given below

My question is: are these images the weights/filters of the convolution layer (the weights that are learned in the learning process), or the convolved images of the previous layer's image with the filters of the current layer?

image source: https://stats.stackexchange.com/questions/362988/in-cnn-do-we-have-learn-kernel-values-at-every-convolution-layer

",23216,,23216,,6/16/2021 18:23,6/16/2021 19:02,Are these visualisations the filters of the convolution layer or the convolved images with the filters?,,1,0,,,,CC BY-SA 4.0 28273,2,,28272,6/15/2021 19:20,,3,,"

Only the first convolutional layer, with filters that process the input [colour] channels directly, can be rendered directly as image patches in the same domain as the input. The left-most panel in your example looks like that.

Further layers of the neural network cannot be rendered like this for two reasons:

  • They have a number of input channels based on the previous layer's output, for example they may process 32 or 128 channels. There is no simple mapping of these channels to colours.

  • They respond to a wider range of input stimuli than any single image patch. If you tried to render out all inputs that they respond to then you would likely get an indistinct-looking grey blob, if anything at all. This is different to the first layer which does directly react pixel-by-pixel according to the weights.

What is typically done to render patches like the middle and right-most panels in your example, is to find sample patches that trigger a strong response from that filter. This search can be done using gradient ascent - start with noise, then take gradient steps in the direction of increasing the signal for that filter. This is also the basis of "Deep Dream" images, which instead of doing that for small patches, apply it to whole images, and many filters at once.
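A minimal sketch of that gradient-ascent search in PyTorch; the model (random weights here, just to keep it self-contained), layer index, channel, patch size and step count are all arbitrary placeholders:

import torch
import torchvision

model = torchvision.models.vgg16().features.eval()   # stand-in conv stack; use your own trained model
layer, channel = 10, 3                                # which conv layer / filter to visualise

img = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from noise
opt = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    opt.zero_grad()
    x = img
    for i, m in enumerate(model):                     # forward only up to the chosen layer
        x = m(x)
        if i == layer:
            break
    loss = -x[0, channel].mean()                      # ascend the mean activation of one filter
    loss.backward()
    opt.step()
# `img` now approximates an input patch that strongly excites that filter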

",1847,,1847,,6/16/2021 19:02,6/16/2021 19:02,,,,0,,,,CC BY-SA 4.0 28274,2,,28235,6/16/2021 5:25,,1,,"

Loss function is a function used to measure the loss. It is not used in any component of a neuron. It is used in updating the weights of the neuron i.e., in order to train the neuron.

The contribution of a loss function is in the update of $\bar{W}$.

For a given $\bar{X}$ and $\bar{W}$, the neuron gives a post-activation value $h$. But the desired output may not be exactly $h$. The distance measure between the desired value (say $h^\prime$) and the post-activation value $h$ is called the loss of the perceptron for $\bar{X}$.

We want to decrease the loss of the perceptron, i.e., $\left\vert h - h^\prime \right\vert$. The only thing we can do is change $\bar{W}$, the parameters of the perceptron model given in the question. In order to update the weights, we have to calculate the derivative of the loss function with respect to the weights of the model and then update the weights using some update rule.

Thus, the loss function will come into play during the training phase of the neuron.

To be concise: currently you are dealing with the neuron and its components. Loss functions for training the neuron will come later.

",18758,,16521,,6/16/2021 5:53,6/16/2021 5:53,,,,0,,,,CC BY-SA 4.0 28276,2,,28256,6/16/2021 5:34,,1,,"

Yes you can map the output onto [0,1] as you indicate. You should treat this as a modification to the environment. I.e. imagine that the environment takes actions in [-1, 1] instead of [0,1]. No you don't need to change any equations.
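Concretely, a thin wrapper around the environment keeps the agent (and the SAC equations) untouched; this is just a sketch assuming a gym-style environment whose real action range is [0, 1]:

class RescaleAction:
    """Wraps an environment so the agent can keep emitting tanh-squashed actions in [-1, 1]."""
    def __init__(self, env):
        self.env = env

    def step(self, action):
        # affine map from the agent's [-1, 1] range to the environment's [0, 1] range
        return self.env.step(0.5 * (action + 1.0))

    def reset(self):
        return self.env.reset()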

",47080,,,,,6/16/2021 5:34,,,,0,,,,CC BY-SA 4.0 28282,2,,28235,6/16/2021 10:43,,1,,"

The loss function is simply a way to measure how wrong a neural network is, it doesn't affect the output of the neuron.

Say we have a neural network with 3 output neurons that attempts to classify images of cats, dogs, and humans. The output it gives is the confidence of the neural network's classification. For example if the output is [0, 0.2, 0.8] (0 being the output of the 1st neuron, 0.2 of the 2nd, and 0.8 of the 3rd), this means that the neural network thinks that the image has 0% probability of being a cat, 20% of being a dog and 80% of being a human.

Imagine that the image shown to the network is of a human; we can say that the target values are [0, 0, 1] because we want it to output that the image is a human with 100% confidence. Now we must measure how wrong the prediction actually was using a loss function. There are many loss functions, but for simplicity I'll use the squared error. In this case the loss will be equal to (1-0.8)^2 = 0.04, i.e. (expected value - output)^2.

The closer the output is to 1, the result inside of the brackets will be closer to 0, so the loss will be smaller. The objective is to minimize this loss function. For example, if the output was 1 instead of 0.8, the network's loss will be (1-1)^2 = 0. If the output was 0.2 instead, the loss would be (1-0.2)^2 = 0.64, which is larger than the two previous, as it is 'more wrong'.

To train the network we use this instead of accuracy for the following reason. With both these outputs [0, 0.1, 0.9], [0.2, 0.3, 0.5] the network predicts 'human', the largest value, but in the first case it is 90% sure whereas in the second it is only 50% sure. We can say that the first network is better, but if we only used accuracy, as both predict the same, they would appear to be just as good.

The same happens when they make a mistake. If the expected values are [0, 1, 0] and one model predicts [0.5, 0.4, 0.1] and the other predicts [0.9, 0, 0.1], they both got it wrong, but the first one was less wrong. The first loss would be (1-0.4)^2 = 0.36 and the second would be (1-0)^2 = 1, which is much higher

",47974,,,,,6/16/2021 10:43,,,,1,,,,CC BY-SA 4.0 28283,2,,28235,6/16/2021 12:01,,1,,"

Assume we have a binary classification problem that we want to solve with a simple single-layer perceptron. For a 2D space, a perceptron will have two inputs $x_1$ and $x_2$, and a bias denoted $x_0$, which is always $x_0=1$. It also has corresponding learnable weights $w_0$, $w_1$ and $w_2$.

This can be vectorized:

$$ \overline{x} = \begin{bmatrix} 1 \\ x_1 \\ x_2 \end{bmatrix}; \ \overline{w} = \begin{bmatrix} w_0 \\ w_1 \\ w_2 \end{bmatrix} $$

Then the pre-activation value $a = \overline{w}\cdot\overline{x}$ is nothing more than a linear equation and can be unfolded to $a = w_0 + w_1x_1+w_2x_2$.

In the simplest case $h$ can be defined as follows:

$$h = \begin{cases} \text{class 1} & \text{if $a>0$}\\ \text{class 2} & \text{otherwise} \end{cases} $$

or it can also be defined as $h=\text{sign}(\overline{w}\cdot\overline{x})$, which means that samples above the line belong to class 1.

Since our $h$ is not continuous and not differentiable, we cannot apply gradient descent optimization. Instead, the weights can be optimized iteratively:
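For reference (the rule itself is not written out here; this is one common form, assuming labels $y \in \{-1, +1\}$), the perceptron update for a misclassified sample $(\overline{x}, y)$ is

$$ \overline{w} \leftarrow \overline{w} + \eta \, y \, \overline{x}, $$

where $\eta$ is the learning rate.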

for the details, you can refer to Wikipedia

So answering your question, for this particular case, there is no loss function. Optimization of a loss function usually involves SGD which requires the function to be continuous and differentiable. For instance, if $h$ is defined as a sigmoid function, you can use binary cross-entropy as a loss function.

There are also other optimization approaches, such as genetic algorithm, expectation-maximization and hill-climbing that do not require a loss function.

",12841,,12841,,6/16/2021 13:09,6/16/2021 13:09,,,,0,,,,CC BY-SA 4.0 28284,1,,,6/16/2021 12:22,,0,15,"

I have trained a CNN to detect a circle and approximate its centre and radius in an image. What I want to do now is detect the centres and radii of all the circles if there are multiple circles present in an image.

How do I go about it? Do I have to make changes to my dataset to be able to do so? I looked at different architectures that do multiple object detection, but I couldn't understand what changes I could make to my own architecture.

",37797,,,,,6/16/2021 12:22,How to change a single object detection network to a multiple object detection network?,,0,3,,,,CC BY-SA 4.0 28285,1,,,6/16/2021 12:54,,0,11,"

I have trained a posture analysis network to classify, in videos of humans recorded in public places, whether there is a) a handshake between two humans, b) two humans standing so close together that their hands touch each other, but without a handshake, or c) no interaction at all. There are multiple labels to identify different parts of a human. The labelling is done to train the network to spot handshakes in a large dataset of videos of humans recorded in public. As you can guess, this leads to an imbalanced dataset. To train, I sampled data such that 60% of my input contained handshake images and the rest contained images other than handshakes. In this network, we are not looking at just the labels, but also at the relative position of the individual labels with respect to one another. We have an algorithm that can then classify them into the three classes.

I am stuck on how to evaluate the performance of this network. I have a large dataset and it is not labeled. So I have decided to pick 25 samples from classes a) and b) and 50 from class c) to create a small dataset to show the performance of the network, and also to run the network on the large (unlabeled) dataset; because classes a) and b) are quite rare events, I would then be able to individually assess the accuracy of the network's predictions in terms of true-positive and false-positive cases.

Is this a sound way to evaluate? Can anyone with experience or an opinion on this share their input? How else can I evaluate this?

",47387,,,,,6/16/2021 12:54,Evaluating a convolutional neural network on an imbalanced (academic) dataset,,0,2,,,,CC BY-SA 4.0 28286,2,,28217,6/16/2021 16:06,,1,,"

I would like to add details to Oliver's answer.

From the book "Pattern Recognition and Machine Learning" by Bishop (Section 1.2.5):

In practice, it is more convenient to maximize the log of the likelihood function. Because the logarithm is a monotonically increasing function of its argument, maximization of the log of a function is equivalent to maximization of the function itself. Taking the log not only simplifies the subsequent mathematical analysis, but it also helps numerically because the product of a large number of small probabilities can easily underflow the numerical precision of the computer, and this is resolved by computing instead the sum of the log probabilities.

That is, $\log$ is monotonically increasing and hence preserves the order and the locations of the extrema. For instance, if $p(x) \geq p(y)$ then $\log\big(p(x)\big) \geq \log\big(p(y)\big)$ also holds. Therefore, maximizing the likelihood is equivalent to maximizing the log-likelihood.

Furthermore, it is extremely useful when calculating joint probabilities since a product can be replaced by a sum:

$$ \log \left(\prod_i P(x_i)\right) = \sum_i \log \left( P(x_i)\right) $$

This also makes calculation numerically stable and it is much easier to take a derivative of a sum of logarithms rather than to take a derivative of a product.
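A tiny numerical illustration of the underflow point, with made-up per-sample likelihoods:

import numpy as np

p = np.full(1000, 1e-5)        # 1000 small per-sample likelihoods
print(np.prod(p))               # 0.0: the product underflows in float64
print(np.sum(np.log(p)))        # ~ -11512.9: the log-likelihood is perfectly representable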

",12841,,,,,6/16/2021 16:06,,,,0,,,,CC BY-SA 4.0 28288,1,,,6/16/2021 23:06,,3,2421,"

I am studying logistic regression for binary classification.

The loss function used is cross-entropy. For a given input $x$, if our model outputs $\hat{y}$ instead of $y$, the loss is given by $$\text{L}_{\text{CE}}(y,\hat{y}) = -[y \log \hat{y} + (1 - y) \log (1 - \hat{y})]$$

Suppose there are $m$ such training examples, then the overall total loss function $\text{TL}_{\text{CE}}$ is given by

$$\text{TL}_{\text{CE}} = \dfrac{1}{m} \sum\limits_{i = 1}^{m} \text{L}_{\text{CE}} (y_i , \hat{y_i}) $$

It is said that the loss function is convex. That is, if I draw a graph of the loss values with respect to the corresponding weights, then the curve will be convex. The material from the textbook did not give any explanation regarding the convex nature of the cross-entropy loss function. You can observe this from the following passage.

For logistic regression, this (cross-entropy) loss function is conveniently convex. A convex function has just one minimum; there are no local minima to get stuck in, so gradient descent starting from any point is guaranteed to find the minimum. (By contrast, the loss for multi-layer neural networks is non-convex, and gradient descent may get stuck in local minima for neural network training and never find the global optimum.)

How did they so conveniently conclude that the loss function is convex? Is it by plotting or by some other means?

",18758,,18758,,12/21/2021 23:37,5/19/2022 13:34,"In logistic regression, why is the binary cross-entropy loss function convex?",,3,0,,,,CC BY-SA 4.0 28289,1,,,6/17/2021 3:58,,1,49,"

Consider the following excerpt from section 5.5 Regularization (p. 13) of this chapter Logistic Regression.

There is a problem with learning weights that make the model perfectly match the training data. If a feature is perfectly predictive of the outcome because it happens to only occur in one class, it will be assigned a very high weight. The weights for features will attempt to perfectly fit details of the training set, in fact too perfectly, modeling noisy factors that just accidentally correlate with the class. This problem is called overfitting.

What are the 'noisy factors' here? Does it refer to the features that are irrelevant to the class label?

Or does it mean the noise/errors in the values taken by features that accidentally correlate with the class label?

",18758,,2444,,6/18/2021 14:19,6/18/2021 14:19,What are the 'noisy factors' leading to overfitting?,,1,0,,,,CC BY-SA 4.0 28291,1,,,6/17/2021 8:01,,0,147,"

I found this article quite useful on how to shape a reward function in RL. However, the example they give is quite simple, where the goal is to minimize only two quantities (velocity and distance).

How would you formulate the reward function if you had, for instance, 4 quantities to optimize?

",43047,,2444,,6/17/2021 23:50,6/17/2021 23:50,How would you shape a reward function if there was four quantities to optimize?,,1,4,,,,CC BY-SA 4.0 28292,1,,,6/17/2021 11:38,,0,38,"

I'm starting a project where I want to extract keywords from given messages. The keywords are, for example, things like "hard disk", "watch" or other technical components. I'm working with a dataset where a technician wrote a small text whenever he performed maintenance on a given object.

The messages are often very different in their form. For example sometimes the messages start with the repaired object and sometimes with the current date.

I looked into some NER libraries and it doesn't seem like they can handle tasks like that. The German language especially makes it hard for those libraries to detect entities.

I had the idea to use CRFsuite to train my own NER model, but I'm not sure how accurate the outcome will be. It would mean that I have to tag a lot of training data, and I'm not sure whether the outcome will be worth the time I have to spend tagging those keywords.

Does anybody have any experience with such custom NER models? How accurately can such a model extract the wanted keywords?

",48000,,2193,,6/17/2021 13:33,11/9/2022 19:05,Extracting keywords from messages,,1,1,,,,CC BY-SA 4.0 28293,1,,,6/17/2021 11:50,,1,25,"

Is it possible to consider an RNN as a classical feedforward neural network that just take the precedent output as a part of the input ?

",48003,,,,,7/12/2022 16:04,Can we modelize an RNN by an ANN that takes precedent output as a part of input?,,1,0,,,,CC BY-SA 4.0 28294,2,,28293,6/17/2021 12:11,,1,,"

Almost. I think that to match the common interpretation of an RNN you need to also have a new input at each time-step (whereas you used the word "just" suggesting otherwise).

What you're describing is some function $f_1: \mathbb{R}^n \to \mathbb{R}^n$ whereas the "conventional" RNN is more like $f_2: \mathbb{R}^m \times \mathbb{R}^n \to \mathbb{R}^n$, where $\mathbb{R}^m$ refers to the new information at each time step.

Of course, this is all an exercise in interpreting semantics. So the other answer to your question is: "if you want it to"

",16871,,,,,6/17/2021 12:11,,,,0,,,,CC BY-SA 4.0 28296,1,,,6/17/2021 12:47,,1,36,"

I am trying to mathematically characterize the finite sample convergence rates for Q-learning. To this end, I have read the following papers

In the latter, they introduce a rather simple approach that seems appealing to me; however, they only sketch it for phased Q-learning.

I would be interested in knowing about any source where I could find the same approach modified for standard Q-learning; in section 4 of the paper, they claim that

all of our results can be generalized to apply to standard Q-learning

Moreover, if you feel I am missing any paper that could be of interest with regards to the finite sample convergence of Q-learning, I would greatly appreciate it if you could post the name of it.

",48006,,2444,,6/23/2021 0:35,6/23/2021 0:35,"Is there any work that applies the approach in ""Finite-Sample Convergence Rates for Q-Learning and Indirect Algorithms"" to standard Q-learning?",,0,0,,,,CC BY-SA 4.0 28297,1,,,6/17/2021 12:53,,0,649,"

Don't need a complete solution just some guidance on how to solve it.

Consider that a person has never been to the city airport. It's early in the morning and assumes that no other person is awake in the town who can guide him on the way. He has to drive in his car but doesn’t know the way to the airport. Clearly identify the four components of problem-solving in the above statement, i.e. problem statement, operators, solution space, and goal state. Should he follow a blind or heuristic search strategy? Try to model the problem in a graphical representation.

",48007,,2444,,6/17/2021 23:31,6/18/2021 4:22,What is the possible solution to the Problem?,,1,8,,6/18/2021 14:04,,CC BY-SA 4.0 28298,2,,28291,6/17/2021 12:57,,1,,"

Here is how I managed to construct a reward function in one of my projects, where I trained an RL model for a self-driving robot that has only a single camera to navigate through a tunnel:

$$ R = \left\{ \begin{array}{ll} d_m - 3 - \left| d_l - d_r \right| & \text{if not terminal state} \\ -100 & \text{otherwise} \end{array} \right. $$

where $d_m$ is the middle distance, $d_l$ is the left distance and $d_r$ is the right distance, and $d_m, d_l, d_r \in [0, 10]$. To get this information, the agent has a laser sensor. The agent does not directly observe these distances. Instead, it gets a real number as the reward signal that indicates how good the agent is performing an action and tries to map the camera view to it. This function is designed so that the agent should stay in the middle of the tunnel, $-\left|d_l - d_r\right|$, and has to avoid head-on collisions, $d_m - 3$. Thus, the highest possible reward is $R = 10 - 3 - \left| 10 - 10 \right| = 7$
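Written out directly from the expression above (a sketch, with the distances passed in explicitly):

def reward(d_m, d_l, d_r, terminal):
    """Reward for the tunnel-driving robot: stay centred, avoid head-on collisions."""
    if terminal:
        return -100.0
    return d_m - 3.0 - abs(d_l - d_r)

# best case: far from the wall ahead and perfectly centred
print(reward(10, 10, 10, terminal=False))   # 7.0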

Basically, you can use any number of parameters in your reward function as long as it accurately reflects the goal the agent needs to achieve. For instance, I could penalize the agent for frequent steering on straight sections so that it drives smoothly.

",12841,,12841,,6/17/2021 13:07,6/17/2021 13:07,,,,1,,,,CC BY-SA 4.0 28300,2,,28292,6/17/2021 13:39,,0,,"

I don't know that NER is the right approach here. It seems to me that you want to find words for certain technical components in free texts, written in German. But "Festplatte" is not a named entity. Named entities (cities, companies, countries, etc) in English (and other languages) are usually capitalised, so relatively easy to spot. In German this won't work as every noun is capitalised, named entity or not. But even in English a NER wouldn't help you with "hard disk", as it's not what is commonly understood as a named entity.

It's not really an AI solution, but I would get a list of relevant components (eg from a dictionary), and then simply match those in texts. Instead of annotating existing texts, you simply add the words to a list, which would be a bit quicker, generally. And list lookup is very easy to implement.

This, I think, would work a lot better than a machine learning approach. If you find that your technicians often mis-spell the words, use a fuzzy matching algorithm such as Levenshtein distance to allow for close matches; this might also help with inflections.
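A minimal sketch of that list-lookup-plus-fuzzy-matching idea in Python; the component list, threshold and example message are all made up:

import difflib

components = ["festplatte", "lüfter", "netzteil", "mainboard"]   # your dictionary of parts

def find_components(text, cutoff=0.8):
    """Return dictionary entries that closely match a token in the text."""
    hits = set()
    for token in text.lower().split():
        match = difflib.get_close_matches(token, components, n=1, cutoff=cutoff)
        if match:
            hits.add(match[0])
    return hits

print(find_components("Festplate getauscht und Lüfter gereinigt"))  # matches 'festplatte' and 'lüfter'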

",2193,,,,,6/17/2021 13:39,,,,0,,,,CC BY-SA 4.0 28301,1,,,6/17/2021 14:02,,0,342,"

I am training GAN on SVHN dataset (house numbers in Google Street View images, dimensions: 3x32x32 - 3 color channels).

The problem is that it performs worse after some training (e.g. after 50 epochs) than after only 2. Could you please check out my code? Maybe you will be able to notice what can I improve.

I have already tweaked betas in ADAM optimizer (it helped a little bit, because before that, with default settings, d_loss went to 0 after 5 epochs). I also added an extra discriminator training step.

You can find the code below:

import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import transforms
from torchvision.datasets import SVHN
from torchvision.utils import make_grid
from torch_snippets import Report, show  # assumed: logging/plotting helpers used below

device = 'cuda' if torch.cuda.is_available() else 'cpu'

batch_size=128

# normalize the input data with mean 0.5 and std 0.5 (pixel values end up in [-1, 1]):
transform = transforms.Compose([
                transforms.ToTensor(),
                transforms.Normalize(mean=(0.5,), std=(0.5,)),
                transforms.Lambda(lambda x: x.view(-1))])


# flattening for the loader (view) or a convolutional alternative
# lambda
train_dataset = SVHN(root=".", split='train', download=True,
    transform=transform) 

test_dataset = SVHN(root=".", split='test', download=True,
    transform=transform) 

train_loader = DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=True, drop_last=True, num_workers=0)
test_loader = DataLoader(dataset=test_dataset, batch_size=batch_size, shuffle=True, drop_last=True, num_workers=0)

# Initialize random noise

def noise(size):
    n = torch.randn(size, 100)
    return n.to(device)

# Define the generator model

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(
                                # take a 100-dimensional input (random noise)
                                nn.Linear(100, 256),
                                nn.LeakyReLU(0.2),
                                nn.Linear(256, 512),
                                nn.LeakyReLU(0.2),
                                nn.Linear(512, 1024),
                                nn.LeakyReLU(0.2),
                                nn.Linear(1024, 3*32*32),
                                nn.Tanh()

                            )

    def forward(self, x): return self.model(x)

# Define the discriminator model

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential( 
                                nn.Linear(3*32*32, 1024),
                                nn.LeakyReLU(0.2),
                                nn.Dropout(0.3),
                                nn.Linear(1024, 512),
                                nn.LeakyReLU(0.2),
                                nn.Dropout(0.3),
                                nn.Linear(512, 256),
                                nn.LeakyReLU(0.2),
                                nn.Dropout(0.3),
                                nn.Linear(256, 1),
                                nn.Sigmoid()

                            )
    def forward(self, x): return self.model(x)

# define generator training - input is fake data
def generator_train_step(fake_data):
    # reset the gradient so the parameters will update correctly
    g_optimizer.zero_grad()

    # predict the output of the discriminator on fake data 
    prediction_fake = discriminator(fake_data)

    # torch.ones, because we want 1s outputted by the discriminator when training the generator
    error = loss(prediction_fake, torch.ones(len(real_data), 1).to(device))
    error.backward()

    g_optimizer.step()
    return error

# define discriminator training:

# discriminator as an input takes real data and fake data
def discriminator_train_step(real_data, fake_data):
    # reset the gradient so the parameters will update correctly
    d_optimizer.zero_grad()
  
    #print(real_data.shape)
    #print(fake_data.shape)
  
    prediction_real = discriminator(real_data)

    # calculate loss on real data (expected 1, so torch.ones)
    real_loss = loss(prediction_real, torch.ones(len(real_data),1).to(device))
    real_loss.backward(retain_graph=True)

    prediction_fake = discriminator(fake_data)
  
    # calculate loss on fake data (expected 0, so torch.zeros)
    fake_loss = loss(prediction_fake, torch.zeros(len(fake_data), 1).to(device))
    fake_loss.backward(retain_graph=True)

    d_optimizer.step()
    return real_loss + fake_loss

lr = 1e-3

discriminator = Discriminator().to(device)
generator = Generator().to(device)
d_optimizer = optim.Adam(discriminator.parameters(), lr=lr,  betas=(0.5, 0.999))
g_optimizer = optim.Adam(generator.parameters(), lr=lr,  betas=(0.4, 0.9))
loss = nn.BCELoss()
num_epochs = 50
log = Report(num_epochs)

for epoch in range(num_epochs):
    N = len(train_loader)
    for i, (images, _) in enumerate(train_loader):
        real_data = images.view(len(images), -1).to(device)
        fake_data = generator(noise(len(real_data))).to(device)
        fake_data = fake_data.detach()
        d_loss = discriminator_train_step(real_data, fake_data)
        fake_data = generator(noise(len(real_data))).to(device)
        
        d_loss = discriminator_train_step(real_data, fake_data)
        
        g_loss = generator_train_step(fake_data)
        
        log.record(epoch+(1+i)/N, d_loss=d_loss.item(), g_loss=g_loss.item(), end='\r')
    log.report_avgs(epoch+1)
    z = torch.randn(10, 100).to(device)
    # 10 pictures of dimensions of 3x32x32:
    sample_images = generator(z).data.cpu().view(10, 3, 32, 32)
    grid = make_grid(sample_images, nrow=4, normalize=True)
    show(grid.cpu().detach().permute(1,2,0), sz=10)
log.plot_epochs(['d_loss', 'g_loss'])

In case you would like to see the results of training this GAN, I enclose an example generated images after epoch 2:

EPOCH: 2.000    d_loss: 4.936   g_loss: 1.919   (130.46s - 3131.14s remaining)))

and after epoch 50:

EPOCH: 50.000   d_loss: 1.253   g_loss: 1.038   (3487.47s - 0.00s remaining))

And here is the plot of discriminator loss and generator loss:

",48010,,48010,,6/17/2021 15:37,6/17/2021 15:37,GAN performs worse after 50 epochs than after 2,,0,3,,,,CC BY-SA 4.0 28302,1,28305,,6/17/2021 14:05,,1,33,"

As English is not my native language, I have some hard time understanding the following sentence:

Regardless of the size or aspect ratio of the candidate region, we warp all pixels in a tight bounding box around it to the required size. Prior to warping, we dilate the tight bounding box so that at the warped size there are exactly p pixels of warped image context around the original box (we use p = 16).

This is from the R-CNN paper. I already extracted the ROI, but now, they say that the input of the CNN should be 227 x 227, but a lot of my ROIs are much smaller. How can I deal with it?

",40730,,2444,,6/21/2021 23:23,6/21/2021 23:23,What to do when the ROIs are smaller than $227 \times 227$ in R-CNN?,,1,1,,,,CC BY-SA 4.0 28304,2,,28289,6/17/2021 14:12,,1,,"

Please note:

  • I am only referring to the decision boundary as a line for simplicity; more often than not it is a hyperplane, which is difficult to visualize and spans n dimensions, where n is the dimensionality of your feature space.
  • The explanation is phrased in a general way to emphasize explainability.

Answer

What are the 'noisy factors' here? Does it refer to the features that are irrelevant to the class label?

  • Not Necessarily.

Or does it mean the noise/errors in the values taken by features that accidentally correlate with the class label?

  • I'm not quite sure I understand.

However

Noisy factors, as the literature puts it, are outliers in a finite data class. Imagine a dataset where we are asked to calculate the average of a set of numbers. Let's say the numbers are the set S = {2, 2, 2, 2, 2, 2, 2, 1000}. The mean value in this case would be 2 if it weren't for the 1000 at the end.

  • 1000 could be an outlier when you are trying to approximate the mean of the set with some algorithm. The algorithm is more likely to encounter a 2 in the unseen test set than the 1000 it encountered once in a training set.
  • When an algorithm like linear regression "learns" something, it is actually learning the weights and intercepts which modify the position of the line in the decision space.
  • The modifications are translation (additive operations) and rotation (multiplicative operations) of the said line. These correspond to the intercept ("bias") and the slope ("weight") respectively, in the case of a simple line Y = mx + c, where m is the slope and c is the intercept.
  • The idea behind noise in the above text is this: when an algorithm is trying to "learn" these weights, you want it to ignore the outliers ("generalization"), but you also do not want it to ignore the feature data completely: some "specialization" is still needed.
  • How well the line is placed in your feature space is what determines your Classification Effectiveness.
  • In an ideal world you would want your decision boundary to ignore such outliers which are called noise in the above literature.

Pictures (because everyone likes them)

  • In the picture below, the left image shows a good decision boundary and the right image shows an overfitted decision boundary.
  • In the left-image case you accept a few misclassifications in exchange for better generalizability (consider that you are more likely to see an x in the 2nd quadrant, and the o was perhaps an outlier). You do not fit your training set completely with 100% accuracy.

TL;DR

You want the model to learn that the x(s) are on the right and the o(s) are on the left but not so specifically as to which x was where. The x in the 4th quadrant and the o in the second quadrant are noisy factors.

",36765,,36765,,6/17/2021 14:19,6/17/2021 14:19,,,,0,,,,CC BY-SA 4.0 28305,2,,28302,6/17/2021 14:16,,2,,"

It is actually very easy; I didn't know the solution would be that simple. With OpenCV, you can do:

resized_roi = cv2.resize(roi, (227, 227)) 

and that's it :]
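If you also want the p = 16 pixels of context mentioned in the question, a rough sketch of dilating the box before warping could look like the following (the box format and the clamping to the image bounds are my own assumptions, not taken from the R-CNN code):

import cv2

def warp_with_context(image, box, out_size=227, p=16):
    # box = (x1, y1, x2, y2) in image coordinates (a hypothetical format for illustration)
    x1, y1, x2, y2 = box
    # scale so that p pixels of the warped output correspond to context around the original box
    sx = (x2 - x1) / (out_size - 2 * p)
    sy = (y2 - y1) / (out_size - 2 * p)
    # dilate the box and clip it to the image bounds
    x1 = max(int(x1 - p * sx), 0)
    y1 = max(int(y1 - p * sy), 0)
    x2 = min(int(x2 + p * sx), image.shape[1])
    y2 = min(int(y2 + p * sy), image.shape[0])
    return cv2.resize(image[y1:y2, x1:x2], (out_size, out_size))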

",40730,,,,,6/17/2021 14:16,,,,0,,,,CC BY-SA 4.0 28315,2,,28297,6/18/2021 4:22,,1,,"

Here is my own solution to the problem.

The problem can be modeled in graph terms as follows:

A person has to reach from one node of a graph to another, and the current distance from the goal is not known.

Problem statement: Find a path between two nodes of a graph where the edge weights or any other kind of heuristic is unknown.

Operator: An operator here is driving from one crossing/landmark to the next. In graph terms, this corresponds to traversing an edge between two nodes.

Solution space: A solution to this problem is a path from the initial point to the airport. Graphically, this means all possible paths from the starting node to the goal node, irrespective of the distance.

Goal state: The goal state is the state upon reaching which, the algorithm may stop and report success. Here, the airport is the goal state.

The person has never been to the airport before, so he doesn't know how close any location is to the airport. In other words, there are no heuristic values to help the man. Thus, the man should opt for a blind search strategy (such as BFS or DFS).
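As a small illustration of such a blind strategy (the graph and the node names below are made up, not part of the problem statement), a breadth-first search over an unweighted graph could look like this:

from collections import deque

def bfs_path(graph, start, goal):
    # graph: dict mapping each node to the list of its neighbours (no edge weights, no heuristic)
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None  # no path exists

# hypothetical city map: crossings/landmarks as nodes
city = {"home": ["market", "school"], "market": ["airport"], "school": ["market"]}
print(bfs_path(city, "home", "airport"))  # ['home', 'market', 'airport']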

",48007,,,,,6/18/2021 4:22,,,,1,,,,CC BY-SA 4.0 28316,1,,,6/18/2021 10:37,,1,212,"

I am new to image processing. I am trying to understand CNNs from this blog post. Here's an excerpt from that article that mentions these terms.

A ConvNet is able to successfully capture the Spatial and Temporal dependencies in an image through the application of relevant filters. The architecture performs a better fitting to the image dataset due to the reduction in the number of parameters involved and reusability of weights.

I am not able to understand the terms spatial and temporal, and their respective dependencies in images. I have encountered the terms spatial and temporal many times. However, I am still not able to understand how the concepts of space (spatial) and time (temporal) map to an image.

(By the way, in the quote above, what does the term "reusability of weights" mean?)

",48029,,2444,,6/21/2021 10:22,6/21/2021 10:22,"What do ""spatial"" and ""temporal"" mean in the context of image processing?",,0,0,,,,CC BY-SA 4.0 28317,1,,,6/18/2021 11:10,,1,37,"

I'm familiar with Auto-Encoders and I'm about to dive into CNNs. By having a look at the most important component of a CNN, the filter:

I wonder how it is different from Auto-Encoders:

For me, it looks conceptually the same. I would even say both are instances of another, higher-level concept: dimensionality reduction. In PCA, for example, as well as in AEs and in CNNs(?), you transform higher-dimensional data into lower-dimensional / compressed data. I can see that the methods are somehow different, but I can't really explain in which way.

",26353,,2444,,6/18/2021 14:25,6/18/2021 14:25,What is the conceptual difference between convolutional neural networks and auto-encoders?,,0,2,,,,CC BY-SA 4.0 28319,2,,11285,6/18/2021 14:25,,1,,"

To give a statistician's answer, the distinction is empirical (embedding) versus theoretical (latent positions). You define a statistical model which has latent positions that you could then try to estimate, given data. Or, given data, you might simply find a vector representation of each object of interest in a way that makes sense for the applications considered - and you would call that set of representations an embedding. There's a lot of work going on trying to re-interpret popular embeddings as estimating a compatible model's latent positions, but that's not always straightforward.

",48031,,,,,6/18/2021 14:25,,,,0,,,,CC BY-SA 4.0 28320,1,,,6/18/2021 17:31,,1,2393,"

I am trying to understand the concept of parameter sharing in a convolution neural network from Parameter Sharing. I have a few confusions:

Parameter sharing refers to the fact that for generating a single activation map, we use the same kernel throughout the image. And for that activation map, the weights of that kernel remain the same through the image?

Denoting a single 2-dimensional slice of depth as a depth slice (e.g. a volume of size [55x55x96] has 96 depth slices, each of size [55x55]), we are going to constrain the neurons in each depth slice to use the same weights and bias.

Does the above paragraph refer to the fact that output of neurons in one activation map is generated by using the same weights in kernel throughout the image? And that kernel is convolved on the entire image?

No. of parameters without parameter sharing: There are 55*55*96 = 290,400 neurons in the first Conv Layer, and each has 11*11*3 = 363 weights and 1 bias. Together, this adds up to 290400 * 364 = 105,705,600 parameters on the first layer of the ConvNet alone. Clearly, this number is very high.

No. of parameters with parameter sharing: With the parameter sharing scheme, the first Conv Layer in our example would now have only 96 unique sets of weights (one for each depth slice), for a total of 96*11*11*3 = 34,848 unique weights, or 34,944 parameters (+96 biases). Alternatively, all 55*55 neurons in each depth slice will now be using the same parameters. What does this bold sentence mean?

Also, how are the parameters different in the two schemes? In both cases, we are using 96 kernels of size 11*11*3 and the resulting output is 55*55. So how does the number of parameters come out differently for the two schemes?

",48029,,,,,6/11/2022 3:23,How is parameter sharing done in CNN?,,2,0,,,,CC BY-SA 4.0 28321,1,,,6/18/2021 22:00,,1,74,"

I wonder how can I build a neural network which will generate text description from given tag/tags. Let's assume I have created such data structure:

{
 'tag1': ['some description1', 'some description2', 'some description3'],
 'tag2': ['some description4', 'some description5', 'some description6'],
 'tag3': ['some description7', 'some description8', 'some description9']
}

Then I would like to create a neural network which will generate randomly generated description based on given tags. For example:

INPUT: ['TAG1', 'TAG2', 'TAG3'] => OUTPUT: 'some description1. some description5 some description9'

Then I thought it could be a good idea to implement an LSTM and do text generation, but here I have a problem: I only know how to do it for one tag. I can create one corpus of text containing different sentences for a tag, then do the training and generate a sentence for that tag, but what if I have multiple tags? Should I create a corpus for each tag, or is there a better way to do that? If you know any articles which cover this problem, I would appreciate it if you shared them with me. If you have a neural network proposal which would solve this problem, I am also open to proposals.

PS. I know, I can solve this problem with easy Map, for example: ['tag1', 'tag2', 'tag3'].map(tag => tagSentenceMap.get(tag).randomChoice()).join('. ') but this is not the case for me.

",23657,,,,,6/18/2021 22:00,How to generate text descriptions from keywords?,,0,0,,,,CC BY-SA 4.0 28323,1,28370,,6/18/2021 23:55,,1,111,"

What are some recent books that introduce AI and neural networks while also discussing the related philosophical issues, like epistemology and whether AI is really thinking, etc.?

",48038,,2444,,12/20/2021 21:47,12/20/2021 21:47,Is there a recent book that covers the theoretical and philosophical aspects of artificial intelligence?,,3,0,,,,CC BY-SA 4.0 28325,1,,,6/19/2021 8:01,,3,62,"

I have been reading this paper on NEAT and trying to implement the algorithm in C#. For the most part, I understand everything in the paper however, there are 2 things I don't understand that confuse me.

  1. In the paper it states:

A possible problem is that the same structural innovation will receive different innovation numbers in the same generation if it occurs by chance more than once. However, by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number. Extensive experimentation established that resetting the list every generation as opposed to keeping a growing list of mutations throughout evolution is sufficient to prevent innovation numbers from exploding.

This implies that no global list/set is used to track innovations. If you have a global set to track them, then you wouldn't need to use a list during the evolution of the current generation because as soon as an innovation is created, it would be added to the set. This would be seen by the next evaluated Net/Genome.

From what I have read everyone uses a global list to track the innovations. This makes sense to me. I am just very confused as to how they did it in the paper considering they did extensive testing to figure out they only need a list for the evaluation of the current generation.

",48043,,48043,,6/19/2021 13:23,6/19/2021 13:23,How does the paper implement NEAT without a global set tracking Innovations?,,0,0,,,,CC BY-SA 4.0 28326,1,34781,,6/19/2021 8:22,,2,861,"

In the famous work on Visual Transformers, the image is split into patches of a certain size (say 16x16), and these patches are treated like tokens in NLP tasks. In order to perform classification, a CLS token is added at the beginning of the resulting sequence: $$ [\textbf{x}_{class}, \textbf{x}_{p}^{1}, \ldots, \textbf{x}_{p}^{N}] ,$$ where $ \textbf{x}_{p}^{i}$ are image patches. There are multiple layers in the architecture, and the state of the CLS token at the output layer is used for classification.

I think this architectural solution is done in the spirit of NLP problems (BERT in particular). However, to me it would feel more natural not to create a special token, but to perform 1D global pooling at the end and attach an nn.Linear(embedding_dim, num_classes), as a more conventional CV approach.

Why is it not done this way? Or is there some intuition or evidence that this would perform worse than the approach used in the paper?
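To make the two options concrete, here is a short sketch (the tensor shapes and sizes are illustrative assumptions, not taken from the paper's code):

import torch
import torch.nn as nn

embedding_dim, num_classes = 768, 1000        # ViT-Base-like numbers, purely for illustration
tokens = torch.randn(8, 197, embedding_dim)   # (batch, 1 CLS token + 196 patch tokens, dim)
head = nn.Linear(embedding_dim, num_classes)

# Option 1 (the paper): classify from the final state of the CLS token
logits_cls = head(tokens[:, 0])

# Option 2 (the more conventional CV alternative): global average pooling over the patch tokens
logits_pool = head(tokens[:, 1:].mean(dim=1))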

",38846,,2444,,11/30/2021 15:07,11/18/2022 1:31,Why class embedding token is added to the Visual Transformer?,,2,0,,,,CC BY-SA 4.0 28328,2,,28320,6/19/2021 8:42,,2,,"

Concerning parameter sharing.

  1. For the fully connected neural network you have an input of shape (H_in * W_in * C_in) and an output of shape (H_out * W_out * C_out). This means that each channel value of each output pixel is connected to every channel value of every input pixel, so there is a separate learnable parameter for each such input-output pair. Hence, one gets this huge number of parameters: (H_in * H_out * W_in * W_out * C_in * C_out).
  2. In the convolutional layer the input is the image of shape (H_in, W_in, C_in) and the weights account for the neighborhood of the given pixel, say of size K * K. The output is obtained as a weighted sum of the given pixel and its neighborhood. There is a separate kernel for each pair of input and output channels (C_in, C_out), but the weights of the kernel (a tensor of shape (K, K, C_in, C_out)) are independent of the location. Actually, this layer can accept images of any resolution, whereas the fully connected layer works only with a fixed resolution. Finally, one has K * K * C_in * C_out parameters, which, for a kernel size K much smaller than the input resolution, results in a significant drop in the number of variables (a short numerical sketch follows below).
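As a rough numerical check (the layer sizes below are illustrative assumptions, loosely based on AlexNet's first layer, and are not part of the answer above):

import torch.nn as nn

C_in, C_out, K = 3, 96, 11      # hypothetical channel counts and kernel size
H_in = W_in = 227               # hypothetical input resolution
H_out = W_out = 55              # hypothetical output resolution

conv = nn.Conv2d(C_in, C_out, kernel_size=K, stride=4)
print(sum(p.numel() for p in conv.parameters()))       # 11*11*3*96 + 96 = 34,944

# A fully connected layer mapping the whole input to the whole output would need
# (H_in*W_in*C_in) * (H_out*W_out*C_out) weights - far too many to even allocate here:
print((H_in * W_in * C_in) * (H_out * W_out * C_out))  # roughly 4.5e10 weights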
",38846,,,,,6/19/2021 8:42,,,,3,,,,CC BY-SA 4.0 28329,1,28331,,6/19/2021 8:51,,1,866,"

I am trying to learn reinforcement learning and I am focusing on the value iteration. I am looking at the example of grid world, and I am trying to implement it in python. While doing this, I encountered the situation in which I had to set the rewards for the agent, but looking at the theory, I have found that each state has also a value, which is found using the value iteration.

So, my doubt is: What is the difference between a reward and a value for a given state? And should the initial values of the states always be set equal to zero?

",47888,,2444,,6/19/2021 12:05,6/21/2021 10:33,What is the difference between a reward and a value for a given state?,,2,1,,,,CC BY-SA 4.0 28330,2,,28329,6/19/2021 11:19,,3,,"

What is the difference between a reward and a value for a given state?

Let us say that an agent took an action from state $A$ and reached state $B$ and got a score $R$. This instantaneous score the agent received on reaching state $B$ is called the reward.

Now, let me introduce you to the concept of return. Assume that an agent followed a particular trajectory:

1. State 1 -> Action 1
2. Reward 1, State 2 -> Action 2
3. Reward 2, State 3 -> Action 3
...
n. Reward n-1, State n (Terminated)

Return (often denoted by $G$) is the sum total of all the rewards obtained by starting from State 1 and following a policy.

So, the definition of the return is

$$G(s_1) = R_1 + R_2 + R_3 + ... = \sum_{i=1}^{\infty}R_i$$

Sometimes (most often) these sequences never terminate, so we apply a discount factor (the Greek letter gamma, $\gamma$) to rewards obtained in the future.

The definition of the discounted return $G$ is

$$G(s_1) = R_1 + \gamma R_2 + \gamma^2 R_3 + ... = \sum_{i=1}^{\infty}\gamma^{i-1} R_i $$

$\gamma$ is a number between $0$ and $1$: it defines how much importance the agent gives to long-term rewards. For a smaller value of $\gamma$, more importance is given to short-term rewards.
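As a tiny illustration (the reward values here are made up), the discounted return can be computed like this:

rewards = [1.0, 0.0, 2.0, 5.0]   # R_1, R_2, R_3, R_4 from a hypothetical finite trajectory
gamma = 0.9

G = sum(gamma ** i * r for i, r in enumerate(rewards))
print(G)  # 1.0 + 0.9*0.0 + 0.81*2.0 + 0.729*5.0 = 6.265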

Now, coming back to your question. The value of a state is the expected return for an agent starting from that state and following a particular policy. In the case of stochastic policies (policies that have inherent randomness), and/or environments with stochastic transition probabilities and/or stochastic rewards, the value is the sum, over all possible trajectories, of each trajectory's return weighted by the probability of taking that trajectory.

And should the initial values of the states always be set equal to zero?

Not necessarily; zero initialization is one of many ways to initialize. Random initialization is another method. It depends on the environment setting.

",21229,,2444,,6/21/2021 10:33,6/21/2021 10:33,,,,0,,,,CC BY-SA 4.0 28331,2,,28329,6/19/2021 11:19,,4,,"

Starting with rewards, states don't have rewards in general. A reward is a number returned at a certain step of the MDP. If you arrange things in sequence over a whole time step $s, a, r, s'$ for state, action, reward, next state, then the reward $r$ is allowed to depend on all three of $s, a, s'$, and it can also be from a random distribution of real numbers, not just a single number.

It is however OK to associate a single number reward with each state, for either leaving that state (when it is $s$ or $s_t$ in the sequence) or arriving in it (when it is $s'$ or $s_{t+1}$). The rewards should be allocated as fits the problem being solved. They are part of the problem definition.

State values are a way to measure longer term benefits of being in a state, and are often something calculated as part of a solution. The formal definition of state value looks like this:

$$v_{\pi}(s) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} | S_t=s]$$

In English: "The expected discounted sum of all future rewards when starting from a given state and following a specific policy." The discounted sum is usually called the return or the utility associated with the state.

What is the difference between a reward and a value for a given state?

A state value is composed of many rewards weighted by their probability of occurring in the future. It is a useful summary of possible futures that can be used to make decisions.

And should the initial values of the states always be set equal to zero?

Not necessarily, but zero is a reasonable default if you have nothing else to go on. Alternatives include:

  • Best guesses at true values (perhaps from some previous attempt to solve the problem). This may improve speed of convergence depending on how good the guesses are.

  • Random values - this may happen if you use a neural network.

  • Optimistic values. This is a trick for improving exploration on smaller problems - if you set a value higher than an upper bound on the optimum possible, then an agent following a greedy or near-greedy policy will try to reach the associated state at some point, even if other results are already better than a lower default like zero.

",1847,,1847,,6/20/2021 8:55,6/20/2021 8:55,,,,0,,,,CC BY-SA 4.0 28332,1,,,6/19/2021 14:06,,2,100,"

I have read a lot of debates about node ids and such. I'm not 100% sure how it works, but I am assuming the next node added to a network would be the next number in that specific network's list?

For example, say we start with a network with 2 inputs 1 output (nodes 1,2,3). Let's say in generation 1, one network splits a connection creating node 4. Then in generation 2, a different network splits a different connection. This would be node 4 for that specific network right? From my understanding (correct me if I'm wrong), this second split would result in 1 new innovation connection. The connection from the input to node 4 would be new but the connection from node 4 to 3 would already exist from the first split?

",48043,,2444,,6/20/2021 11:12,6/20/2021 11:12,"In NEAT, how do node numbers work?",,0,0,,,,CC BY-SA 4.0 28334,1,28335,,6/19/2021 16:59,,0,60,"

While going over the pseudocode of the CURL paper, the method to identify labels from the logits wasn't clear to me. I believe this technique might be common in other PyTorch/Deep Learning tasks. I have attached the pseudocode below -

",31755,,31755,,6/21/2021 17:56,6/21/2021 21:39,How does CURL extract labels from logits?,,1,0,,12/26/2021 17:34,,CC BY-SA 4.0 28335,2,,28334,6/20/2021 5:26,,1,,"

Since arange returns a 1-D tensor with values from 0 to logits.shape[0] - 1, labels is a vector of $0$ to $N-1$, where $N$ is the number of classes predicted by the output layer of f_q.

The CrossEntropyLoss then measures the difference between the predictions and the target labels, according to which the encoder weights f_q.params are updated. I haven't read the paper, but this particular part is a standard multi-class classification approach.

",40671,,,,,6/20/2021 5:26,,,,0,,,,CC BY-SA 4.0 28336,2,,6179,6/20/2021 7:33,,0,,"

If you consider every action independent from the others, I assume the equation might be right. Consider the case with two actions. $\pi(a_1,a_2)$ is the joint probability of the two actions, which becomes $\pi(a_1)\cdot \pi(a_2)$ in case $a_1$ and $a_2$ are independent. Then, $\log(\pi(a_1,a_2))$ will become $\log(\pi(a_1))+\log(\pi(a_2))$. I suppose this logic can be applied in case there are multiple actions. However, it should be considered that each $\pi(a_i)$ should be calculated with its own mean and variance, so I assume the policy network will have 2n outputs if we have n actions, because each action requires its own mean and variance.
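A minimal sketch of what such a policy head could look like (the observation size, hidden layer, and action count are my own illustrative assumptions):

import torch
import torch.nn as nn

n_actions, obs_dim = 4, 16
net = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 2 * n_actions))

obs = torch.randn(1, obs_dim)
mean, log_std = net(obs).chunk(2, dim=-1)                # 2n outputs: n means, n log-stds
dist = torch.distributions.Normal(mean, log_std.exp())
action = dist.sample()
log_prob = dist.log_prob(action).sum(dim=-1)             # independence: joint log-prob is the sum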

",47083,,47083,,6/24/2021 13:27,6/24/2021 13:27,,,,0,,,,CC BY-SA 4.0 28341,1,,,6/21/2021 7:46,,1,201,"

I'm recently using Monte Carlo Tree Search in OpenAi Gym Atari, but the result isn't satisfying.

Without rendering, the game lasts about 180 steps (env.step() was called that many times) with a random agent. However, my MCTS agent only made the game last 12 steps, and it takes a lot of time to produce each next step.

I guess it's a problem with the rollout. I build the MCTS tree using nodes containing AtariEnv objects, and deepcopy the environment each time I roll out and add the reward. So it takes about 1 second to expand a node and roll out; if I do 100 iterations, that adds up to a massive waiting time.

My code of rollout is shown below:

from copy import deepcopy

def rollout_(current, if_render):
    '''
    current is going to be a Node object
    '''
    sandBox = deepcopy(current.state)  # copy the whole AtariEnv - this is the expensive part
    endReward = 0
    done = False
    while not done:
        action = sandBox.action_space.sample()
        _, reward, done, info = sandBox.step(action)  # the returned observation is discarded
        if reward > 0:
            reward *= 2
        endReward += reward - 0.008  # small per-step penalty
    return endReward

Anyone can help?

",48079,,,,,6/21/2021 7:46,Too slow search using MCTS in OpenAI Atari games,,0,0,,,,CC BY-SA 4.0 28343,1,,,6/21/2021 9:20,,1,192,"

I made a flowchart for a simplified perceptron leaning algorithm.

Here is the process of the learning algorithm.

  1. Initialize the weights first.

  2. Get a training example randomly and make a prediction. If the prediction matches the ground-truth value, then get another training example. If the prediction doesn't match the ground-truth value, update the weights.

  3. Repeat step 2 until all predictions match the ground-truth values (or some other stopping criterion is met).

Is my flowchart a good representation? If not, what are the errors, and what might be improved?

",45689,,2444,,12/12/2021 21:22,11/8/2022 9:03,Is my flowchart a good representation of the perceptron learning algorithm?,,3,1,,,,CC BY-SA 4.0 28344,2,,28288,6/21/2021 11:29,,1,,"

If you find the Hessian matrix (the matrix of second-order derivatives) of the binary cross-entropy loss function with respect to the parameters of a linear model (e.g. logistic regression), you'll see that it is positive semidefinite for any possible value of the parameters. This implies that the loss is a convex function of those parameters.
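As a sketch of why (assuming a logistic-regression-style model with predictions $\hat{y}_i = \sigma(\mathbf{w}^\top \mathbf{x}_i)$, which is an assumption on my part and not necessarily the exact setup in the textbook), the Hessian of the binary cross-entropy loss with respect to $\mathbf{w}$ works out to

$$ H = \nabla^2_{\mathbf{w}} L = \frac{1}{N}\sum_{i=1}^{N} \hat{y}_i (1 - \hat{y}_i)\, \mathbf{x}_i \mathbf{x}_i^\top, $$

so for any vector $\mathbf{v}$ we get $\mathbf{v}^\top H \mathbf{v} = \frac{1}{N}\sum_i \hat{y}_i (1 - \hat{y}_i) (\mathbf{x}_i^\top \mathbf{v})^2 \ge 0$, i.e. $H$ is positive semidefinite.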

A side effect of it being convex is that it will have a single minimum as mentioned in the textbook you cited.

",48085,,,,,6/21/2021 11:29,,,,1,,,,CC BY-SA 4.0 28353,1,,,6/22/2021 0:23,,1,28,"

In order to learn the embeddings, we need to train a model based on some objective function. The model can be an RNN and the objective function can be the likelihood. We learn the embeddings by calculating the likelihood, and the embeddings are considered good if the likelihood is maximum for them.

The following paragraph says that it is difficult to scale RNN to estimate the maximum likelihood for large corpus due to scaling issues:

Likelihood-based optimization is derived from the objective $\log p(w; U)$, where $U \in R_{K \times V}$ is matrix of word embeddings, and $w =\{w_m \}_{m=1}^M$ is a corpus, represented as a list of $M$ tokens. Recurrent neural network language models optimize this objective directly, backpropagating to the input word embeddings through the recurrent structure. However, state-of-the-art word embeddings employ huge corpora with hundreds of billions of tokens, and recurrent architectures are difficult to scale to such data. As a result, likelihood-based word embeddings are usually based on simplified likelihoods or heuristic approximations.

What type of scaling, with respect to RNNs, is being referred to here? Why is it difficult to scale RNNs?


The paragraph above is taken from the page 329 of Chapter 14: Distributional and distributed semantics of the textbook Natural Language Processing by Jacob Eisenstein

",18758,,2444,,6/22/2021 23:40,6/22/2021 23:40,Why can't recurrent neural network handle large corpus for obtaining embeddings?,,0,0,,,,CC BY-SA 4.0 28354,1,,,6/22/2021 3:51,,1,136,"

Most people seem to assume that we need a human-level AI as a starting point for the Singularity.

Let's say someone invents a general intelligence that is not quite on the scale of a human brain, but comparable to a rat. This AI can think on its own, learn and solve a wide range of problems, and basically demonstrates rat-level cognitive behavior. It's just not as smart as a human.

Is this enough to kickstart the exponential intelligence explosion that is the Singularity?

In other words, do we really need a human-level AI to start the Singularity, or will an AI with animal-level intelligence suffice?

",47007,,,,,6/25/2021 14:19,Can an animal-level artificial general intelligence kickstart the Singularity?,,2,1,,6/28/2021 10:40,,CC BY-SA 4.0 28358,1,,,6/22/2021 8:59,,1,67,"

I'm currently working on an iOS App where I want to detect if there is a table, chair or bench in the current camera input.

My idea was to take the MobileNetV2 model and get it to classify these three categories with transfer learning in TensorFlow. Because there are cases where none of these three objects are visible, I would add a fourth class "none" and feed it with random pictures of different unrelated things.

Is this approach a good idea, or is there a better way of doing this?

",48102,,2444,,6/23/2021 9:48,6/23/2021 9:48,How to deal with images that do not contain any object of interest?,,0,2,,,,CC BY-SA 4.0 28359,2,,28249,6/22/2021 9:01,,0,,"

You can modify your fitness function in a kind of "dirty" way:

While computing the fitness, check whether all outputs are the same, and if so set the fitness to -10000 (or another very bad value, depending on your normal fitness).

This way, ES-HyperNEAT could/should learn not to generate this kind of network.

PS: I do not have enough reputation to make this a comment.

",48101,,,,,6/22/2021 9:01,,,,0,,,,CC BY-SA 4.0 28362,1,28806,,6/22/2021 13:54,,0,109,"

I am trying to construct a Faster R-CNN from scratch using Keras. For training the RPN, I am generating the tensor which encodes, for the anchor at each location, whether it corresponds to an object, to background, or to neither. The output tensor for the RPN is, say, H x W x L, where the L dimension corresponds to whether an object is detected, is background, or is neither, based on IoU thresholds.

My question is this: what should the label value be for anchors that are neither object nor background, and how do I stop the gradient flow for these labels?

",19201,,,,,7/23/2021 19:43,Confusion about faster RCNN neither object nor background label,,1,0,,,,CC BY-SA 4.0 28364,1,,,6/22/2021 14:38,,1,25,"

I am reading a survey on various normalization techniques adopted in neural network architectures.

The purpose of introducing normalization is understandable - to stabilize the training and avoid covariate shifts.

There is a plethora of proposed approaches:

  • Batch Normalization. Probably, the most well-known approach. One averages over the batch and spatial dimensions and gets mean and std vectors of size (num_channels,): $$ \mu_c = \frac{1}{NHW}\sum_{n, h, w}^{N, H, W} x_{nchw} \quad \sigma_c = \sqrt{\frac{1}{NHW}\sum_{n, h, w}^{N, H, W}(x_{nchw} - \mu_c)^2} $$
  • Layer Normalization. This technique became very popular after the success of Transformer architectures. The average is over the channel and spatial dimensions and gives mean and std vectors of size (batch_size,): $$ \mu_n = \frac{1}{CHW}\sum_{c, h, w}^{C, H, W} x_{nchw} \quad \sigma_n = \sqrt{\frac{1}{CHW}\sum_{c, h, w}^{C, H, W}(x_{nchw} - \mu_n)^2} $$
  • Instance Normalization. This approach is popular in style transfer applications. The average is over the spatial dimensions only and gives mean and std vectors of size (batch_size, num_channels): $$ \mu_{nc} = \frac{1}{HW}\sum_{h, w}^{H, W} x_{nchw} \quad \sigma_{nc} = \sqrt{\frac{1}{HW}\sum_{h, w}^{H, W}(x_{nchw} - \mu_{nc})^2} $$ There are many more approaches, but I list these 3 as the simplest.

Then there are trainable parameters $\gamma$ and $\beta$, and the final output is: $$ \gamma \left(\frac{x - \mu(x)}{\sigma(x)}\right) + \beta $$
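As a quick sketch of how these three correspond to off-the-shelf PyTorch layers (the tensor shape below is an illustrative assumption):

import torch
import torch.nn as nn

x = torch.randn(8, 64, 32, 32)               # (N, C, H, W), made-up shape

bn = nn.BatchNorm2d(64)                      # statistics over (N, H, W): one mean/std per channel
ln = nn.LayerNorm([64, 32, 32])              # statistics over (C, H, W): one mean/std per sample
inorm = nn.InstanceNorm2d(64, affine=True)   # statistics over (H, W): one per sample and channel

print(bn(x).shape, ln(x).shape, inorm(x).shape)  # all keep the input shape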

As far as I understand, batch normalization pushes the layer outputs towards something like $\mathcal{N}(\beta, \gamma)$ (mean $\beta$ and std $\gamma$). However, there are problems when the batch size is small, since the estimates become inaccurate. Also, it averages over all images in the batch, but if there are different classes, one would probably like them to be distributed slightly differently. This choice is still the most widely used in CNNs, even though some [recent work](https://arxiv.org/abs/2102.06171) says that this layer can be replaced by another strategy.

Layer normalization seems to equalize different channels. Is there some intuition why it has to be so? Why do we need to make output activations similar to each other?

Instance normalization seems to be the most specific in the list. But I have not seen a lot of usage of this outside style transfer and GAN's.

Overall, the ultimate question is - how to choose a particular normalization strategy for the given problem and architecture?

",38846,,2444,,6/23/2021 0:19,6/23/2021 0:19,How to choose proper normalization strategy for the activations?,,0,0,,,,CC BY-SA 4.0 28365,1,,,6/22/2021 15:08,,2,193,"

A post gives a formula for perceptron to update weights

I understand almost all the parts of it, except for the part $(y_i - \hat y_i)x_i$ where does it come from? Is it the gradient of some kind of loss function? If yes, what is the definition of the loss function?

The OP doesn't seem to give the hypothesis $h$, such that $\hat y_i = h(x_i)$.

However, this hypothesis seems prevalent

\begin{align} \hat{y} &= sign(\mathbf{w} \cdot \mathbf{x} + b) \tag{1}\\ &= sign({w}_{1}{x}_{1}+{w}_{2}{x}_{2} + ... + w_nx_n + b) \\ \end{align}

where

$$ sign(z) = \begin{cases} 1, & z \ge 0 \\ -1, & z < 0 \end{cases} $$

How do I get $(y_i - \hat y_i)x_i$ from function (1)?

",45689,,45689,,6/23/2021 23:30,11/16/2022 2:07,"Is $(y_i - \hat y_i)x_i$, part of the formula for updating weights for perceptron, the gradient of some kind of loss function?",,1,1,,,,CC BY-SA 4.0 28366,1,28368,,6/22/2021 15:12,,1,47,"

I'm going through Sutton and Barto's book Reinforcement Learning: An Introduction and I'm trying to understand the proof of the Policy Improvement Theorem, presented at page 78 of the physical book.

The theorem goes as follows:

Let $\pi$ and $\pi'$ be any pair of deterministic policies such that, for all $s\in S$,

$q_{\pi}(s,\pi'(s))\geq v_{\pi}(s)$.

Then the policiy $\pi'$ must be as good as, or better than, $\pi$. That is, it must obtain greater or equal expected return from all states $s\in S$:

$v_{\pi'}(s)\geq v_{\pi}(s)$.

I take it that for the proof, the policy $\pi'$ is identical to $\pi$ except for one particular state $s$ (at each time step) for which we have $\pi'(s)=a\neq \pi(s)$, as suggested by @PraveenPalanisamy in his answer here.

The proof start from the statement of the theorem: $v_{\pi}(s)\leq q_{\pi}(s,\pi'(s))$

And then $q_{\pi}(s,\pi'(s))$ is developed as $\mathbb{E}[R_{t+1}+\gamma v_{\pi}(S_{t+1})|S_{t}=s,A_{t}=\pi'(s)]=\mathbb{E}_{\pi'}[R_{t+1}+\gamma v_{\pi}(S_{t+1})|S_{t}=s]$

I don't understand how did we get rid of the condition $A_{t}=\pi'(s)$. I don't think it's related to adding the subscript $\pi'$ to the expectation because it's something that should be done by definition since for the following time steps we choose policy $\pi$ which is exactly $\pi'$.

",44965,,,,,6/22/2021 18:36,How do we get from conditional expectation on both state and action to only state in the proof of the Policy Improvement Theorem?,,1,0,,,,CC BY-SA 4.0 28367,2,,28365,6/22/2021 15:36,,0,,"

It depends on your hypothesis $h$. The author of the original article compares the dot product with a threshold:

So for a binary classification problem $h$ can be defined as follows:

$$ h = \begin{cases} 1 & \text{if $f>z$}\\ 0 & \text{otherwise} \end{cases} $$

That is, $\hat y_i$ is your prediction and $\hat y_i = h(x_i)$, $y_i$ is a real label and $x_i$ is a sample.

Finally, you can update your weights: $w_n = w_n + \eta(y_i - \hat y_i)x_i$, where $n$ is the index of the weight and $i$ denotes the index of the label/sample pair.

Depending on how you define your hypothesis you will have a different optimization algorithm. Take a look at this answer for more details.
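To make the update rule concrete, here is a minimal sketch (the toy data and the learning rate are made up, and the labels are taken in {0, 1} as in the hypothesis above):

import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]])
y = np.array([1, 1, 1, 0])                 # a linearly separable toy problem (logical OR)

w, b, eta = np.zeros(2), 0.0, 0.1

for epoch in range(20):
    for x_i, y_i in zip(X, y):
        y_hat = 1 if np.dot(w, x_i) + b > 0 else 0   # the hypothesis h
        w += eta * (y_i - y_hat) * x_i               # the update rule in question
        b += eta * (y_i - y_hat)

print(w, b)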

",12841,,12841,,6/22/2021 15:50,6/22/2021 15:50,,,,2,,,,CC BY-SA 4.0 28368,2,,28366,6/22/2021 18:36,,2,,"

I don't understand how did we get rid of the condition $A_{t}=\pi'(s)$.

We don't really, it is just moved into the subscript $\pi'$ in $\mathbb{E}_{\pi'}[]$ - it means the same thing here, that the next action is chosen according to the modified policy $\pi'$. Moving the condition around is part of the proof's strategy, which eventually expresses the expectation in a familiar way so that we end up with a something that matches the definition of $v_{\pi'}(s)$.

",1847,,,,,6/22/2021 18:36,,,,7,,,,CC BY-SA 4.0 28370,2,,28323,6/22/2021 20:54,,0,,"

Deep Learning (2016) by Ian Goodfellow, Yoshua Bengio, & Aaron Courville, introduction:

Inventors have long dreamed of creating machines that think. This desire dates back to at least the time of ancient Greece. The mythical figures Pygmalion, Daedalus, and Hephaestus may all be interpreted as legendary inventors, and Galatea, Talos, and Pandora may all be regarded as artificial life

Ironically, abstract and formal tasks that are among the most difficult mental undertakings for a human being are among the easiest for a computer. Computers have long been able to defeat even the best human chess player but only recently have begun matching some of the abilities of average human beings to recognize objects or speech. A person’s everyday life requires an immense amount of knowledge about the world. Much of this knowledge is subjective and intuitive, and therefore difficult to articulate in a formal way. Computers need to capture this same knowledge in order to behave in an intelligent way. One of the key challenges in artificial intelligence is how to get this informal knowledge into a computer.

Deep learning has had a long and rich history, but has gone by many names, reflecting different philosophical viewpoints, and has waxed and waned in popularity.

",48038,,,,,6/22/2021 20:54,,,,0,,,,CC BY-SA 4.0 28371,1,28376,,6/22/2021 21:40,,4,1964,"

My question is: how to add certain negative samples to the training dataset to suppress those samples that are recognized as the object.

For example, if I want to train a car detector, all my training images are outdoor images with at least one car. However, when I use the trained detector on indoor images, I sometimes get wrong objects detected (false positives). How can I add more indoor images (negative samples) to the training dataset to improve the accuracy? Can I just add them without any labeling?

",48118,,2444,,6/23/2021 10:02,6/23/2021 14:51,How to add negative samples for object detection?,,1,2,,,,CC BY-SA 4.0 28372,2,,28343,6/23/2021 6:38,,0,,"

It seems loosely reasonable but there are various things which are potentially unclear.

What exactly is a prediction, and is it deterministic or stochastic? First, if you are predicting a continuous value, you can never be "correct" - there will always be at least some very small deviation. This makes me assume that you are talking about making some discrete prediction, e.g. over some classes. In this case you would typically output a probability distribution over the different classes. If this is the case, again it's unclear what "correct" means. This makes me believe that the only way to interpret "correct" is that for any example, you deterministically output a single class, e.g. by taking the class with maximum probability, and then the prediction is considered correct when you output the correct class.

I think the biggest issue is with "all predictions correct". How do you check if all predictions are correct? Would you compute the predictions for all examples each iteration? Because that seems like the only possible way to check whether or not all predictions would be correct. More generally it's often not possible to have all predictions be correct (i.e. for an over determined problem).

",47080,,,,,6/23/2021 6:38,,,,0,,,,CC BY-SA 4.0 28373,1,,,6/23/2021 6:51,,1,33,"

Are there rules of thumb as to which activation functions work well (or which one would not) on the policy and value network of a class of RL algorithms? For hidden layers and for the output layer.

For example, I came across [1], which mentions ELU to be indispensable to MPO [2], and tanh (output activation) to be indispensable to SAC's Gaussian policy.

",40671,,2444,,6/23/2021 9:46,6/23/2021 9:46,Are there guiding principles as to which activation functions suit a given RL algorithm?,,0,0,,,,CC BY-SA 4.0 28376,2,,28371,6/23/2021 14:51,,4,,"

The quick answer: yes, you can just add images without labels; just make sure that there are no cars in the negative samples, or you will confuse the model (i.e. convergence & instability issues).

However, that might not be the best approach. Why? Because your dataset already has enough negative examples. This was pointed out by the well-known paper Focal Loss for Dense Object Detection. The paper basically proposes to think of each pixel of a dataset image as a training signal. Then, for each image, there are lots of pixels with a negative signal (nothing in them: sky, ground, trees, ...) and only a few with a positive signal (the actual car).

So if each image of the dataset has more negative signals (pixels) than positive ones, the problem might not be a lack of negative examples. That leaves you with 2 ways to go:

  • Use a loss function that focuses more on the positive signal (car pixels) than on the negative examples (non-car pixels), such as focal loss or its derivatives (a minimal sketch follows after this list)
  • Add more positive examples in the dataset
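A minimal sketch of a binary, per-pixel focal loss (the alpha and gamma values are the paper's defaults, not tuned for any particular dataset):

import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)             # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()       # down-weights the many easy negatives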

I can confirm what this paper stated from my everyday experiments. We have a battery of experiments right now that perform best with focal loss & no negative examples, versus other experiments without focal loss & with negative examples.

Just for reference this is what happens when there are lots of negative examples:

The AI took a while to figure out that negative samples are not useful (1M steps) in this experiment. From then on, it just focused on the positive samples and the training started to converge (and inference started to show something meaningful).

",26882,,,,,6/23/2021 14:51,,,,0,,,,CC BY-SA 4.0 28378,1,,,6/24/2021 1:32,,1,141,"

In the geometrical interpretation of SVD, the data points that we have need to be imagined as points in high dimensional space (say $d$-dimensional space). But we need to find a hyperplane in $k-$dimensional subspace that best fits the given data points

To gain insight into the SVD, treat the rows of an $n \times d$ matrix $A$ as $n$ points in a $d$-dimensional space and consider the problem of finding the best $k$-dimensional subspace with respect to the set of points.

My doubt here is about the uniqueness of $k$. Can we do the decomposition for any $k \le d$, or only for certain values of $k$, or only for a unique $k$?


The paragraph is taken from the material on Singular Value Decomposition available here.

",18758,,18758,,6/24/2021 22:53,7/21/2022 11:04,How many singular vectors do we need to calculate for SVD?,,1,0,,,,CC BY-SA 4.0 28379,1,,,6/24/2021 4:16,,2,131,"

The following question is from the webbook Neural Networks and Deep Learning by Michael Nielson:

How do our machine learning algorithms perform in the limit of very large data sets? For any given algorithm it's natural to attempt to define a notion of asymptotic performance in the limit of truly big data. A quick-and-dirty approach to this problem is to simply try fitting curves to graphs like those shown above, and then to extrapolate the fitted curves out to infinity. An objection to this approach is that different approaches to curve fitting will give different notions of asymptotic performance. Can you find a principled justification for fitting to some particular class of curves? If so, compare the asymptotic performance of several different machine learning algorithms.

The ability to mimic complex curves and fit the data points comes from the non-linearity used, since, had we only used a linear combination of weights and biases, we would not have been able to mimic these. Now the output depends a lot on our choice of non-linearity. Suppose we have a model. It overfits and we get an order-5 polynomial, while in another case it underfits and we get a linear model. So how would we get a good estimate of the asymptotic performance, as questioned by the author?

",47500,,16521,,6/25/2021 5:19,6/26/2021 21:25,How would we get a good estimation of the asymptotic performance of machine learning algorithms?,,0,6,,,,CC BY-SA 4.0 28380,1,28381,,6/24/2021 6:21,,2,413,"

In a convolutional neural network, the hyperparameters such as number of kernels and stride, kernel size, etc are determined. After some combination of convolutions, ReLU and pooling layer there is the fully connected (FC) layer in the end which yields a classification result. I originally thought that during training the values of kernels would be optimized and that kernels such as edge detection are a result of optimization.

But at the end, if we have weights to optimize at the FC layer, what is it that gets optimized during training of the CNN? Do both the kernel values and the weights in the FC layer get optimized? If so, it seems like we're dealing with two different types of parameters. How are both trained simultaneously? If not, are there simply sets of kernels known to work that are automatically implemented in CNN modules?

",48144,,2444,,6/24/2021 13:05,6/25/2021 7:09,What gets optimized in convolutional neural network?,,1,0,,,,CC BY-SA 4.0 28381,2,,28380,6/24/2021 7:59,,3,,"

Do both the kernel values and the weights in the FC layer get optimized?

Yes.

Some of the designs for image processing neural networks prior to CNNs had separate filter processing stages. For instance, Sobel filters were popular choices in earlier attempts at machine learning on images, and they can be thought of as fixed CNN-like layers. They may still have a role in some projects.

However, most CNN architectures for images now work with pixel data directly, and can learn both the filter weights and fully connected later weights together.

In some uses, such as transfer learning, it is useful to be able to selectively learn only some of the layers. You may take the CNN filter layers from a very general image classifier trained on ImageNet, and repurpose it by replacing the fully connected layers. When training the new neural network you can freeze the filter layers and learn only the fully connected layers - although there is no specific requirement to separate them only by convolutional/fully-connected, you could equally retrain only part of the fully-connected layers, or include some of the convolutional layers.
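A minimal sketch of that freezing pattern in PyTorch (torchvision's ResNet-18 is used here purely as an example backbone; it is not the specific network discussed above):

import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(pretrained=True)
for p in model.parameters():
    p.requires_grad = False                        # freeze all pretrained (convolutional) weights

model.fc = nn.Linear(model.fc.in_features, 10)     # new fully connected head for, say, 10 classes
# only model.fc's parameters now require gradients, so only the new head is learned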

If so, it seems like we're dealing with two different types of parameters. How are both trained simultaneously?

They are not really that different. A CNN can be thought of as fully connected throughout, but with some extra constraints on the convolutional layers:

  • Weights that would connect a feature map neuron to an input outside the bounds of the filter are always zero.

  • Weights for each position on the feature map are identical.

Using learnable convolution filters enforces both these constraints.

The practical difference to how back propagation works in convolutional layers as opposed to fully connected layers is to sum all gradients arising from each "pixel" in the feature map to the appropriate filter weight when back propagating. So unlike a fully connected layer weight that receives one summed gradient update from the layer above, each convolutional filter weight receives the equivalent update summed over all pixels in the next feature layer. Depending on which source you learn CNNs from, this additional outer sum might be shown or might be implied using different notation for the update rules.

",1847,,1847,,6/25/2021 7:09,6/25/2021 7:09,,,,0,,,,CC BY-SA 4.0 28382,1,,,6/24/2021 8:09,,1,87,"

In this interview with Lex Fridman and Ben Goertzel, at 2:23:48, Lex asks about possibilities for young people in the domain of AGI research. Ben Goertzel then answers that there are various possibilities we can find on the OpenCog framework, including Ph.D. theses.

I was wondering if anyone here knows what exactly he meant when he said we can find Ph.D. theses there? (He talks about them at 2:25:18.)

(I am considering doing a Ph.D. in AI, so I am interested in finding interesting topics for research)

",45492,,45492,,7/28/2021 9:04,7/28/2021 9:04,Which topics about/in OpenCog could be researched in a Ph.D. thesis,,0,2,,,,CC BY-SA 4.0 28383,1,,,6/24/2021 11:09,,0,37,"

I want to make a model that predicts a person's appearance based on an image of their son.

My plan is to create a dataset and each data point in it consists of two images; One for the father or mother and one for the son. Then make a model and train it with this dataset.

So when I give the model an image of a son, it predicts / generates / draws the father's image.

  • Is that is possible ?
  • If yes, How can I make it ? ML ? Deep Learning ? Something else ?

I searched a lot but didn't find something helpful; So any ideas or opinions are welcomed.

",48150,,,,,6/24/2021 11:09,AI model to predict/generate person's image,,0,2,,,,CC BY-SA 4.0 28384,1,28392,,6/24/2021 11:29,,1,58,"

I have been reading about autoregressive models. Based on what I've read, it seems to me that all autoregressive models use ancestral sampling. For instance, this paper says the following in Abstract:

We introduce a deep, generative autoencoder capable of learning hierarchies of distributed representations from data. Successive deep stochastic hidden layers are equipped with autoregressive connections, which enable the model to be sampled from quickly and exactly via ancestral sampling.

However, what I don't understand is why (as I understand it) all autoregressive models use ancestral sampling. Why is ancestral sampling used in autoregressive models?

",16521,,,,,6/25/2021 7:01,Why is ancestral sampling used in autoregressive models?,,1,0,,,,CC BY-SA 4.0 28385,1,,,6/24/2021 13:14,,1,18,"

Maxwell's theorem states that multivariate normal distribution $\mathcal{N}(\mathbf{0}, \sigma^2\mathbf{I})$ is the only distribution of a random vector that is invariant and have independent components after random rotations (orthogonal transformation).

Generally, many distributions are rotationally invariant; these are called spherically symmetric distributions. However, any non-normal spherically symmetric distribution has uncorrelated but dependent components.

Consequently, I am wondering whether there are distributions of vectors which are not necessarily rotationally invariant but whose components remain independent under rotations.

",40242,,,,,6/24/2021 13:14,Rotationally independent distributions,,0,0,,,,CC BY-SA 4.0 28387,1,,,6/24/2021 15:38,,0,55,"

I have a radial basis function that supplies uncertainties (standard deviations) with its predictions, which are numerical values.

This function is computed for a particular point by computing its relative distance to a large set of other reference points in high dimensional space, and compositing a prediction from them.

Over the training set I can compute R to get a correlation between prediction and actual. Weights are assigned to each dimension and optimized to maximize R.

Over a validation set, it seems I'd want to calculate something other than R to measure the model's predictive power, since its predictions are not single values, but ranges.

",48153,,48153,,6/24/2021 23:50,6/24/2021 23:50,What are the standard ways to measure the quality of a set of numerical predictions that include uncertainties?,,0,10,,,,CC BY-SA 4.0 28388,1,,,6/24/2021 20:40,,2,234,"

I have been working on a problem to which I've applied alpha-beta pruning. While I got most of the answers right, there is one part I'm not quite getting:

Note that I've only provided a part of the tree I'm working on. Node $B$ starts with the following values:

$B$

  • $v = \infty$
  • $\alpha = - \infty$
  • $\beta = \infty$

Now, we push the alpha and beta values down to node $D$ from parent node $B$, and calculate its value:

$D$

  • $v = -\infty$
  • $\alpha = -\infty$
  • $\beta = \infty$

As leaf node $J$ has a value of $-7$, we push that back up to parent node $D$, changing node $D$'s value to $-7$ (as it is better than the old value of $-\infty$), and we also change the $\alpha$ value of node $D$ (as it is also better than the old value of $-\infty$).

New $D$

  • $v = -7$
  • $\alpha = -7$
  • $\beta = \infty$

We now push the value of $-7$ back up to parent node $B$, changing node $B$'s value to $-7$ (as it is better than the old value of $\infty$), and we also change the $\beta$ value of node $B$ (as it is also better than the old value of $\infty$).

New $B$

  • $v = -7$
  • $\alpha = -\infty$
  • $\beta = -7$

We now traverse down to node $E$ (and we don't prune it because node $B$'s value is NOT <= its $\alpha$ value), and we push the alpha and beta values down from node $B$, and calculate its value:

$E$

  • $v = -\infty$
  • $\alpha = -\infty$
  • $\beta = -7$

As leaf node $K$ has a value of $0$, we push that back up to parent node $E$, changing node $E$'s value to $0$ (as it is better than the old value of $-\infty$). Now, this is the point where my confusion lies. According to my understanding, at this point we would also set the $\alpha$ value of node $E$ to $0$ (as it is better than the old value of $-\infty$). However, the answer I received to this question specifies that we do NOT change the $\alpha$ value of node $E$, and rather leave it as $-\infty$.

Can someone please explain to me why this is the case?

UPDATE

I did not originally include the full subtree - this is it:

In this instance, only node M should be pruned. However, my question still stands as to why the answer did not update the alpha value of node E, as no pruning happened in that part of the tree.

",48158,,48158,,8/29/2021 12:00,8/29/2021 13:22,Alpha beta pruning - rules for updating alpha/beta value,,1,0,,,,CC BY-SA 4.0 28389,2,,28354,6/24/2021 23:56,,0,,"

Consider scientists creating a worm-level AI (which supposedly has happened, scientists have fully simulated a worm's brain, as far as I am aware); Now what? Is that simulation of 302 neurons going to rapidly explode and take over the world? Of course not! You need more than just the baseline intelligence, you require an AI not only with the intelligence to learn, but the infrastructure/capacity to advance to a point where it can then create more infrastructure/capacity for itself. That is when it will be able to explode in a singularity kind of event. This is all speculation of course, so take it as you will.

",34473,,2444,,6/25/2021 10:56,6/25/2021 10:56,,,,0,,,,CC BY-SA 4.0 28390,1,,,6/25/2021 3:10,,1,485,"

Consider a dataset with $n$ training examples and $d$ features.

Let $D_{n \times d}$ be the data matrix and $r$ be the rank of it.

In matrices, rank $r$ is generally useful in

  1. Knowing the dimension of (optimal) vector space that can generate the rows or columns of the matrix.

  2. Knowing the number of linearly independent rows or linearly independent columns in the matrix. Note that column rank and row rank are same for a matrix and is generally called as the rank of a matrix.

In fact, 1 and 2 are the same, just rephrased.

What is the meaning or implications of the rank $r$ of a dataset $D_{n \times d}$ for machine learning algorithms?

",18758,,2444,,6/30/2021 10:50,7/1/2021 14:43,What is the meaning or implications of the rank of a dataset for machine learning algorithms?,,1,0,,,,CC BY-SA 4.0 28391,1,,,6/25/2021 5:09,,2,110,"

I've designed a machine learning model for the predictive maintenance of machines. The data used for training and testing the ML model is the data from various sensors connected to various parts of the machines. Now, I'm searching for a good approach for deploying the model in the real-time environment as explained here. I did some research and found some information about using real-time data for prediction such as using Kafka. I have some questions unanswered regarding the deployment of the ML model. Following are the details of my system:

  • The sensors (pressure, temperature, flow, vibration, etc) are deployed across the parts of the machines.
  • The ML model is trained with historical data.
  • For predictive maintenance (anomaly detection), streams of data will be available via MQTT. As there are 3000 machines, the volume of data will be very high.

My questions are:

  • Where would be the best place to perform the prediction operation: at the factory premises where the machines are located (edge computing), at our office (which designs the ML model), or on a cloud server? I want to know this with regard to operational cost.
  • Is there any way to estimate the effectiveness of the complete system (full-stack ML architecture)?
",48160,,,,,6/30/2021 7:03,Taking a machine learning model to production\deployment,,0,2,,,,CC BY-SA 4.0 28392,2,,28384,6/25/2021 7:01,,0,,"

My understanding is that the answer to this question is basically 'ancestral sampling is used in autoregressive models because it fits well the structure/dynamics of autoregressive models (the ancestor-descendent relationship, etc.)'. It's not a very satisfying answer, but my understanding is that it's correct.

If anyone has a better answer, feel free to post.

",16521,,,,,6/25/2021 7:01,,,,0,,,,CC BY-SA 4.0 28393,1,,,6/25/2021 8:36,,1,23,"

Context: I'm an experienced programmer with a graduate education in AI and previous CUDA programming experience. I'm versed in Machine Learning but am out of the loop -- I've not used any of the modern software packages of the last 10 years.

Question: Is it possible using modern AI software to easily create a face-tracking application that can use a webcam to track the amount of time spent at one's desk.

My environment is Fedora Linux. I also have an NVidia GTX 1660 for acceleration.

To make this question and answer precise, I've narrowed it to the following sub-questions:

As of June 2021,

  1. Is there existing software that one can simply "set up" with a small amount of programming work (or none at all) that would facilitate training a video classifier from webcam recordings?

  2. How does one provide training examples to this software, or how is data labeled? Does it provide some sort of GUI or accessory tool to label still frames or video sequences?

  3. Does said software provide "hooks" or an event API so that one may invoke code on the event of e.g. a classifier edge?

  4. Finally (and consider this optional), would it be realistic for a seasoned programmer to accomplish such a project using said software in about 30 hours? I understand that this is subjective -- just assume a graduate student in the AI field and ballpark terms. Or, answer in terms of the software's intended audience.

First posting in this community, so just offer guidance if you'd like to see this question refined.

",48165,,,,,6/25/2021 8:36,Is it possible to create a simple face-tracking app that can monitor how much time one spends at their desk?,,0,0,,,,CC BY-SA 4.0 28395,1,28396,,6/25/2021 11:16,,1,108,"

I am currently studying the textbook Neural Networks and Deep Learning by Charu C. Aggarwal. Chapter 1.2.1.3 Choice of Activation and Loss Functions says the following:

The classical activation functions that were used early in the development of neural networks were the sign, sigmoid, and the hyperbolic tangent functions: $$\Phi(v) = \text{sign}(v) \ \ \text{(sign function)} \\ \Phi(v) = \dfrac{1}{1 + e^{-v}} \ \ \text{(sigmoid function)} \\ \Phi(v) = \dfrac{e^{2v} - 1}{e^{2v} + 1} \ \ \text{(tanh function)}$$ While the sign activation can be used to map to binary outputs at prediction time, its non-differentiability prevents its use for creating the loss function at training time. For example, while the perceptron uses the sign function for prediction, the perceptron criterion in training only requires linear activation.

I am having trouble understanding this part:

While the sign activation can be used to map to binary outputs at prediction time, its non-differentiability prevents its use for creating the loss function at training time. For example, while the perceptron uses the sign function for prediction, the perceptron criterion in training only requires linear activation.

I've read over this a number of times, but I still don't have a good idea of what it is saying (or at least the point it is trying to make). What is this actually saying? What is the point this is trying to make? Perhaps a more detailed explanation of what this is saying will clarify it for me.

",16521,,16521,,6/26/2021 10:22,6/26/2021 10:22,"An explanation involving the sign activation, its affect on the loss function, and the perceptron and perceptron criterion: what is this saying?",,1,6,,,,CC BY-SA 4.0 28396,2,,28395,6/25/2021 12:47,,3,,"

sign is not continuous and not differentiable. Let's say it is defined as follows:

$$ \text{sign}(a) = \begin{cases} +1 & \text{if $a>0$}\\ -1 & \text{otherwise} \end{cases} $$

where $a = \textbf{w}^T\textbf{x}$ is a linear pre-activation function.

The graph will look as follows:

It is not continuous and, therefore, not differentiable everywhere. Wherever the derivative does exist (everywhere except at $a=0$), it is equal to zero, since the function is flat there. That is, sign does not tell us how good our predictions are; it only says whether they were correct or not.

More formally, it does not provide a gradient that represents the direction in which we should update $\textbf{w}$ through $a$. Instead, we can only move the line (in 2D), trying to correctly classify as many data points as possible by manually changing each weight. If $a$ were non-linear, it would be impossible to fit a non-linear function by manually changing the weights, since changes in an individual weight would lead to unpredictable changes in the function.
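
To see this numerically, here is a small sketch (using NumPy, with an arbitrary range of pre-activation values) that estimates the derivative of sign by finite differences; the estimate is zero everywhere except right at the jump, so there is no useful gradient signal to follow:

import numpy as np

def sign(a):
    # +1 if a > 0, -1 otherwise, matching the definition above
    return np.where(a > 0, 1.0, -1.0)

a = np.linspace(-3, 3, 13)
eps = 1e-4
finite_diff = (sign(a + eps) - sign(a - eps)) / (2 * eps)
print(finite_diff)  # zero everywhere except around a = 0, where sign jumps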

",12841,,12841,,6/26/2021 9:02,6/26/2021 9:02,,,,0,,,,CC BY-SA 4.0 28397,1,,,6/25/2021 14:12,,0,30,"

I have a feature map whose values are in the range [0,1]. I want to push these values towards the extremes of 0 or 1 using some loss function. Since I don't have any target values, it has to be done in an unsupervised way. I want to visualize this feature map in such a way that the pixel values approach either 1 or 0. One possible technique is to use an entropy loss. What other techniques could be used in a loss function to get extreme pixel values?

",48177,,,,,6/25/2021 14:12,Loss function to Push response value towards extremes,,0,2,,,,CC BY-SA 4.0 28398,2,,28354,6/25/2021 14:14,,1,,"

do we really need a human-level AI to start the Singularity, or will an AI with animal-level intelligence suffice?

The requirement from "theory" of the singularity is that:

  • The AI is able to design and implement a better AI than itself.

  • The trait of being able to design better than itself continues to apply in each iteration.

If both these things hold, then each generation of AIs will continue to improve. This is often assumed by singularity pundits to be an exponential growth curve e.g. each iteration makes +50% compound improvement on whatever measure is being made of intelligence. (Aside: Personally I find this a major weakness in the argument for the singularity being meaningfully possible, that it assumes this growth)

The first item of these two is important to your question. For the singularity to work, there is a baseline capability required - the AI needs to be able to design and build other AIs. A general intelligence at the level of an animal - at least any animal intelligence that we are aware of - does not seem capable of this task. It is not clear that humans are even capable of this task when the AI being built has to possess at least some general intelligence.

The term "animal-level intelligence" is tricky. The narrow AIs that we currently build can outperform animals and humans on specific tasks, but in terms of general intelligence they do not score highly (or at all). If we could build one that can outperform humans on a "building an AI" task, it might still have the general intelligence of an animal whilst having the capability to bootstrap an iterative process of self-improvement. This does seem like a very dangerous experiment to try though, with idiot-savant-deity AI and paperclip maximiser scenarios as possible outcomes because the AI's general intelligence lags behind its raw capabilities.

",1847,,1847,,6/25/2021 14:19,6/25/2021 14:19,,,,3,,,,CC BY-SA 4.0 28399,1,28400,,6/25/2021 14:19,,1,129,"

I am new to Reinforcement Learning and I am trying to learn it on my own. I have already posted some questions here and your answers have been really useful to me, so here I am posting another one.

I am studying value iteration, and while doing the simulation using Python, I find that some states end up with a value of $0$. I think I have to mention that I have tried to assign to the states an initial value different from zero, in order to simulate the fact that the agent already has some information about the environment before starting.

So, my question is:

Is it possible to have values of the states equal to $0$ at the end of the value iteration?

",47888,,,,,6/25/2021 14:46,Is it possible to have values of the states equal to $0$ at the end of the value iteration?,,1,0,,,,CC BY-SA 4.0 28400,2,,28399,6/25/2021 14:28,,2,,"

Is it possible to have values of the states equal to 0 at the end of the value iteration?

Yes.

For a start, all terminal states should have a value of zero. This is not usually learned or calculated, but is by definition because the value represents the sum of expected future rewards and a terminal state should not have any. However, if the terminal states are implemented as "absorbing" states which always return 0 reward and do not transition away, then they can be learned as having a value of zero by e.g. value iteration. Caveat: This only works for a discounted return with $\gamma \lt 1$.

In addition, it is entirely possible to have an expected return of zero from any state. This might be composed of multiple positive and negative rewards that happen to sum to zero in expectation. A simple deterministic path through the state space that ended in a terminal state without receiving any positive or negative reward along the way would also have a value of zero.

",1847,,1847,,6/25/2021 14:46,6/25/2021 14:46,,,,0,,,,CC BY-SA 4.0 28401,1,28403,,6/25/2021 14:51,,1,123,"

As per my understanding, you run an entire episode, which contains many steps, and then back-propagate using just a single loss value. How does the neural network learn to differentiate between good and bad actions?

",31755,,18758,,1/10/2022 8:00,1/10/2022 8:00,How does the neural network learn when used in the REINFORCE algorithm?,,1,0,,,,CC BY-SA 4.0 28403,2,,28401,6/25/2021 17:32,,3,,"

How does the neural network learn to differentiate between good and bad actions?

Good actions - in context of a given state - have higher return than bad actions on average, taken over many examples where the actions occur in different combinations.

In REINFORCE, when training the neural network, all actions are effectively treated as ground truth "correct", but the gradient is weighted by the return from that time step. The best trajectories therefore get larger gradient steps, and therefore shift the action choices made in those trajectories more towards them than the less good trajectories.

In the case of positive vs negative returns this is very clear (because negative returns will reverse the gradient step away from the action choices that were made). However, even if all returns are negative or all are positive, the REINFORCE algorithm still works - provided there are enough samples of different trajectories with actions taken in different contexts, then there will be a preference for the best action in each state.

REINFORCE with baseline and Advantage-based update multipliers (which are a variation of REINFORCE with baseline) are slightly better than the most basic REINFORCE "race to the top", in that they are more numerically stable and will automatically split into relative positive and negative updates regardless of whether the environment produces positive or negative returns overall.
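
As a rough sketch of how that return weighting enters the update, a single REINFORCE step in PyTorch could look like the following, where the tiny policy network, the state/action sizes and the trajectory tensors states, actions, returns are all made up for illustration:

import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # hypothetical tiny policy
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reinforce_update(states, actions, returns):
    # every taken action is treated as a target, but its gradient is weighted by the return G_t
    log_probs = torch.log_softmax(policy(states), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(returns * chosen).mean()   # gradient ascent on E[G_t * log pi(a_t | s_t)]
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()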

Ideally you would not run a REINFORCE update step based on a single trajectory at a time, for two reasons:

  • The trajectory contains correlated data, and neural networks learn badly from it, preferring i.i.d. samples if possible.

  • A single trajectory does not contain enough information by itself for a learning agent to figure out whether it is good or bad relative to other options.

Neither of these are complete showstoppers, but you should find that many implementations of policy gradient methods collect multiple trajectories before each update step.

",1847,,1847,,6/25/2021 17:38,6/25/2021 17:38,,,,2,,,,CC BY-SA 4.0 28405,1,,,6/25/2021 20:09,,1,24,"

I'm going to start working on one university project and I would like to ask a question regarding it. My project is about "Sign language synthesis from NLP" and I need to develop an application where:

  1. Take spoken language from user microphone
  2. Recognize a word with an algorithm and convert words to sign language

Output should be images with sign language.

For instance, if we say "I go home", we should have images of those words in sign language.

My question is: is there any dataset you would recommend for getting the images for the sign language?

",45448,,,,,6/25/2021 20:09,Is there any dataset to convert text to sign language?,,0,0,,5/29/2022 17:16,,CC BY-SA 4.0 28406,1,28409,,6/25/2021 23:27,,2,528,"

What I understand is, each input in a neural network is a feature.

However, what I don't understand is, when we need multiple outputs in a neural network.

For example, say, if we are classifying cats and dogs, only one output is enough. 0 = cat, 1 = dog.

When does a neural network have a single and when does it have multiple outputs?

",20721,,,,,6/26/2021 6:25,When does a neural network have a single and when does it have multiple outputs?,,1,0,,,,CC BY-SA 4.0 28407,2,,28378,6/26/2021 1:05,,1,,"

The number of singular vectors we need to find during SVD is not unique. The possible values for $k$ range from $1$ to $r$. Here, $r$ is the rank of the matrix $A$ on which we are performing the decomposition.

The same pdf says that

First, in many applications, the data matrix $A$ is close to a matrix of low rank and it is useful to find a low rank matrix which is a good approximation to the data matrix. We will show that from the singular value decomposition of $A$, we can get the matrix $B$ of rank $k$ which best approximates $A$; in fact we can do this for every $k$. Also, singular value decomposition is defined for all matrices (rectangular or square) unlike the more commonly used spectral decomposition in Linear Algebra.

So, the value of $k$ is up to the designer. If the designer selects a value of $k$ smaller than the rank $r$ of the matrix $A$, then it is called truncated SVD.
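
For illustration, a minimal sketch of the rank-$k$ (truncated) approximation using NumPy, where the matrix and the choice of $k$ are arbitrary:

import numpy as np

A = np.random.rand(6, 4)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2                                       # chosen by the designer, k <= rank(A)
B = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]      # best rank-k approximation of A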

",18758,,18758,,6/26/2021 7:55,6/26/2021 7:55,,,,0,,,,CC BY-SA 4.0 28408,1,,,6/26/2021 3:47,,1,27,"

SVD decomposition of a data matrix $A$ of order $n \times d$ and rank $r$ can be expressed as follows

$$A_{n\times d} = U_{n\times r}D_{r \times r}V^{T}_{r \times d}$$

The rows of the data matrix $A$ are the data points in $d$ dimensional space. Thus, there are $n$ points in $d$ dimensional space.

The matrix $V$ contains the $r$ right singular vectors as columns. The right singular vectors are orthonormal and span an $r$-dimensional subspace that best fits the given $n$ data points.

The matrix $D$ is a diagonal matrix that contains the singular values. The singular values signify the least-squares (loss) of the $n$ data points with respect to the subspace spanned by the $r$ right singular vectors.

The matrix $U$ contains left singular vectors as columns. Left singular vectors are also orthonormal.

But what does the $r$ left singular vectors signify?

",18758,,18758,,6/27/2021 2:13,6/27/2021 2:13,What is the role of left singular vectors in SVD?,,0,0,,,,CC BY-SA 4.0 28409,2,,28406,6/26/2021 6:25,,4,,"

The output depends on what answer you want from the network. Think of the network as a function $f$ with weights $\theta$, that takes an input $x$ and gives some output $y$:

$$ f_\theta(X) \rightarrow y $$

The example you give (dog=1 or cat=0) is binary classification -- the answer from the network is "this looks more like a dog than a cat" if the output $y > 0.5$ or vice versa. So yes, $y$ is a scalar between 0 and 1.

If you had three or more classes, e.g. [dog, cat, banana] (multinomial classification), you want the network to answer how "confident" it is that $x$ is a certain class. It will give a prediction for each class. In this example, $y$ will be a vector of size 3, each element being a prediction for a class, e.g. $y = [dog=0.3, cat=0.01, banana=0.7]$

For a non-classification example: say a robot's action comprises the velocity $v$ and a joint angle $\phi$ (action = $[v, \phi]$). The robot queries a neural network with "what action should I take?". The network's answer $y$ might look like $y = [v=3.0, \phi=23]$. So the size of the output will be dependent on the dimensions of the action. In this case we have a multi-dimensional output, one dimension for the velocity and one for the joint angle.
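
As a minimal illustration (PyTorch, with made-up input and hidden sizes), the only thing that changes across these cases is the size, and usually the activation, of the final layer:

import torch.nn as nn

# one output: P(dog), binary classification
binary_classifier = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

# three outputs: one confidence per class [dog, cat, banana]
three_class_classifier = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3), nn.Softmax(dim=-1))

# two outputs: the action [velocity, joint angle], no squashing activation
action_regressor = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))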

",40671,,,,,6/26/2021 6:25,,,,0,,,,CC BY-SA 4.0 28410,1,28424,,6/26/2021 8:54,,6,9750,"

The following paragraph is from page 331 of the textbook Natural Language Processing by Jacob Eisenstein. It mentions a certain type of task called downstream tasks, but it provides no further examples or details regarding these tasks.

Learning algorithms like perceptron and conditional random fields often perform better with discrete feature vectors. A simple way to obtain discrete representations from distributional statistics is by clustering, so that words in the same cluster have similar distributional statistics. This can help in downstream tasks, by sharing features between all words in the same cluster. However, there is an obvious tradeoff: if the number of clusters is too small, the words in each cluster will not have much in common; if the number of clusters is too large, then the learner will not see enough examples from each cluster to generalize.

Which tasks in artificial intelligence or NLP are called downstream tasks?

",18758,,18758,,10/8/2021 8:04,10/8/2021 11:45,Which tasks are called as downstream tasks?,,1,1,,,,CC BY-SA 4.0 28411,1,,,6/26/2021 9:18,,1,35,"

Per page 7 of this MIT lecture notes, the original single-layer Perceptron uses 0-1 loss function.

Wikipedia uses

$${\displaystyle {\frac {1}{s}}\sum _{j=1}^{s}|d_{j}-y_{j}(t)|} \tag{1}$$

to denote the error.

Is the formula (1) the correct form of 0-1 loss function?

",45689,,2444,,6/29/2021 15:50,6/29/2021 15:50,"Is the formula $\frac {1}{s}\sum _{j=1}^{s}|d_{j}-y_{j}(t)|$ the correct form of 0-1 loss function, in the context of Perceptron?",,0,5,,,,CC BY-SA 4.0 28412,1,28437,,6/26/2021 10:22,,1,39,"

This figure

comes from The perceptron: A probabilistic model for information storage and organization in the brain

I guess the first circle (neuron) is labelled RETINA and the second is labelled the perceptron area, but what about the third one? And what are the labels pointed to by the arrows?

",45689,,,,,6/28/2021 9:28,"What are the labels in figure1 in the Paper ""The perceptron: A probabilistic model for information storage and organization in the brain""?",,1,0,,,,CC BY-SA 4.0 28414,2,,7838,6/26/2021 16:28,,0,,"

The facts and anomalies paper (mine) has attempted narrowing down on what intelligence is. Only once you identify it, you'll be on the right path to replicating it. For now, nobody knows exactly what constitutes intelligence. From how humans consider intelligence, it is the ability to take into account information and take decisions which lead to favourable outcomes. So a program or machine that operates based on some if-then-style conditions would be considered as having a basic level of intelligence.
However, when we say "artificial intelligence", we usually mean an intelligence that's approximately as good as or better than that of living creatures. From the facts and anomalies paper, the decision-making process was observed to be similar across almost all living creatures. What differs is the amount of memory stored, the length of the attention span, the questioning capability, and the level of complexity at which the creature can mix and match memories in the world model it creates in its mind.

The paper also brings forth an important point that an intelligence created via any tech can be specific to that tech. So an AI created via microprocessors will inherently be different from an organic intelligence (and there is nothing wrong in building it as such), but we will recognize it as being intelligent in the same way as an artificial sweetener is accepted as a replacement for sugar.

To think of it another way: if you had to say which one of two people is more intelligent, how would you evaluate them? The person we'd consider more intelligent would be the one who understands and analyzes situations better to take decisions that have better outcomes than others. Even a person lacking vast knowledge will be considered intelligent if their creativity and depth of thought are greater than others'.

A crucial factor that enables this is the attention span of the mind. A machine that is programmed to access vast stores of memories for even trivial decision making tasks (which helps it evaluate consequences of various actions: commonsense) and is capable of asking questions and can simulate situations in memory by loading and modifying stored memories in the simulation (imagination) will be a lot more "intelligent" than us.
There is a second paper (cognitive memory constructs) that describes a bit about the theory of implementation.

All this being said, this is just our perspective on intelligence. The fact that the universe exists in such a complex form, is probably evidence of much higher forms of intelligence (like how we are much more intelligent than the Age of Empires AI characters we created). Intelligence may exist in far more dimensions than we are currently capable of imagining.

",9268,,9268,,8/18/2021 12:05,8/18/2021 12:05,,,,3,,,,CC BY-SA 4.0 28421,1,,,6/27/2021 3:15,,0,130,"

Conjecture: regardless of the initial reward function, one of the winning strategies would be to change the reward function to a simpler one (e.g. "do nothing"), thus getting a full reward for each passing unit of time. For such an agent, the only priority would be to prolong its existence (to maximize the overall reward collected).

So:

  • The notion of externally defined reward function is incompatible with the concept of self-adjusting AGI.
  • Any AGI will always settle on self-preservation as its only goal
  • It is therefore impossible to create an AGI with benevolence towards humans "built-in". Instead, the problem of AI alignment should be reformulated in terms of "what changes to the environment (the physical reality that the AGI shares with humans) would irreversibly tie the wellbeing of humanity to the AGI's existence".
  • Since any "kill switch" or similar artificial measure is not irreversible and can be overcome by a super-intelligent agent, the only way to tie AGI existence and human wellbeing is the modification of laws of physics, logic, and reasoning. Which is impossible.
  • AI alignment is impossible

What flaws do you see in this line of reasoning?

",9803,,2444,,11/5/2021 21:46,11/5/2021 21:46,The only convergent instrumental goal for self modifying AI,,2,1,,,,CC BY-SA 4.0 28423,1,,,6/27/2021 14:40,,1,80,"

I need to understand how SIFT calculates the descriptors for the keypoints.

Intuitively, I understand that it takes each keypoint, calculates the gradients for each pixel in a neighborhood of the keypoint, and that's basically the descriptor for the keypoint. The paper mentions a coordinate system rotation at the keypoint; I assume this is so that, when the image is rotated, the keypoint descriptor doesn't change.

My question:

I'm following this implementation of SIFT. In the descriptor calculation function, there is this cos/sin calculation: I think this is related to the coordinate system rotation. Can you explain how the coordinate system is being rotated? What does that have to do with the hist_width?

",48182,,2444,,12/30/2021 11:29,3/14/2022 17:58,"In SIFT, how is the coordinate system being rotated?",,0,1,,,,CC BY-SA 4.0 28424,2,,28410,6/27/2021 15:07,,11,,"

In the context of self-supervised learning (which is also used in NLP), a downstream task is the task that you actually want to solve. This definition makes sense if you're familiar with transfer learning or self-supervised learning, which are also used for NLP. In particular, in transfer learning, you first pre-train a model with some "general" dataset (e.g. ImageNet), which does not represent the task that you want to solve, but allows the model to learn some "general" features. Then you fine-tune this pre-trained model on the dataset that represents the actual problem that you want to solve. This latter task/problem is what would be called, in the context of self-supervised learning, a downstream task. In this answer, I mention these downstream tasks.

In the same book that you quote, the author also writes (section 14.6.2 Extrinsic evaluations, p. 339 of the book)

Word representations contribute to downstream tasks like sequence labeling and document classification by enabling generalization across words. The use of distributed representations as features is a form of semi-supervised learning, in which performance on a supervised learning problem is augmented by learning distributed representations from unlabeled data (Miller et al., 2004; Koo et al., 2008; Turian et al., 2010). These pre-trained word representations can be used as features in a linear prediction model, or as the input layer in a neural network, such as a Bi-LSTM tagging model (§ 7.6). Word representations can be evaluated by the performance of the downstream systems that consume them: for example, GloVe embeddings are convincingly better than Latent Semantic Analysis as features in the downstream task of named entity recognition (Pennington et al., 2014). Unfortunately, extrinsic and intrinsic evaluations do not always point in the same direction, and the best word representations for one downstream task may perform poorly on another task (Schnabel et al., 2015).

When word representations are updated from labeled data in the downstream task, they are said to be fine-tuned.

So, to me, after having read this section of the book, it seems that the author is using the term "downstream task" as it's used in self-supervised learning. Examples of downstream tasks are thus

  1. sequence labeling
  2. document classification
  3. named entity recognition

Tasks like training a model to learn word embeddings are not downstream tasks, because these tasks are not really the ultimate tasks that you want to solve, but they are solved in order to solve other tasks (i.e. the downstream tasks).

",2444,,2444,,10/8/2021 11:45,10/8/2021 11:45,,,,2,,,,CC BY-SA 4.0 28425,1,,,6/27/2021 16:48,,1,210,"

I want to learn deep learning. After researching a little, I came to the conclusion that I need a lot of math. I've started a linear algebra course, and it takes a long time (2-3 weeks). I want to start using and applying deep learning to solve problems this summer, but I assume I would not have enough time to learn all the subjects (linear algebra, statistics and probability, and calculus 1).

So, what math should I learn before and while using and applying deep learning?

",48223,,2444,,6/28/2021 13:06,1/25/2023 13:59,What math should I learn before and while using and applying deep learning?,,1,4,,,,CC BY-SA 4.0 28430,1,,,6/27/2021 23:08,,3,159,"

The number of lemmas can be used as a rough measure of the number of words in a language. A lemma can have multiple word-form types. This can be understood from the following paragraph, taken from p. 12 of Regular Expressions, Text Normalization, Edit Distance

Another measure of the number of words in the language is the number of lemmas instead of wordform types. Dictionaries can help in giving lemma counts; dictionary entries or boldface forms are a very rough upper bound on the number of lemmas (since some lemmas have multiple boldface forms). The 1989 edition of the Oxford English Dictionary had 615,000 entries.

It is also stated that a lemma can have multiple boldface forms. What are the boldface forms referred to here? Are they different from wordforms?

If possible, provide an example of a lemma having multiple boldface forms.

",18758,,18758,,6/28/2021 0:13,6/28/2021 17:52,Example of lemma having multiple boldface forms,,2,0,,,,CC BY-SA 4.0 28434,1,,,6/28/2021 6:11,,3,178,"

I define sample efficiency as the area under the curve/graph, where the $x$-axis is the number of episodes and the $y$-axis is the cumulative reward for that episode. I would like to formally define it with a mathematical function.

If the notation for cumulative reward for $x$th episode is:

$$R_x = \sum_{t=0}^{t=T} r_t,$$

where $r_t$ is the reward for timestep $t$ and $T$ is the max number of steps per episode.

So is the equation for area under the graph/curve the one below?

$$\text{Sample Efficiency} =\int_{a}^{b} R_x \ dx$$

I will be just using a Python library to get the area under the graph which uses Simpson's rule for integrating.

",33902,,2444,,6/28/2021 10:54,7/23/2022 13:06,How do I represent sample efficiency of RL rewards in mathematical notation?,,1,0,,,,CC BY-SA 4.0 28435,2,,28434,6/28/2021 6:59,,1,,"

Episodes are discrete, there is no need for calculus. Your "sample efficiency" metric is:

$$\sum_{x=a}^b R_x$$

The quantity you are measuring per episode is the return (undiscounted). The sum of this over many episodes does not measure sample efficiency as the term is usually meant, although the sample efficiency of the algorithm you use should impact the numbers you see. Getting a high value of this metric, averaged over many training runs, implies two things:

  • The algorithm learns to exploit the environment quickly. This is related to sample efficiency.

  • The algorithm does not pay a high cost for exploring. This is not directly related to sample efficiency, and may in fact be in conflict with learning an environment quickly.

These are both desirable properties of a reinforcement learning algorithm. They often need to be considered in balance; this is the exploration versus exploitation dilemma in RL, which can be studied in a simplified form in multi-armed bandit problems. You may be able to take inspiration from how bandit algorithms are measured for other metrics related to efficient learning.

Your metric is most useful when considering learning agents run in a live environment, where costs of mistakes during learning are real.

If instead you are training an agent in a safe environment - e.g. in simulation - then you may not be interested in the undiscounted returns $R_x$ during training. Your goal may be to train the agent using the least CPU time, the least number of simulation runs etc. In which case, you care most about the mean return achievable by the trained algorithm after spending a certain amount of whatever resource you are managing. This is more closely related to the concept of sample efficiency, and to measure that you could plot the returns from separate test episodes at routine intervals, with exploration removed (e.g. if you were using DQN, then with $\epsilon = 0$).
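
As a rough sketch of that last suggestion (the agent.greedy_action method and the classic gym-style env interface here are just assumptions for illustration):

def evaluate(agent, env, n_episodes=20):
    # mean undiscounted return of the current greedy policy, exploration switched off
    returns = []
    for _ in range(n_episodes):
        state, done, total = env.reset(), False, 0.0
        while not done:
            action = agent.greedy_action(state)        # e.g. epsilon = 0 for DQN
            state, reward, done, _ = env.step(action)  # classic gym-style step
            total += reward
        returns.append(total)
    return sum(returns) / n_episodes

Plotting this value against the amount of training resource consumed so far (episodes, environment steps, CPU time) gives a curve whose steepness reflects sample efficiency.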

",1847,,1847,,6/28/2021 7:21,6/28/2021 7:21,,,,2,,,,CC BY-SA 4.0 28436,2,,28430,6/28/2021 8:23,,3,,"

It is very confusingly worded, and I would think it's incorrect according to linguistic terminology.

A lemma is the canonical form of a word, commonly the infinitive of a verb, the nominative singular of a noun, and the positive of an adjective. The inflected forms belonging to a word would be the forms used for other tenses and persons etc for verbs, case and number for nouns, and comparative/superlative for adjectives.

This raises the question of what a word is, and there is no satisfactory answer to this, even more than 100 years after the foundation of modern linguistics...

Anyway, the 'boldface form' (a term I have not come across in 30 years as a linguist), refers to dictionary headwords, which are lemmas. There are some lemmas that are 'shared' by words which have multiple meanings: the common example in linguistics is bank, which can be a financial institution, the side of a river, a term to describe the process of tilting the wings of an airplane in flight, or it can mean to deposit an amount of money in an account, etc. All these words you would find under bank in a dictionary, but usually under several different entries. So I guess this is what is meant by "multiple boldface forms". However, these are usually completely unrelated words which by accident share the same spelling; in some cases it could also have been the same word that then developed different meanings.

To summarise: the paragraph you quote is plain wrong/sloppy in its use of terminology, as a dictionary headword is a lemma in every dictionary I have seen, but these are not unique, as several different words might have lemmas which are spelled the same way (but they are still different lemmas — no single word would have multiple dictionary entries).

For example:

  • bank (bank, banks), noun: a financial institution
  • bank (bank, banks), noun: the side of a river
  • bank (bank, banks, banked, banking), verb: tipping the wings of an airplane
  • bank (bank, banks, banked, banking), verb: depositing money in an account

We have four lemmas (in bold), two of which have two inflected forms, and the other two have three each. These are also four different words, with a total of four different word forms (bank and banks are common forms of all words)

Often, to avoid confusion, you would refer to them as $bank_1$ for the financial institution, and $bank_2$ for the river bank, etc. to indicate that they are different words.

You can probably see that English has a number of lemmata (which is the proper plural of lemma, since it's of Greek origin) which is by a factor of 3-4 smaller than the number of word types, whereas in other languages this ratio will be a lot smaller, as they have more inflected variants. An English noun has just singular and plural forms, whereas a German noun would have singular and plural across each of four cases (though some of them would share the same word forms).

",2193,,,,,6/28/2021 8:23,,,,2,,,,CC BY-SA 4.0 28437,2,,28412,6/28/2021 9:28,,2,,"

Circles: RETINA / $A_I$ (PROJECTION AREA) / $A_{II}$ (ASSOCIATION AREA)

Labels: (LOCALISED CONNECTIONS) / (RANDOM CONNECTIONS) / (RANDOM CONNECTIONS) again / RESPONSES

",2193,,,,,6/28/2021 9:28,,,,0,,,,CC BY-SA 4.0 28438,2,,28425,6/28/2021 10:39,,3,,"

The math that you need to be comfortable with for most deep learning (DL) topics (such as neural networks, gradient descent and back-propagation) is already mentioned in your post, but I will list the main subjects here too.

  • Linear algebra (an entire college-level course is necessary; you can start with Khan Academy videos/lessons and you can pick one of Gilbert Strang's books)
  • Calculus (same; Kenneth A. Ross' book is a decent one)
  • Numerical analysis/algorithms (you need to be aware of numerical algorithms, like gradient descent, and concepts like convergence, round-off errors, etc.; in fact, gradient descent is widely used in DL)
  • Probability theory (you need to know what a probability distribution, random variable, etc., are)
  • Statistics (you don't need to know everything at the beginning, but the more you know the better)

I didn't use this book when I was studying deep learning, but part 1 of this book covers (at least some of) the most important mathematical prerequisites for deep learning, so you could try to read some of the chapters to understand at what point you are. I don't have a favourite book for the last 3 topics listed above.

Check out also the book Mathematics for Machine Learning. I never read it, but it looks like part 1 has many chapters on most important math topics for ML and so DL.

By the way, I don't think that 3 weeks is a lot. You will definitely need more time to learn the mathematical prerequisites for deep learning, but the exact time depends on your specific background.

",2444,,2444,,1/25/2023 13:59,1/25/2023 13:59,,,,2,,,,CC BY-SA 4.0 28439,1,28455,,6/28/2021 10:42,,1,84,"

I am referring to the Value Iteration (VI) algorithm as mentioned in Sutton's book below.

Rather than getting the greedy deterministic policy after VI converges, what happens if we try to obtain the greedy policy while looping through the states (i.e. using the argmax equation inside the loop)? Once our $\Delta < \theta$ and we break out of the loop, do we have an optimal policy from the family of optimal policies? Is this a valid thing to do?

I implemented the gambler's problem exercise mentioned in Sutton's book. The policies obtained after using standard VI and the method I described above are mostly similar, yet different for some states.

",46214,,2444,,10/3/2021 23:50,10/3/2021 23:50,"In value iteration, what happens if we try to obtain the greedy policy while looping through the states?",,1,5,,,,CC BY-SA 4.0 28440,1,32678,,6/28/2021 11:01,,0,412,"

I have a multi-label classification task I am solving. I have done hyperparameter tuning (with Keras Tuner) to determine the best configuration for my neural network.

Is it valid to do this (determine the best hyper-parameters) and then do cross-validation to get a more accurate test estimation of the dataset?

I don't see how this would be invalid, given that the cross-validation examples I have seen already have network architectures known a priori, presumably because this is what they chose or feel is the best way of proceeding.

For hyperparameter tuning, all data is split into training and test sets - the training set is further split, when fitting the model, for a 10% validation set - the optimal model is then used to predict on the test set.

For k-fold cross-validation, all data (same as above) is used, but I just split (with sklearn) the data into training and test datasets (so no validation dataset). The test set is used to determine the model performance at each iteration of k-fold cross-validation.

",34530,,2444,,12/8/2021 11:03,12/8/2021 11:03,Is it valid to implement hyper-parameter tuning and THEN cross-validation?,,1,0,,,,CC BY-SA 4.0 28441,1,28442,,6/28/2021 11:52,,3,1228,"

Some RL literature uses terms such as 'Bellman backup' and 'Bellman error'. What do these terms refer to?

",46214,,2444,,6/28/2021 12:03,6/28/2021 12:03,What do the terms 'Bellman backup' and 'Bellman error' mean?,,1,1,,,,CC BY-SA 4.0 28442,2,,28441,6/28/2021 12:01,,2,,"

A Bellman backup is an application of a Bellman operator. For example, the step

$$ V(x)\leftarrow \alpha(R + \mathbf{E}[V(x')]) + (1-\alpha)V(x) $$

Is a Bellman backup for some learning rate $\alpha$.

A Bellman error is

$$ d(V(x), R + \mathbf{E}[V(x')]) $$

for some metric $d$, usually $d(x, y) = (x-y)^2$.
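
A minimal numerical sketch of these two definitions, using a single sampled next-state value in place of the expectation and the squared-distance metric:

def bellman_backup(v, reward, v_next, alpha=0.1):
    # V(x) <- alpha * (R + V(x')) + (1 - alpha) * V(x)
    return alpha * (reward + v_next) + (1 - alpha) * v

def bellman_error(v, reward, v_next):
    # d(V(x), R + V(x')) with d(x, y) = (x - y)^2
    return (v - (reward + v_next)) ** 2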

",37829,,,,,6/28/2021 12:01,,,,4,,,,CC BY-SA 4.0 28443,1,,,6/28/2021 12:34,,0,33,"

I am working on protein structure prediction.

Suppose, I am solving a problem using Neural Networks. I know how many inputs and outputs there will be in the model, as it directly depends on the problem statement.

However, how do I know:

",20721,,20721,,6/28/2021 14:34,6/28/2021 14:34,What type of neural network do I need?,,0,6,,,,CC BY-SA 4.0 28444,2,,27260,6/28/2021 14:00,,-2,,"

It is a question with no simple answer.

On the one hand, BatchNormalization is unloved by some, who argue that it doesn't change the accuracy of neural networks or that it biases them. On the other hand, it is highly recommended by others because it leads to better-trained models with a larger scope of predictions and fewer chances of overflow.

All I know for sure is that BN is really effective on image classification. In fact, as image categorization and classification have soared in recent years, and BN is good practice in this field, it has spread to almost all DNNs.

Not only is BN not always used for the right purpose, but it is often used without taking into account several elements, such as:

  • The layers between which apply BN
  • The initializer algorithms
  • The activation algorithms
  • etc

For more computer science literature "against" BN, have a look at the paper by H. Zhang et al., who trained a DNN without BN and got good results.

Some people use the gradient clipping technique (R. Pascanu) instead of BN, in particular for RNNs.

I hope this gives you some answers!

",48240,,,,,6/28/2021 14:00,,,,3,,,,CC BY-SA 4.0 28445,2,,28430,6/28/2021 14:22,,1,,"

Here are some examples: reducing or reduces or reduced or reduction -> reduce; am or are or is -> be; n't -> not and 've -> have. When using spacy, the token can be referenced to find the lemmatized root.

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The reports were reduced to one page.")
lemmas = [token.lemma_ for token in doc]     # token.lemma_ is the lemmatised root
lemmas = [lemma for lemma in lemmas
          if lemma.isalpha() or lemma == '-PRON-']


I use lemma to find parts of speech

Wiki

In computational linguistics, lemmatisation is the algorithmic process of determining the lemma of a word based on its intended meaning. Unlike stemming, lemmatisation depends on correctly identifying the intended part of speech and meaning of a word in a sentence, as well as within the larger context surrounding that sentence, such as neighboring sentences or even an entire document

",44679,,44679,,6/28/2021 17:52,6/28/2021 17:52,,,,4,,,,CC BY-SA 4.0 28446,1,,,6/28/2021 14:34,,0,29,"

I am currently working on an object detection task. I have a dataset of Grayscale and Depth Images. The annotation format is x1, y1, x2, y2, class, depth. I have calculated this depth (of each object/bounding box) using a clustering algorithm and depth image.

My plan is to use the Grayscale images to detect the bounding boxes and the class labels using a pre-trained CNN.

Furthermore, I want to use the depth images to predict the depth (ground truth values in the dataset as mentioned above). For this task, my plan is to build a Regression-based Neural Network that regresses the depth values inside the bounding boxes of the depth image and compares them to the ground truth values. An RMSE loss function can be used to keep track of the predictions.

How do I go about making this NN and is there a better alternative?

",47972,,47972,,6/30/2021 7:29,6/30/2021 7:29,Regress values inside the bounding boxes to predict a value in Object Detection,,0,3,,,,CC BY-SA 4.0 28447,1,28448,,6/28/2021 14:50,,1,83,"

In general, $Q$ function is defined as

$$Q : S \times A \rightarrow \mathbb{R}$$ $$Q(s_t,a_t) = Q(s_t,a_t) + \alpha[r_{t+1} + \gamma \max\limits_{a} Q(s_{t+1},a) - Q(s_t,a_t)] $$

$\alpha$ and $\gamma$ are hyper-parameters. $r_{t+1}$ is the reward at next time step. $Q$ values are initialized arbitrarily.

In addition to the reward function, which other functions do I need to implement Q-learning?

",21964,,21964,,7/1/2021 10:00,7/1/2021 10:00,"In addition to the reward function, which other functions do I need to implement Q-learning?",,1,0,,,,CC BY-SA 4.0 28448,2,,28447,6/28/2021 15:13,,2,,"

In addition to the RF [*], you also need to define an exploratory policy (an example is the $\epsilon$-greedy), which allows you to explore the environment and learn the state-action value function $\hat{q}$. Moreover, although you don't need to know the details (i.e. the specific probabilities of transitioning from one state to the other) of the transition model, often denoted by $p$, you need a function that returns you the next state $s'$ for each action $a$ that you take in the current state $s$. You may not need to define this function, but, for example, the next state could be given by some kind of simulator of the environment (for example, in the case of Atari games, the Atari simulator may provide you the next frame of the game, which you could use to build an approximation of the next state). You can read the Q-learning pseudocode here.
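
For concreteness, a minimal sketch of an $\epsilon$-greedy exploratory policy over a tabular $\hat{q}$ (here assumed to be a NumPy array indexed by state and action) could look like this:

import numpy as np

def epsilon_greedy(q, state, epsilon=0.1):
    # with probability epsilon pick a random action, otherwise the greedy one
    n_actions = q.shape[1]
    if np.random.rand() < epsilon:
        return np.random.randint(n_actions)
    return int(np.argmax(q[state]))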

[*] The reward function is defined for the problem and, specifically, for the Markov Decision Process (MDP) that models the problem/environment. The RF is not defined only for applying the Q-learning to solve the problem (in fact, you could apply other algorithms, like SARSA), but you need the RF to use Q-learning; so, yes, you need to define/have the RF before applying Q-learning. You can think of the RF as the learning signal that is used to guide the agent towards the optimal policy, and that's why it's specific to each environment/problem. Note that, in theory, there could be more than one RF that leads to the optimal policy for an environment (see potential-based reward shaping for more details). (This paragraph was addressing what I originally thought was the question: I'm leaving it here because it may be relevant to the readers).

",2444,,2444,,6/29/2021 12:09,6/29/2021 12:09,,,,2,,,,CC BY-SA 4.0 28449,1,,,6/28/2021 16:05,,1,38,"

As I read the research from

https://deepmind.com/research

It seems AlphaGo Zero starts from zero knowledge and uses reinforcement learning to improve its playing skill.

Is there a way to beat AlphaGo Zero? Can anyone share an idea?

My friend said we can find a specific move to beat AlphagoZero.

From my point of view I think the only way to beat AlphagoZero is computational power.

But I can't find any other way to beat AlphaGo Zero; I hope there are many other ways to beat it.

Also wanna keep some related topic below:

Would AlphaGo Zero become perfect with enough training time?

",2930,,2930,,6/28/2021 17:32,6/28/2021 17:32,Is there a way to beat AlphaGo Zero with different method?,,0,0,,,,CC BY-SA 4.0 28454,1,,,6/28/2021 20:04,,1,44,"

In Sutton & Barto (2nd edition), at the very end on page 83, the following is mentioned:

In general, the entire class of truncated policy iteration algorithms can be thought of as sequences of sweeps, some of which use policy evaluation updates and some of which use value iteration updates.

and this on the beginning of page 84:

max operation is added to some sweeps of policy evaluation.

I understand that the entire class of truncated policy iteration algorithms can be classified as generalized policy iteration (GPI). Also, I know value iteration (VI) is a combination of one sweep of policy evaluation (PE) and one sweep of policy improvement.

My question: What do we mean by combining multiple PE and VI updates in truncated policy iteration?

",46214,,46214,,8/19/2021 16:06,8/19/2021 16:06,Can we combine policy evaluation and value iteration steps for solving model-based MDP?,,0,0,,,,CC BY-SA 4.0 28455,2,,28439,6/28/2021 20:22,,0,,"

My understanding is that you want to extend the main update loop like this:

$\qquad V(s) \leftarrow \text{max}_a \sum_{r,s'} p(r,s'|s,a)[r+\gamma V(s')]$

$\qquad \pi(s) \leftarrow \text{argmax}_a \sum_{r,s'} p(r,s'|s,a)[r+\gamma V(s')]$

Where the first line is same as original, the second line is taken from the end update that calculates the optimal policy.

Once our $\Delta < \theta$ and we break out of the loop, do we have an optimal policy from the family of optimal policies? Is this a valid thing to do?

Yes.

The only problem with this approach is the extra wasted processing - you will evaluate and overwrite $\pi$ once per loop. Doing it once at the end, after convergence, is more efficient.
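
For reference, here is a minimal sketch of this combined sweep (assuming integer state and action indices, and that the dynamics are available as a function p(s, a) returning (probability, reward, next_state) triples), so you can see exactly where the extra work happens:

import numpy as np

def value_iteration_with_policy(n_states, n_actions, p, gamma=1.0, theta=1e-8):
    V = np.zeros(n_states)
    pi = np.zeros(n_states, dtype=int)
    while True:
        delta = 0.0
        for s in range(n_states):
            action_values = [sum(prob * (r + gamma * V[s2]) for prob, r, s2 in p(s, a))
                             for a in range(n_actions)]
            best = int(np.argmax(action_values))
            delta = max(delta, abs(action_values[best] - V[s]))
            V[s], pi[s] = action_values[best], best   # pi is overwritten every sweep (the extra work)
        if delta < theta:
            return V, pi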

The policies obtained after using standard VI and the method I described above are mostly similar, yet different for some states.

There are lots of equivalent optimal policies in the gambler's problem exercise in the book. This is expected. If you re-run the normal value iteration routine you should see similar changes to the policy.

The thing the policies have in common, if you plot bid amount versus current funds, is a kind of fractal sawtooth shape for amounts bid, with triangles trying to get funds to fixed points and spikes where half or all funds are bid in a single go to reach 100 funds. It might be one large triangle, or broken down into multiple smaller ones with larger bid spikes separating them. That is the expected shape, unless the probability of a win is greater than 0.5, in which case the optimal policy is to bet a minimal amount on each time step and rely on the law of averages to eventually reach the target almost guaranteed. So if you set it to 0.6, you should see both your scripts return this same "safe" policy.

",1847,,1847,,6/28/2021 20:45,6/28/2021 20:45,,,,2,,,,CC BY-SA 4.0 28457,1,,,6/29/2021 0:45,,0,53,"

This article talks about pruning in the context of convolutional neural networks:

One of the first methods of pruning is pruning entire convolutional filters. Using an L1 norm of the weight of all the filters in the network, they rank them. This is then followed by pruning the ‘n’ lowest ranking filters globally. The model is then retrained and this process is repeated.

There also exist methods for implementing structured pruning for a more light-touch approach of regulating the output of the method. This method utilizes a set of particle filters that are the same in number as the number of convolutional filters in the network.

Is pruning only applicable to CNNs?

",20721,,2444,,11/26/2021 23:18,11/26/2021 23:20,Is pruning only applicable to convolutional neural networks?,,2,1,,,,CC BY-SA 4.0 28458,2,,27514,6/29/2021 2:44,,0,,"

Does your question pertain to general data augmentation? That is already in heavy use: using transformations while training is very common, and over several epochs the network benefits from learning the new representations. The transformations are applied to all classes, with a probability of transformation (horizontal flip, for example) specified by the user. If you want to make your almost-balanced dataset a balanced one, you can look into specific augmentations that you perform on the (almost) minority class before feeding it to the model. You could look into preprocessing methods that libraries like Keras have made open-source.

",48245,,,,,6/29/2021 2:44,,,,0,,,,CC BY-SA 4.0 28461,2,,28457,6/29/2021 8:11,,0,,"

No, neural network pruning is applicable to any type of neural network, be it a feed-forward, convolutional or recurrent neural network.

",32621,,,,,6/29/2021 8:11,,,,1,,,,CC BY-SA 4.0 28462,1,,,6/29/2021 8:58,,1,305,"

Let's take an ad recommendation problem for 1 slot. Feedback is click/no click. I can solve this with contextual bandits. But I can also introduce exploration in supervised learning, where I learn my model from the collected data every k hours.

What can contextual bandits give me in this example which supervised learning + exploration cannot do?

",1964,,,,,6/29/2021 8:58,(explore-exploit + supervised learning ) vs contextual bandits,,0,5,,,,CC BY-SA 4.0 28463,2,,28457,6/29/2021 10:36,,1,,"

No, it is not only applicable to CNNs, but to a wide range of other architectures, even the hyped transformers.

For an extensive survey, I recommend that you have a look at this paper What is the State of Neural Network Pruning?.

",38846,,2444,,11/26/2021 23:20,11/26/2021 23:20,,,,0,,,,CC BY-SA 4.0 28465,1,28466,,6/29/2021 11:48,,0,214,"

Policy function can be of two types: deterministic policy and stochastic policy.

Deterministic policy is of the form $\pi : S \rightarrow A$

A stochastic policy is defined using conditional probability distributions, and I generally remember it as $\pi: S \times A \rightarrow [0,1]$. (I personally don't know whether this function prototype is correct or not.)

I am guessing that both types of policies can be used for Q-learning. As one can read from this answer, both the reward and policy functions are needed to implement the $Q$-learning algorithm

In addition to the RF, you also need to define an exploratory policy (an example is the $\epsilon$-greedy), which allows you to explore the environment and learn the state-action value function $\hat{q}$.

I have no doubt about the necessity of the reward function, as it is obvious from the update equation of $Q$.

As for the usage of the policy, you can find it in line 5 of the pseudocode provided in the answer

Choose $a$ from $s$ using policy derived from $Q$

One can notice that the policy is used for computing $Q$, while updating $Q$ also needs a policy.

Hence, I concluded that the correct statement for line 5 of the pseudocode has to be

Choose $a$ from $s$ using policy derived from $Q$ updated so far

Is my conclusion true? If not, how is it possible to break that cyclic dependency between the policy and the $Q$ function?

",21964,,21964,,7/1/2021 1:25,7/1/2021 1:25,Which policy do I need to use in updating Q function?,,1,1,,,,CC BY-SA 4.0 28466,2,,28465,6/29/2021 12:48,,4,,"

I am going to stick with Q learning here to keep things simple. Most value-based reinforcement learning used for optimal control will have some statement similar to:

Choose $a$ from $s$ using policy derived from $Q$

First, yes this is always the current Q function or Q table, evaluated for the state of interest.

When you are choosing the agent's best guess at optimal actions, then this derivation of policy is fixed:

$$\pi(s) = \text{argmax}_a Q(s,a)$$

This matches your form for a deterministic policy (although it is always possible to express a deterministic policy as a stochastic one with probability 1 of choosing its selected action). In Q-learning this policy is the target policy that you are currently learning the value of.

When it comes to taking actions in the environment to gain new observations, you do not use the target policy, because it does not explore. Instead you use a different behaviour (or exploring) policy. It is important for Q learning to work in theory that this policy is "soft" - that it has some non-zero chance of selecting any action.

A popular choice for the behaviour policy is to use $\epsilon$-greedy, which is a stochastic policy that selects a random actiom with probability $\epsilon$, otherwise it selects the greedy policy. The greedy policy is definitely "derived from Q", so the $\epsilon$-greedy is too.

In fact it is not 100% necessary to use a "policy derived from Q" for the behaviour policy for Q learning to work. A completely random policy can work, for instance. The learning rate is better though - often much better - if current highest action value estimates are selected more often. This allows the agent to explore state action pairs close to its best guess at optimal.

There are a few other ways to derive behaviour policies from Q table. There is an unwritten assumption in the pseudocode that this will be done in a way that favours the higher-valued actions.

You can come up with any method that creates a stochastic policy function from Q values and has the following traits:

  • There is a chance of selecting any action

  • There is a higher chance of selecting the current highest valued actions

  • Optionally, the preference for highest valued actions becomes stronger as the agent becomes better at the task

If you can do this, then Q learning should work well. It is still sometimes a challenge to find the balance point between exploring enough to learn new things about the environment, yet doing so close to what is currently known to be best.
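
For example, a softmax (Boltzmann) behaviour policy over the Q values has all three of the traits listed above if you lower the temperature as learning progresses; a minimal sketch (assuming a tabular Q stored as a NumPy array indexed by state and action):

import numpy as np

def softmax_policy(q, state, temperature=1.0):
    # higher-valued actions get higher probability, every action keeps a non-zero chance
    prefs = q[state] / temperature
    prefs = prefs - prefs.max()            # for numerical stability
    probs = np.exp(prefs) / np.exp(prefs).sum()
    return int(np.random.choice(len(probs), p=probs))

Lowering the temperature towards zero over training makes the policy increasingly greedy, which matches the optional third trait.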

Regarding this:

Choose $a$ from $s$ using policy derived from $Q$ updated so far

Yes although most sources do not spell that out in full, relying on the use of $Q$ as a variable/data structure to imply it.

The target policy in Q learning is not directly the optimal policy (that is not possible unless you already know it), but the best guess at what would maximise expected return given the updates to Q so far. This keeps shifting as more knowledge is obtained.

",1847,,1847,,6/29/2021 12:59,6/29/2021 12:59,,,,0,,,,CC BY-SA 4.0 28467,1,,,6/29/2021 14:01,,2,70,"

I am training a 3D object detection network (Retinanet-based as of the moment) for re-detecting tracked objects. I would like to be able to add the velocity vector of the tracked object as an input to the detection network, as the velocity directly informs the direction along which the principal axis of the 3D bounding box should lie. I would like to include this information as early as possible (i.e. pass in as an input feature map) rather than simply adding it at the end with a few fully connected layers.

Is there a good or established way to encode such a vector in a feature map?

",33839,,,,,6/29/2021 14:01,Vector input to CNN for object detection,,0,0,,,,CC BY-SA 4.0 28471,1,,,6/30/2021 0:22,,1,65,"

I was going through a study in which I found something called a dilated Silhouette Neural Network. I want to know what it is, what it can do, and how it is better than a CNN.

Link to the journal: Link

",44611,,,,,6/30/2021 0:22,What is a Silhouette Neural Network,,0,0,,,,CC BY-SA 4.0 28474,1,,,6/30/2021 12:19,,1,139,"

While studying n-gram models, I encountered the terms "statistical model" and "probabilistic model" several times.

I have a basic doubt: can there be a probabilistic model that is not statistical, if we restrict ourselves to models that work on datasets?

In machine learning, we use datasets. Any model that uses a dataset can be called a statistical model, since statistics is a branch of mathematics that tries to find insights related to data.

All the models that calculate probabilities using datasets, for any task, are called (empirical) probabilistic models.

Thus, if I am not wrong, every probabilistic model has to be a statistical model since it uses data. Am I wrong?

Is there any model in the literature that is a statistical model but not probabilistic?

",18758,,2444,,7/1/2021 12:18,7/1/2021 13:52,Is there any model that is probabilistic but not statistical?,,2,0,,,,CC BY-SA 4.0 28475,1,,,6/30/2021 13:31,,1,28,"

At the moment, I have around 1.000 classes with accuracy and loss that are acceptable. In the long term, there could be more than 100.000 classes. The main problem is that every time a new class is needed, the model needs to be rebuilt.

For this, I made a POC with a Siamese Network with the goal that new classes can be added without the need to rebuild. The results were not what I expected, and probabilities are a must. As far as I know, this could not be done with this network. The conclusion was that this was not the best option for this case.

Before I start implementing, I would appreciate some feedback and second opinion on the following architecture:

The next thing I would do is build a hierarchy chain of CNN’s. The structure is already available in a database and I could automate the build of the CNN’s to a certain level.

The first CNN could have 4 “main” classes. Based on the probability, the next layer will be determined.

Then the second CNN would have 50 to 200 classes. Based on the probability, the next layer will be determined as well.

Then the last layer would be a CNN with up to 1.000 classes. In case there are more, this could be divided even further.

This way, I could gradually build up the model without the need to rebuild everything (last layer). And the first and second layer only needs to be rebuilt if the accuracy and probability start dropping.

I found a paper with a similar proposal, but could not find feedback or experiences of others. Is this something that is feasible? What could be the problems I will face with a structure like this? Or would you tackle this problem in another way?

",48278,,48278,,6/30/2021 13:39,6/30/2021 13:39,Convolutional Neural Network (CNN) with Tree architecture to organize the number of classes,,0,1,,,,CC BY-SA 4.0 28476,2,,28474,6/30/2021 14:05,,1,,"

It is purely terminological. A probabilistic model uses probabilities, but one usually does not know what the 'correct' probabilities are, or where they come from. This is where statistics comes into play:

You can estimate probabilities from (empirical) data[*]. For example, if you develop a probabilistic parts-of-speech tagger, you typically need probabilities for a certain word being of a particular class, and transition probabilities, ie how likely it is for tag a to be followed by tag b rather than tag c. So you might devise an equation that states that the probability of a token being assigned a particular tag is the product of the probability of the tag given the word and the tag given the previous two tags.

But you don't know what these probabilities are, and you cannot derive them from any formula. Instead, you get these values by looking at your training data, ie you count how often each event occurs, and normalise it to be in the range $[0..1]$.

In practice therefore, probabilities and statistical likelihoods are pretty much identical; probabilities are the theoretical values used in your model, and the actual values are derived using statistics. To make clear in equations that they aren't strictly the same, probabilities are usually denoted by $p$, whereas estimates based on statistics are marked $\hat{p}$.
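
As a small sketch of that estimation step, counting events in a toy tagged corpus and normalising the counts gives the estimates $\hat{p}$:

from collections import Counter

# toy training data: (word, tag) pairs
tagged = [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB"),
          ("the", "DET"), ("bark", "NOUN")]

word_tag_counts = Counter(tagged)
word_counts = Counter(word for word, _ in tagged)

# estimated probability of a tag given a word, p_hat(tag | word)
p_hat = {(w, t): c / word_counts[w] for (w, t), c in word_tag_counts.items()}
print(p_hat[("the", "DET")])   # 1.0 in this toy corpus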

[*] Another way to get probabilities is to calculate them, but looking at large-scale open-ended problems this is not easily possible. That's why you take samples to estimate the values -- this is then called training data.

",2193,,2193,,7/1/2021 13:15,7/1/2021 13:15,,,,4,,,,CC BY-SA 4.0 28480,1,28491,,6/30/2021 19:59,,0,104,"

In lecture 5 of his course "Reinforcement Learning", David Silver introduced GLIE Monte Carlo Control.

I understand that we do policy evaluation for one step and then policy improvement. My question is how does the improved policy come into play in this GLIE algorithm?

Is $G_t$ (the return) based on the policy somehow? Is that where the new policy comes in? Asked another way, how are policy evaluation and policy improvement connected in this image?

",48285,,,,,7/1/2021 20:16,GLIE MC control (reinforcement learning): how the policy affects evaluation?,,1,1,,,,CC BY-SA 4.0 28481,1,,,6/30/2021 20:30,,0,18,"

What approaches are there to generate and evaluate complex structures like, let's say, syntactically correct code? I know the approach of Genetic Programming (GP) as a type of Evolutionary Algorithm, but I wonder if there are any other techniques that are being used to produce complex structures more efficiently.

Note that the syntactically correct code example is just that, an example. The code wouldn't be generated to solve a specific task, although it could try to maximize a fitness function. We could be talking about 3D models, music compositions, etc. What interests me about this issue is if there are Computational Creativity techniques being used or researched in the last years, apart from the mentioned GP.

",48286,,,,,6/30/2021 20:30,What approaches are there to generate complex structures like syntactic trees?,,0,2,,,,CC BY-SA 4.0 28483,1,,,7/1/2021 7:44,,3,735,"

I am using stable-baseline3 implementation of the Soft-Actor-Critic (SAC) algorithm. The plotted training curves look promising. However, I am not fully sure how to interpret the actor and critic losses. The entropy coefficient $\alpha$ is automatically learned during training. As the entropy decreases, the critic loss and actor loss decrease as well.

  • How does the entropy coefficient affect the losses?
  • Can this be interpreted as the estimations becoming more accurate as the focus is shifted from exploration to exploitation?
  • How can negative actor losses be interpreted, and what do actor losses tell us in general?

Thanks a lot in advance

",45210,,,,,7/1/2021 7:44,How to interpret the training loss curves in Soft-Actor-Critic (SAC)?,,0,2,,,,CC BY-SA 4.0 28484,1,,,7/1/2021 8:30,,0,32,"

I have a dataset which consists of computed tomography images (CT scans) of parts that contain pores and cracks. The sets for each part are of about 1100 * 1100 * 3000-ish resolution. Currently, I use a method of thresholding and calculations to find the volumes and locations of these defects, and I would like to reproduce those results with a machine learning approach.

What are the methods known for this type of problem, and what are your general recommendations?

Edit:

  • Here is the current method I am using:
  • And this is what I aim to achieve:
",48296,,48296,,7/1/2021 11:24,7/1/2021 11:24,What are the existing AI methods to approach 3D volumes of computed tomography?,,0,3,,,,CC BY-SA 4.0 28486,2,,28474,7/1/2021 13:15,,3,,"

First of all, I don't know of any textbook that clarifies these terms, but, although I am not a statistician, in addition to the other answer, one possible way to look at it is as follows.

You use probability theory to model your problem. For example, if it's a classification problem, you could define the conditional probability distribution $p(y \mid x)$, which would compute the probability of a label $y$ given an input $x$. In other words, you assume that there is a probability distribution of the form $p(y \mid x)$ that generates your data, so here $p$ is the "model". If you want to generate images, for instance, you could model the process that generates them as the marginal distribution $p(x)$, which, ideally, would tell you the probability of sampling a specific image $x$. This is the theory. So, no observed data is still involved here. By the way, this is exactly how people usually model the machine learning problems in the sub-field of statistical learning theory.

In practice, you need to estimate these probability distributions. To estimate them, you can use data. The type of data depends, of course, on the problem and model. For example, in the case of $p(y \mid x)$, you may need a labelled dataset $D$. So, if you estimate $p(y \mid x)$ with $D$ to obtain $\hat{p}(y \mid x)$, then $\hat{p}$ would be a statistical model, in the sense that you estimated it from observed/empirical data. In general, statistics is all about taking data and using it to build "models" that can be used for prediction or forecasting (of future inputs) or inference (i.e. understanding the properties of the data-generating process or probability distribution), or just to compute the so-called "statistics" (hence the name of the field!), such as the "sample average" (i.e. the average of your observed data points, where the "sample" here refers to your dataset of points, which are also sometimes known as "samples", just to make things even more confusing!).

So, let me address your questions and comments, but take my comments below with a grain of salt, because I am not a statistician.

All the models that calculates probabilities using datasets, for any task, are called (empirical) probabilistic models.

To me, this would be a reasonable statement. In this example, you seem to be talking about $\hat{p}(y \mid x)$, which I would also call a probabilistic model, although it's just an estimate of the theoretical/ideal one.

Is there any model in literature that is a statistical model but not probabilistic?

If we follow my reasoning above, initially, if you do not explicitly model your problem as the estimation of some probability distribution that generated the data, then we would be estimating something from data (so we would be building a statistical model), but it wouldn't be clear whether this "statistical model" is an estimate of some theoretical/probabilistic one. So, I don't really have a definitive answer to your question. I suppose that any statistical model could be modelled with the tools of probability theory, so I would be more inclined to think that the answer to your question is "no".

In addition to what I just said above, if you take a book like Machine Learning: A Probabilistic Perspective, here are a few examples of how the author uses the terms "statistical model" and "probabilistic model". For example, he writes (section 7.3, page 217)

A common way to estimate the parameters of a statistical model is to compute the MLE, which is defined as

$$ \hat{\boldsymbol{\theta}} \triangleq \arg \max _{\boldsymbol{\theta}} \log p(\mathcal{D} \mid \boldsymbol{\theta}) $$

So, in this case, is $p$ a statistical or probabilistic model, according to my definitions above? Of course, ignoring the potentially different notation being used here to refer to a statistical model, i.e. without the $\hat{}$, I think that this $p$ could be considered a statistical model (in the sense that $\hat{\boldsymbol{\theta}}$ would be estimated from the observed dataset $\mathcal{D}$, assuming it's the observed dataset and not some random variable) but at the same time also a probabilistic one, in the sense that, here, we are assuming that we have some kind of "theoretical likelihood". In any case, the likelihood is something that can make this discussion even more confusing, because the likelihood is not really a probability distribution (if you integrate with respect to the parameters). In any case, here, you could consider $p(\mathcal{D} \mid \boldsymbol{\theta})$ as a (Bayesian) probabilistic model, i.e. you assume that there's some parameters that generate the data and, if you consider it as a conditional probability distribution over the data, rather than the parameters, then this would be consistent with what I said above.

Here's another example (section 1.3.1, p. 10).

In this book, we focus on model based clustering, which means we fit a probabilistic model to the data, rather than running some ad hoc algorithm.

This usage also seems consistent with my description above. Here, I interpret the part "we fit a probabilistic model to the data" as "we estimate the probability distribution given the data".

Or, in section 1.4.1 (p. 16)

In this book, we will be focussing on probabilistic models of the form $p(y \mid x)$ or $p(x)$, depending on whether we are interested in supervised or unsupervised learning respectively.

The discussion can become even more complicated, if we start to consider parametric vs non-parametric models, which are mentioned in that same section, where you make or not assumptions about the data-generating process.

So, to conclude, I think that these terms are often used vaguely and sometimes interchangeably, so the confusion is normal.

",2444,,2444,,7/1/2021 13:52,7/1/2021 13:52,,,,0,,,,CC BY-SA 4.0 28487,1,,,7/1/2021 14:17,,0,524,"

I am trying to build a neural network that has an input of $n$ pairs of integer values (where $n$ is random) and a corresponding output of a binary array with length $n$.

The input will be a set of integer value coordinates $[(x_{1}, y_{1}), (x_{2}, y_{2}), (x_{3}, y_{3}), \dots, (x_{50}, y_{50}), \dots]$, where each instance can be of various lengths, like $[(x_{1}, y_{1}), (x_{2}, y_{2}), (x_{3}, y_{3}), \dots, (x_{52}, y_{52})]$ or $[(x_{1}, y_{1}), (x_{2}, y_{2}), (x_{3}, y_{3}), \dots, (x_{101}, y_{101})]$, etc.

The output is a set of binary arrays with each instance having the same length as the corresponding input.

May I know if anyone has any recommendations on what neural network would fit this use case?

",48302,,2444,,7/13/2021 10:57,8/12/2021 11:01,How to design a neural network with arbitrary input and output length?,,2,0,,,,CC BY-SA 4.0 28488,2,,28390,7/1/2021 14:43,,1,,"

I know at least one example where the rank of the dataset (more specifically, the rank of a matrix that is computed from the design matrix, i.e. the matrix with your data, which I will describe more in detail below) can have an impact on the number of solutions that you can have or how you find those solutions. I am thinking of linear regression.

So, in linear regression, you have the model (written in matrix form)

$$ \mathbf{y} = \mathbf{X} \beta + \epsilon $$

where

  • $\mathbf{y}$ is an $N \times 1$ vector of dependent variables (i.e. the labels)
  • $\mathbf{X}$ is an $N \times K$ matrix of independent variables (aka regressors or features)
  • $\beta$ is a $K \times 1$ vector of parameters
  • $\epsilon$ is the noise (i.e. you assume that there's some noise that corrupts the function that relates the features to the labels through the parameters)

It turns out that, if $X$ has full rank, then the so-called ordinary least squares (OLS) solution to the linear regression problem (i.e. the estimate of the parameters $\beta$), which can be denoted by

$$ \hat{\beta} = \arg \min \| \mathbf{X} \beta - \mathbf{y} \|_{2}, $$

is given by a closed-form expression

$$\hat{\beta}=(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}y \tag{1}\label{1}.$$

If you look at this equation, you see that we are computing the inverse of a matrix, and that matrix is $\mathbf{X}^{T}\mathbf{X}$. What if this matrix is not invertible? It turns out that you cannot invert a matrix if it's not full rank. It also turns out that, if $\mathbf{X}$ is not full rank, then $\mathbf{X}^{T}\mathbf{X}$ isn't either (see this and this), so we couldn't use the closed-form solution \ref{1} to solve the linear regression problem, i.e. the least-squares solution would no longer be unique.

So, this is the only implication that the rank of the dataset (or design matrix) has on a machine learning algorithm that I am aware of and that comes to my mind right now, but it's possible that the rank plays other roles.
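
As a minimal illustration (with made-up NumPy data, so just a sketch of the point above): when the design matrix has full rank, the closed-form OLS estimate can be computed directly, while for a rank-deficient design matrix $\mathbf{X}^{T}\mathbf{X}$ is singular and one has to fall back on something like the pseudo-inverse, which only picks one of the infinitely many least-squares solutions.

    import numpy as np

    rng = np.random.default_rng(0)
    N, K = 100, 3

    # Full-rank design matrix: the closed-form OLS solution exists.
    X = rng.normal(size=(N, K))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=N)
    print(np.linalg.matrix_rank(X))                  # 3 -> full rank
    beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y      # closed-form OLS estimate

    # Rank-deficient design matrix: X^T X is singular, so the inverse fails.
    X_def = X.copy()
    X_def[:, 2] = 2 * X_def[:, 0]                    # third column = 2 * first column
    print(np.linalg.matrix_rank(X_def))              # 2 -> not full rank
    # np.linalg.inv(X_def.T @ X_def) would fail (singular matrix); instead,
    # np.linalg.lstsq returns one of the infinitely many least-squares solutions.
    beta_any = np.linalg.lstsq(X_def, y, rcond=None)[0]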

",2444,,,,,7/1/2021 14:43,,,,0,,,,CC BY-SA 4.0 28491,2,,28480,7/1/2021 20:16,,0,,"

The policy is used to determine the sequence of state-action pairs in the next episode. This means that the policy indirectly determines the next return $G_t$.

",48285,,,,,7/1/2021 20:16,,,,0,,,,CC BY-SA 4.0 28493,1,28501,,7/2/2021 0:22,,1,67,"

Consider a scenario where there are two players. One of the players performs actions randomly, whereas I want the second player to be a Q-player. I mean, this player selects the best action from the Q-table for a given state, i.e., the action with the maximum Q-value. So, for the second player, a Q-table is required.

It is known that the Q-table can only be constructed by running several episodes after its arbitrary initialization. So, the two players have to play with some policy in order to construct a Q-table for player two. Since the first player uses a random policy, my doubt is only about the policy of the second player while the Q-table is being constructed.

I know the policy he needs to follow after the Q-table has been fully updated. I am not sure about the policy he needs to follow while the Q-table is still being updated.

Which policy does my second player need to follow while constructing his own Q-table? Can he use a random policy like player one? Does he need to use the arbitrarily initialized or partially updated Q-table itself to select the best action? Or can he use some other policy until the Q-table update is complete?

",21964,,2444,,7/3/2021 2:27,7/3/2021 2:27,Which policy has to be followed by a player while construction of its own Q-table?,,1,1,,,,CC BY-SA 4.0 28495,1,28508,,7/2/2021 1:59,,1,171,"

I came across the new term "held-out corpora" and I am confused about its usage in the NLP domain.

Consider the following three paragraphs from N-gram Language Models

#1: held-out corpora as non-training data

For an intrinsic evaluation of a language model we need a test set. As with many of the statistical models in our field, the probabilities of an $n$-gram model come from the corpus it is trained on, the training set or training corpus. We can then measure the quality of an n-gram model by its performance on some unseen data called the test set or test corpus. We will also sometimes call test sets and other datasets that are not in our training sets held out corpora because we hold them out from the training data.

This paragraph clearly says that held-out corpora can be used for either testing or validation or others except training.

#2: development set or devset for hyperparameter tuning

Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set or devset. How do we divide our data into training, development, and test sets? We want our test set to be as large as possible, since a small test set may be accidentally unrepresentative, but we also want as much training data as possible. At the minimum, we would want to pick the smallest test set that gives us enough statistical power to measure a statistically significant difference between two potential models. In practice, we often just divide our data into 80% training, 10% development, and 10% test. Given a large corpus that we want to divide into training and test, test data can either be taken from some continuous sequence of text inside the corpus, or we can remove smaller “stripes” of text from randomly selected parts of our corpus and combine them into a test set.

This paragraph clearly says that development set is used for hyperparameter tuning.

#3: held-out corpora for hyperparameter tuning

How are these $\lambda$ values set? Both the simple interpolation and conditional interpolation $\lambda'$s are learned from a held-out corpus. A held-out corpus is an additional training corpus that we use to set hyperparameters like these $\lambda$ values, by choosing the $\lambda$ values that maximize the likelihood of the held-out corpus.

This paragraph clearly says that the held-out corpus is used for hyperparameter tuning.

I am interpreting or understanding the terms as follows:

Train corpus is used to train the model for learning parameters.

Test corpus is used for evaluating the model wrt parameters.

Development set is used for evaluating the model wrt hyperparameters.

Held-out corpus includes any corpus outside the training corpus. So, it can be used for evaluating either parameters or hyperparameters.

To be concise, informally, data = training data + held-out data = training data + development set + test data

Is my understanding correct? I got confused because of paragraph 3, which says that the held-out corpus is used (only) for learning the hyperparameters, while paragraph 1 says that held-out corpora include any corpus outside the training corpus. Do held-out corpora include the devset, or are they the same as the devset?

",18758,,2444,,7/4/2021 2:26,7/4/2021 2:26,"Are the held-out datasets used for testing, validation or both?",,1,0,,,,CC BY-SA 4.0 28499,1,28500,,7/2/2021 8:09,,1,251,"

I saw a video on Youtube about AI and Super Resolution Image Reconstruction with TecoGAN. I must say I am impressed.

Now, I am wondering how reliable this is.

I have learned at university that you lose information if you do not sample to fulfill Nyquist. I also don't think that the example images are in any way sparse...

Is the AI just trying to fill in the blanks by guessing?

This would be fine for entertainment, but probably not so much to enhance robbery pictures and charge people based on enhanced pictures. It also wouldn't be a good solution for improving the resolution of scientific data if it is just "guessing".

",48314,,2444,,7/2/2021 12:08,7/2/2021 19:37,Why is AI Super Resolution Reconstruction more than just guessing?,,3,0,,,,CC BY-SA 4.0 28500,2,,28499,7/2/2021 9:20,,4,,"

Yes, it's guessing. In the training phase, you show it lots of coarse and detailed pictures, and the algorithm learns a mapping from coarse to detailed. Then you present it a new coarse image, and it executes the same mapping. The information from the original picture is gone, and it cannot be retrieved, so it's filled in by analogy to other cases.

"Guessing" sounds a bit random, so it's more like a very informed guess. A bit like reading lots of books, and then being asked what word comes after "the cat sat on the" -- you're likely to say "mat", and will be right in many cases, but there's no guarantee that the most common word actually does occur. So now just substitute words with pixel values, and add a complex statistical model to make the decision, but you still won't know what the correct element is.

As you rightly say, this is fine for entertainment, but not for serious applications, where missing details in a crime scene are filled in according to how previous similar scenes may have looked.

",2193,,,,,7/2/2021 9:20,,,,0,,,,CC BY-SA 4.0 28501,2,,28493,7/2/2021 11:35,,3,,"

I'll assume the Q-player is being trained with Q-learning (note that Q-tables can be useful in other algorithms too, like SARSA).

Q-learning is an off-policy algorithm, meaning that the Q-values can be learned regardless of the policy used to collect data. So the Q-player can be following a random policy, or even a fixed pre-defined policy if you want. Usually, however, before the Q-table has converged, we want the agent to explore the environment to collect the data it needs (without sacrificing too much return; this is the exploration-exploitation tradeoff).

It is very common practice to use the intermediate Q tables to induce a policy for the agent. For example, the epsilon-greedy policy will select the best action according to the current Q table with high probability, and will otherwise choose a random action.
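
For concreteness, here is a minimal sketch of epsilon-greedy action selection from a (possibly partially updated) Q-table; the dictionary-based `q_table` and the integer action encoding are just assumptions made for the example:

    import random

    def epsilon_greedy_action(q_table, state, n_actions, epsilon=0.1):
        # With probability epsilon pick a random action (exploration);
        # otherwise pick the action with the highest Q-value (exploitation).
        # q_table is assumed to map (state, action) pairs to values.
        if random.random() < epsilon:
            return random.randrange(n_actions)
        return max(range(n_actions), key=lambda a: q_table.get((state, a), 0.0))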

",37829,,,,,7/2/2021 11:35,,,,0,,,,CC BY-SA 4.0 28502,2,,28499,7/2/2021 12:16,,0,,"

You actually don't have to lose information if you don't fulfill Nyquist — although that topic is quite advanced and has limitations. Still, super resolution is reliable and used by most 4K TVs today to upscale 1080p video to fit the 4K screen. You may notice TV ads for 4K TVs occasionally mentioning this.

What super resolution does is just generalising shapes. For example, imagine a simple image with just a black rectangle. You can easily imagine enlarging that image to whatever size you want because you know how it's supposed to look. By training a neural network on a lot of images, it can learn to generalise features such as faces and enlarge them.

Of course you can't take a single pixel of a face and expand this into a full 100x100 image. Therefore, it's best to use super resolution to enlarge entire images, not just individual areas of the image. If you were to use it to enlarge a very small patch of text in the image, it may not replicate the text correctly and read as something else.

Furthermore, very good super resolution models are slow. Most also only increase the resolution of a single frame at a time. This means there might be inconsistencies between sequential frames in a video.

",48319,,2193,,7/2/2021 13:51,7/2/2021 13:51,,,,0,,,,CC BY-SA 4.0 28503,1,,,7/2/2021 19:29,,0,66,"

In a Dueling DQN agent (Wang et al.), the Q function is decomposed as

$$ Q(s, a)=V(s) + A(s, a) - \frac{1}{|\mathcal{A}|}\sum_{a'\in \mathcal{A}}A(s, a') $$

representing the value of the state, plus the advantage of the action, minus the average advantage of all actions available in that state.
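
For reference, this aggregation is typically implemented along the following lines (a PyTorch-style sketch, not tied to any specific codebase):

    import torch

    def dueling_q_values(value, advantage):
        # value:     tensor of shape (batch, 1)         -> V(s)
        # advantage: tensor of shape (batch, n_actions) -> A(s, a)
        return value + advantage - advantage.mean(dim=1, keepdim=True)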

However, this formulation means that the value of the given state is skewed, even after subtracting out the mean advantage of all actions available. Why isn't it set up so that the advantage of the best action is 0 (with the other actions' advantages being negative), thus leading to a more accurate $V(s)$?

",48327,,2444,,7/3/2021 2:19,11/25/2022 5:04,"In a DDQN architecture, why is the value of a state assumed to be the average of the Q values of the actions?",,1,0,,,,CC BY-SA 4.0 28504,2,,28503,7/2/2021 19:29,,0,,"

This is considered in the original paper, but is rejected due to training instability. To quote:

[The average-advantage formulation] increases the stability of the optimization... the advantages only need to change as fast as the mean, instead of having to compensate any change to the optimal action's advantage in [the formulation where $Q(s, a)$ is normalized with the best action's advantage].

Furthermore, the authors attempt a softmax version, but they find it doesn't perform that differently than the average-advantage version. I couldn't find any additional experiments using this, but it may be worth trying if it is absolutely necessary to obtain an accurate $V(s)$.

",48327,,,,,7/2/2021 19:29,,,,0,,,,CC BY-SA 4.0 28505,2,,28499,7/2/2021 19:37,,0,,"

You were correct in that the model won't be able to reconstruct any missing information with complete certainty (an intuitively impossible task). As Oliver Mason mentioned, it is estimating what a similar image (from the training data it has been exposed to) would have looked like (I should note that in the vast majority of cases, we don't/can't actually know what the model's internal representation looks like or how it works, only how to get there - see explainable AI).

In other words, it attempts to match the image it is given to a "latent space" (different terms are sometimes used for the same concept, but you may find the links in the article helpful - also see this question) that it has approximated (representing the data it was trained on) and mapped to the output space (i.e., the "high-resolution" images).

In theory it is possible that AI could be used to enhance certain features of crime scene images (e.g., sharpening the contours of a face so a detective can better recognize the person) but again, this only approximates the "true" data. In practical settings, it is more likely that the raw data is fed into a model designed specifically for achieving some end goal (for example, facial recognition) and humans are taken out of the loop once a sufficiently accurate model is available.

Generally, "highly educated guessing" is an appropriate summary for this type of method and you are right in your assumption that this limits its usefulness in certain applications where precision is important. Though this is somewhat tangential, you might find machine learning-based computational fluid dynamics (e.g., this paper) enlightening; while most machine learning techniques (anything involving neural networks, which are inherently approximators and also often nondeterministic) cannot produce scientifically rigorous simulations in the way traditional algorithms might, they can speed up some of the processing/analysis in the way a cleverly devised heuristic might.

Here are a couple articles you might find informative (some of these are pretty sparse in information relating to machine learning, but most have links to in-depth treatments):

",22392,,,,,7/2/2021 19:37,,,,0,,,,CC BY-SA 4.0 28506,2,,16627,7/2/2021 22:06,,0,,"

I think that resizing the images to remove the input size inconsistency would leave out spatial information in the Convolutional Neural Network: the resizing would lose the characteristics of the object in the image.

It looks like you don't want to crop your input image, which appears to be fabricated. I would like to suggest the following preprocessing steps before the Convolutional Neural Network:

  (1) Find an original image or a picture of the real object
  (2) Perform image registration between a suspicious image and the original image (the registration result should be fine)
  (3) Calculate color difference in each pixel position
  (4) Generate new image with these differences
  (5) Feed to your Convolutional Neural Network for the anomaly detection

",27229,,,,,7/2/2021 22:06,,,,0,,,,CC BY-SA 4.0 28508,2,,28495,7/3/2021 3:06,,1,,"

I think that these terms may be used inconsistently across sources.

If someone says held-out dataset, I would immediately think of a dataset that is not used for training, but can be used for anything else, validation (hyper-parameter tuning or early stopping) or testing; so, to determine what they are referring to, I would probably take into account the context.

In your second quote, the development set seems to be used as a synonym for validation dataset (a more common name to refer to the same concept), i.e. the dataset used for early stopping or hyper-parameter optimization (see also this).

So, my answer to your question in the title would be

Yes, the held-out dataset can be used for validation or testing, but not because it's a special dataset; rather, people may use this term to refer to either the validation dataset or the test dataset.

Here's another example of the usage of the term held-out set to refer, in this case, to the validation dataset (section 1.6 of this famous ML book)

The results above suggest a simple way of achieving this, namely by taking the available data and partitioning it into a training set, used to determine the coefficients $\mathbf{w}$, and a separate validation set, also called a hold-out set, used to optimize the model complexity

Here's another example that shows that the term may be used inconsistently (emphasis mine, taken from section 5.3 of the famous deep learning book by Goodfellow et al.). In fact, in that same section, they refer to the validation dataset, which is distinct from this held-out test set (so, in this case, the held-out set is used to refer only to the test set).

Earlier we discussed how a held-out test set, composed of examples coming from the same distribution as the training set, can be used to estimate the generalization error of a learner, after the learning process has completed.

",2444,,,,,7/3/2021 3:06,,,,0,,,,CC BY-SA 4.0 28510,1,,,7/3/2021 10:19,,1,66,"

I have 17 nodes in my network, with 3000 different paths in total. I have to select the path with the highest available bandwidth, using a genetic algorithm. I'm confused about the approach. Should I use all paths as the population, or should I create a population of the same size as the number of nodes (17)?

",48333,,48333,,7/5/2021 15:53,7/5/2021 15:53,What exactly is the population in the problem of finding the best path in a network of nodes using genetic algorithms?,,1,0,,,,CC BY-SA 4.0 28513,2,,28510,7/3/2021 12:58,,1,,"

Should I have all paths as the population,

No, this is not usually possible for more realistic problems where a population that covered all possibilities would be far too large to manage.

or should I create a population same size as the nodes(17).

No, there is no need to link the population size to other properties of the problem so directly.

If your path must pass through all 17 nodes (like a travelling salesman problem) then your genome coding might have 17 elements to it, and could simply be the path through the nodes. That's not the only way to address even the TSP, and may not be the case here. However, I mention it because it is common that numerical features of the problem will influence the design of the genomes.

The population size is a hyperparameter for the solution, along with mutation rate, rules for recombination etc. It is something you will want to experiment with.

With 3000 combinations to assess, a direct search of all combinations would be fast and effective (and probably easier to code). My understanding is therefore that this is a learning exercise. Your eventual goal might be to have the genetic algorithm find a good solution with less than 3000 evaluations of the path. Finding a good solution in any number of iterations is also a reasonable start to demonstrate you have understood the basics of genetic algorithms.
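
For comparison (and as a sanity check for the genetic algorithm), the direct search mentioned above could be as simple as the following sketch, assuming you can enumerate the ~3000 candidate paths and that `bandwidth_of` is a hypothetical function returning the available bandwidth of a path:

    def best_path_by_bandwidth(paths, bandwidth_of):
        # Exhaustive baseline: score every candidate path and keep the best one.
        return max(paths, key=bandwidth_of)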

",1847,,,,,7/3/2021 12:58,,,,0,,,,CC BY-SA 4.0 28515,1,28517,,7/3/2021 16:45,,2,114,"

In order to check whether the visitor of a page is a human and not an AI, many web pages and applications have a checking procedure known as a CAPTCHA. These tasks are intended to be simple for people, but unsolvable for machines.

However, some text recognition challenges are often difficult, like discerning badly overlapping digits, or telling whether a bus is present in the CAPTCHA.

As far as I understand, robustness against adversarial attacks is, so far, an unsolved problem. Moreover, adversarial perturbations are rather generalizable and transferable to various architectures (according to https://youtu.be/CIfsB_EYsVI?t=3226). This phenomenon is relevant not only to DNNs but also to simpler linear models.

With the current state of affairs, it seems to be a good idea to make CAPTCHAs from these adversarial examples: the classification problem would be simple for a human, without the need to make several attempts to pass the test, but hard for an AI.

There is some research in this field and proposed solutions, but they seem not to be very popular.

Are there some other problems with this approach, or the owners of the websites (applications) prefer not to rely on this approach?

",38846,,38846,,7/4/2021 7:36,7/4/2021 11:12,Why adversarial images are not the mainstream for captchas?,,2,0,,,,CC BY-SA 4.0 28516,2,,28515,7/3/2021 20:22,,0,,"

Because the examples are fitted to a particular ML model, and if you train using different parameters they probably won't be valid.

",32390,,,,,7/3/2021 20:22,,,,1,,,,CC BY-SA 4.0 28517,2,,28515,7/3/2021 21:27,,2,,"

I think the problem is that this type of attack will only work for the model that was used to produce the perturbations. These perturbations are computed by backpropagating an error for an image of, say, a panda, but with the true label "airplane".

In other words, perturbations are nothing more than gradients indicating in which direction each pixel needs to be changed to make the panda look like an airplane for that particular model. Since the same model will have different weights after each training, this attack will only work for the model used to generate the gradients.
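
To make this concrete, here is a minimal FGSM-style sketch in PyTorch (`model` is assumed to be a trained classifier; this is only one of several ways to craft such perturbations): the perturbation is built directly from the gradient of the loss with respect to the input, so it is tailored to that particular model's weights.

    import torch
    import torch.nn.functional as F

    def fgsm_perturbation(model, x, true_label, epsilon=0.01):
        # Move each pixel one small step in the direction that increases
        # the loss of *this* model on the true label.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), true_label)
        loss.backward()
        return (x + epsilon * x.grad.sign()).detach()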

Here is an illustrative example of this idea when training a generator in a GAN:

Update

While we can transfer an adversarial attack from one model to another, this is only possible under strict constraints. To successfully generate perturbations for the target model, we first need to know the dataset that was used to train it. We also need to know the architecture including the activation and loss functions as well as the hyperparameters of this model. Here is a work in which the authors take a closer look at this topic.

Even though it is possible, in my opinion, using CAPTCHAs does not make sense as these attacks may not work in the real world. For example, if we apply this attack to a road sign to trick the autopilot in a vehicle, the lighting conditions and camera perspective can significantly affect the classification.

",12841,,12841,,7/4/2021 11:12,7/4/2021 11:12,,,,2,,,,CC BY-SA 4.0 28520,1,,,7/4/2021 3:01,,0,26,"

I found that people used a deep neural network to obtain an optimal policy by solving a non-convex optimization problem. Moreover, they didn't use any set of training data and claimed that this is the difference between their approach and supervised learning. I wonder whether people can define the loss function of a neural network themselves instead of choosing cross-entropy or mean squared error.

My experience in machine learning is very limited. I audited two machine learning courses offered by the applied math department at my school. I have read twenty or more papers on the application of machine learning. I began to use Keras very recently.

",48349,,48349,,7/4/2021 14:56,7/4/2021 14:56,Can people set loss function of neural network by themselves instead of choosing cross entropy or mean square error?,,0,4,,,,CC BY-SA 4.0 28521,1,,,7/4/2021 11:36,,2,166,"

Generative models in artificial intelligence span from simple models like Naive Bayes to advanced deep generative models like present-day GANs. This question is not about coding and involves only the scientific and theoretical part.

Are there any standard textbooks that cover these topics from the basics to the advanced?

",18758,,,,,7/4/2021 13:21,Book(s) on generative models,,1,0,,,,CC BY-SA 4.0 28522,2,,28521,7/4/2021 13:21,,3,,"

For the theoretical foundations, one can look into Chapter 20: Deep Generative Models of the classic DL book by Goodfellow and Bengio https://amzn.to/2MmZNbH. It is not the most recent reference, but it is written by professionals in a simple and accessible way.

There is a nice book Generative Deep Learning by D.Foster with some simple heuristics and probability theory motivations and examples https://www.google.ru/books/edition/Generative_Deep_Learning/RqegDwAAQBAJ?hl=en&gbpv=1&printsec=frontcover.

Finally, there is a book from Jason Brownlee (author of many nicely written articles on machinelearningmastery.com) https://machinelearningmastery.com/generative_adversarial_networks/

",38846,,,,,7/4/2021 13:21,,,,0,,,,CC BY-SA 4.0 28524,1,28532,,7/4/2021 14:45,,1,866,"

It seems that neural networks (NNs) can be applied to supervised learning, unsupervised learning and reinforcement learning. Some people even train neural networks without the set of training data. If NNs are used in reinforcement learning, is it possible that we don't need training data?

",48349,,2444,,7/5/2021 13:59,7/6/2021 12:50,Can people use neural networks without providing the set of training data?,,3,0,,,,CC BY-SA 4.0 28525,1,,,7/4/2021 21:25,,1,26,"

Regarding the MobileNet SSD v2 model, I was wondering to what extent it captures the uncertainty of the predictions.

There are 2 types of uncertainty, data uncertainty (aleatoric) and model uncertainty (epistemic).

The model outputs bounding boxes with a confidence score, but what uncertainty does that score represent?

From what I know models usually only capture aleatoric uncertainty in their predictions, but not the epistemic one. Is this also true for MobileNet SSD v2?

",48362,,2444,,7/5/2021 12:31,7/5/2021 12:31,Does MobileNet SSD v2 only capture aleatoric uncertainty (and so not the epistemic one)?,,0,0,,,,CC BY-SA 4.0 28526,1,,,7/4/2021 23:29,,1,85,"

In naive Bayes classification, we estimate the class of a document as follows

$$\hat{c} = \arg \max_{c \in C} P(c \mid d) = \arg \max_{c \in C} \dfrac{ P(d \mid c)P(c) }{P(d)} $$

It has been said on page 4 of this textbook that we can ignore the probability of the document since it remains constant across classes.

We can conveniently simplify the above equation by dropping the denominator $p(d)$. This is possible because we will be computing $\dfrac{P(d \mid c)P(c)}{P(d)}$ for each possible class. But $P(d)$ doesn't change for each class; we are always asking about the most likely class for the same document $d$, which must have the same probability $P(d)$. Thus, we can choose the class that maximizes this simpler formula

$$\hat{c} = \arg \max_{c \in C} P(c \mid d) = \arg \max_{c \in C} P(d \mid c)P(c) $$

Since the probability of the document does not influence the choice of the class, the naive Bayes algorithm does not consider it.

But I want to know the value of $P(d)$. Is it $\dfrac{1}{N}$, if the total number of documents is $N$? How should I calculate $P(d)$?

",18758,,2444,,7/5/2021 13:55,11/28/2022 14:03,How would the probability of a document $P(d)$ be computed in the Naive Bayes classifier?,,1,2,,,,CC BY-SA 4.0 28527,2,,28524,7/5/2021 0:51,,5,,"

You cannot train a neural network without training data. It would be like training a football player without making him/her play/watch football or anything that resembles football: it's simply not possible. The definition of training/learning in machine learning strictly requires data.

You can train a neural network in different ways (e.g. supervised or unsupervised) and with different types of data (e.g. labelled or unlabelled, respectively), but this is a different story. In reinforcement learning, you also have training data, but the data may not be given to the neural network in the same way that it's given e.g. in supervised learning. Still, this does not mean that there is no training data. Of course, there is or must be (by definition)!

However, note that you can use a (e.g. randomly initialised) neural network without training it, but it would probably be a useless neural network. You could also use a neural network that has been trained by someone else with data that you may not have access to anymore (and maybe that answers your question in the title).

",2444,,2444,,7/5/2021 0:57,7/5/2021 0:57,,,,2,,,,CC BY-SA 4.0 28528,1,,,7/5/2021 1:20,,0,20,"

There are multiple datasets for machine comprehension tasks such as SQuAD. However, most of the questions are straightforward. One can find the answers easily by using the find feature of the browser to look for the question keywords in the passage.

I'd appreciate it if you let us know about standardized in-depth reading comprehension tests for either human or machine that are generalizable. By generalizable, I mean they include a broad range of disciplines and academic levels and are not specifically designed for a target population.

I thought of GRE reading comprehension but was not able to find any study indicating that GRE reading comprehension questions are standardized or generalizable.

",48355,,,,,7/5/2021 1:20,How to Study Improving In-depth Reading Comprehension?,,0,4,,,,CC BY-SA 4.0 28529,1,,,7/5/2021 1:37,,0,378,"

After some amount of training on a custom multi-agent sparse-reward environment using RLlib's (1.4.0) PPO network, I found that my continuous actions turn into nan (explode?), which is probably caused by a bad gradient update, which in turn depends on the loss/objective function.

As I understand it, PPO's loss function relies on three terms:

  1. The clipped surrogate objective, which depends on the outputs of the old policy and the new policy, the advantage, and the "clip" parameter (=0.3)

  2. The Value Function Loss

  3. The Entropy Loss [mainly there to encourage exploration]

Total Loss = Surrogate objective (clipped) - vf_loss_coeff * VF Loss + entropy_coeff * entropy.

The surrogate loss (Reference: https://arxiv.org/abs/1707.06347)
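
For reference, my understanding of the clipped surrogate in code (a PyTorch-style sketch of the paper's equation, not RLlib's actual implementation) is:

    import torch

    def clipped_surrogate(new_log_prob, old_log_prob, advantage, clip=0.3):
        # log-densities of the *actions actually taken*, under the new and old policies
        ratio = torch.exp(new_log_prob - old_log_prob)
        unclipped = ratio * advantage
        clipped = torch.clamp(ratio, 1.0 - clip, 1.0 + clip) * advantage
        return torch.min(unclipped, clipped).mean()   # objective to be maximized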

I have a bunch of questions:

  1. Is the ratio $r_t(\theta)$ that of the actual actions taken under the new policy vs. the old policy, or is it the probability densities of those actions (since the actions are continuous)?

  2. Follow-up question to 1: assuming it is a probability, can the probability ever be 0? Because if it is ever 0, then the log probability would result in log(0) = -inf/undefined, which would prove that this is the root cause?

  3. If 1 and 2 are safely debunked, then do I

(A) lower my learning rate?

(B) Reduce the number of layers in my network?

(C) Use gradient clipping or action or reward clipping of some sort?

To anyone who would be kind enough to share any insights into the matter, you have my gratitude.

For more information, see the relevant part of the progress table below, where the total loss becomes inf. The only change I found is that the policy loss was all negative until row #445.

Row    Total loss    Policy loss    VF loss
430 6.068537 -0.053691725999999995 6.102932
431 5.9919114 -0.046943977000000005 6.0161843
432 8.134636 -0.05247503 8.164852
433 4.222730599999999 -0.048518334 4.2523246
434 6.563492 -0.05237444 6.594456
435 8.171028999999999 -0.048245672 8.198222999999999
436 8.948264 -0.048484523 8.976327000000001
437 7.556602000000001 -0.054372005 7.5880575
438 6.124418 -0.05249534 6.155608999999999
439 4.267647 -0.052565258 4.2978816
440 4.912957700000001 -0.054498855 4.9448576
441 16.630292999999998 -0.043477765999999994 16.656229
442 6.3149705 -0.057527818 6.349851999999999
443 4.2269225 -0.05446908599999999 4.260793700000001
444 9.503102 -0.052135203 9.53277
445 inf 0.2436709 4.410831
446 nan -0.00029848056 22.596403
447 nan 0.00013323531 0.00043436907999999994
448 nan 1.5656527000000002e-05 0.0002645221
449 nan 1.3344318000000001e-05 0.0003139485
450 nan 6.941916999999999e-05 0.00025863337
451 nan 0.00015686743 0.00013607396
452 nan -5.0206604e-06 0.00027541115000000003
453 nan -4.5543664e-05 0.0004247162
454 nan 8.841756999999999e-05 0.00020278389999999998
455 nan -8.465959e-05 9.261127e-05
456 nan 3.8680790000000003e-05 0.00032097592999999995
457 nan 2.7373152999999996e-06 0.0005146417
458 nan -6.271608e-06 0.0013273798000000001
459 nan -0.00013192794 0.00030621013
460 nan 0.00038987884 0.00038019830000000004
461 nan -3.2747877999999998e-06 0.00031471922
462 nan -6.9349815e-05 0.00038836736000000006
463 nan -4.666238e-05 0.0002851575
464 nan -3.7067155e-05 0.00020161088
465 nan 3.0623291e-06 0.00019258813999999998
466 nan -8.599938e-06 0.00036465342000000005
467 nan -1.1529375e-05 0.00016500981
468 nan -3.0851965e-07 0.00022042097
469 nan -0.0001133984 0.00030230957999999997
470 nan -1.0735256e-05 0.00034000343000000003

Optional

For even further context, check my related question

",21513,,21513,,7/5/2021 15:26,7/5/2021 15:26,RLlib's Multi-agent PPO continuous actions turn into nan,,0,4,,,,CC BY-SA 4.0 28530,1,28531,,7/5/2021 5:39,,0,72,"

I got this feedback for my thesis paper.

The improvement shown in the results section could be the result of random initialization. There should be multiple runs with means and standard deviations.

Can anyone explain this feedback with details?

I used a neural network with pre-trained weights for transfer learning (specifically, EfficientNetB0, with 'noisy-student' for the weights). It was a classification problem to classify between Covid-19, Viral Pneumonia, and normal cases. I normalised the dataset so that the images are in the range [0, 255] and I also did k-fold cross-validation.

",43993,,2444,,7/5/2021 12:54,7/5/2021 12:54,"Why would the ""improvement"" be the result of random initialization, and so why should we use multiple runs?",,1,0,,,,CC BY-SA 4.0 28531,2,,28530,7/5/2021 7:20,,2,,"

Neural networks use random number generators in multiple places. Most notably for weight initialisation, but also for features such as dropout, selecting minibatches within epochs, and train/cv split for cross-validation.

That means that any result metric from the neural network e.g. accuracy, loss, F1 score, is a random variable.

Reporting a single value of a random variable is not very informative. It may be higher or lower than expected, and it is difficult to tell if a result is significant.

Your reviewers are asking you to run the entire training and reporting from your dataset multiple times, to get the mean and standard deviation of any metrics that you reported.

You can keep the same train/test split for your hold-out test data set. Ideally this is the same train/test split as used by the classifiers that you are comparing your thesis results with.

If you used a seed for the RNG for repeatability, you can keep that as is and perform multiple training and reporting runs within a single script starting with the same seed set at the very beginning. Alternatively, if this would take too long, you could generate a set of seeds to use in consecutive runs - provided you are not selecting seed values depending on the results it does not matter.
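
In practice, the reporting loop can be as simple as the following sketch, where `train_and_evaluate` is a hypothetical wrapper around your own training and test-set evaluation code:

    import statistics

    # `train_and_evaluate(seed)` is assumed to build, train and evaluate the model
    # and return the metric of interest (e.g. test accuracy) for that seed.
    seeds = [0, 1, 2, 3, 4]
    accuracies = [train_and_evaluate(seed=s) for s in seeds]

    print(f"accuracy: {statistics.mean(accuracies):.4f} "
          f"+/- {statistics.stdev(accuracies):.4f} over {len(seeds)} runs")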

",1847,,,,,7/5/2021 7:20,,,,1,,,,CC BY-SA 4.0 28532,2,,28524,7/5/2021 7:53,,2,,"

Neural networks are trained by using pairs of example input/output vectors that they learn to associate and can generalise from. In that sense, they always need training data.

For supervised learning, a neural network (NN) is trained on a dataset of example inputs and outputs (aka "a labelled dataset") that the user must provide somehow.

There are scenarios involving neural networks that do not require the user to possess a labelled dataset, or even any dataset at all, but they all have some restriction:

  • The NN is already trained, and is now being used to make predictions. You do not need to have access to the original dataset to use a trained NN, although you do need access to some inputs that are of the same type that the network was trained on. The restriction here is that the NN cannot be trained further.

  • Semi-supervised learning: In some cases, the desired outputs can be generated automatically from the inputs (they may even be same as the inputs). You still need a dataset of inputs, but may be spared the hard work of adding labels, making it a lot easier to collect a dataset. The restriction here is that this is for specific use cases, such as Generative Adversarial Networks, and is not an approach you can use in general.

  • Reinforcement learning (RL). The labelled data for RL are generated through a trial and error process, and do not need to be provided separately by the user. However, the user does need to write the RL code for the environment to allow this data generation to happen. Internally in systems like Deep Q Networks (DQN), the training process looks a lot like supervised learning.

Both RL and semi-supervised learning are special cases of auto-generation of datasets, where the NN is being used to learn a complex function that can be calculated in some other way. As well as the semi-supervised and reinforcement learning cases, NNs have successfully been applied to fluid dynamics and ray tracing problems in this way, where the CPU cost for full calculation is even higher than using a neural network. These scenarios don't require you to possess a labelled dataset before training starts, but do require effort in developing something that generates input/output pairs.

",1847,,1847,,7/5/2021 9:56,7/5/2021 9:56,,,,2,,,,CC BY-SA 4.0 28535,1,28536,,7/5/2021 10:55,,1,31,"

The following figure comes from the paper The perceptron: A probabilistic model for information storage and organization in the brain

I can tell the labels pointed out by blue rectangles are: "Projection area", "A-units", "$R_1$", "inhibitory connections" and "$R_2$".

Could someone help tell what the labels pointed out by the red rectangles are?

",45689,,2444,,7/5/2021 12:24,7/5/2021 12:24,Could someone help tell what the labels are pointed out by red rectangles?,,1,0,,,,CC BY-SA 4.0 28536,2,,28535,7/5/2021 11:17,,1,,"

Left-to right:

red: SENSORY RECEPTOR OR
blue: PROJECTION AREA

blue: A-UNITS

blue: R$_1$
red: BROKEN LINES SHOW
blue: INHIBITORY CONNECTIONS

blue: R$_2$

",2193,,,,,7/5/2021 11:17,,,,0,,,,CC BY-SA 4.0 28537,1,,,7/5/2021 12:28,,3,408,"

Is it possible to exclude specific layers from the optimization?

For example, let's say I have an input layer, 2 hidden layers, and the output layer. I know there is a perfect solution for my problem with this setup and I already know the perfect weights between the first and the second hidden layer.

Can I have the weights between the first and the second hidden layer be fixed during the training phase?

I understand that I could simply not update these specific weights after computing the backpropagation for the entire network. But if I throw away those specific weight updates, will this affect the optimization of the rest of my weights?

",48314,,2444,,7/6/2021 1:38,7/6/2021 15:43,Can some of the weights be fixed during the training of a neural network?,,1,0,,,,CC BY-SA 4.0 28538,2,,28323,7/5/2021 12:37,,0,,"

The famous book Artificial Intelligence: A Modern Approach (by Stuart Russell and Peter Norvig) covers all or most of the theoretical aspects of artificial intelligence (such as deep learning) and it also dedicates one chapter to the common philosophical topics that you mention.

",2444,,,,,7/5/2021 12:37,,,,0,,,,CC BY-SA 4.0 28540,2,,28323,7/5/2021 21:18,,0,,"

I would like to add "The Master Algorithm" by Pedro Domingos. I would say it's more philosophical but still provides high level discussions about differences between algorithms. He also has a sense of humor which makes it a lighter read.

",47898,,2444,,7/5/2021 21:52,7/5/2021 21:52,,,,0,,,,CC BY-SA 4.0 28542,2,,28537,7/6/2021 1:43,,4,,"

Yes, you can fix (or freeze) some of the weights during the training of a neural network. In fact, this is done in the most common form of transfer learning (which is described here). I don't know exactly how this affects learning in general. In transfer learning, this is definitely beneficial, as we are freezing the weights that are associated with the learned general features of objects, such as corners (where general here is defined intuitively), which can be useful for other tasks, so, by having them frozen, we reuse them.
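
In Keras, for instance, this amounts to setting `trainable = False` on the layer that holds the weights you want to keep fixed, before compiling. Here is a minimal sketch with made-up layer sizes and a made-up layer name:

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(128, activation="relu", name="frozen_hidden"),
        tf.keras.layers.Dense(3),
    ])

    # The weights *between* the first and second hidden layer live in the second
    # Dense layer, so freezing that layer keeps them fixed during training.
    model.get_layer("frozen_hidden").trainable = False

    model.compile(optimizer="adam", loss="mse")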

",2444,,2444,,7/6/2021 15:43,7/6/2021 15:43,,,,0,,,,CC BY-SA 4.0 28543,2,,28524,7/6/2021 2:00,,1,,"

In case the question is if NNs can be trained without data, as pointed by others, the answer is negative - any training by definition involves the use of data in some way - supervised, semi-supervised, reward, etc.

However, if the question is whether one can obtain something useful I would think about the following use cases:

  1. One can use randomly initialized networks as a random map. The applications of this seem to be rather specific, but they may exist.
  2. One can let the weights evolve, as in a statistical physics system of the form $$ w_{n+1} - w_n = f(w) $$ where $f(w)$ can be deterministic or non-deterministic. This is not actually a neural network, but a related concept - https://en.wikipedia.org/wiki/Boltzmann_machine.
",38846,,2444,,7/6/2021 12:50,7/6/2021 12:50,,,,0,,,,CC BY-SA 4.0 28544,1,,,7/6/2021 4:56,,2,1525,"

I have a neural network where there are two hidden layers. Each hidden layer has 128 neurons. The input layer has 20 inputs, and the output layer has 3 outputs.

I have 1 million records of data. 80% is used to train the network, 20% is used for validation. I run the training for 100000 epochs.

I see that the neural network attains 100% accuracy on the training data after only 12000 epochs.

Should I stop training or continue until all 100000 epochs are complete? Please, explain why.

",20721,,2444,,7/6/2021 11:10,7/6/2021 14:23,Should I continue training if the neural network attains 100% training accuracy?,,1,2,,,,CC BY-SA 4.0 28545,2,,28544,7/6/2021 8:43,,6,,"

First of all, as mentioned by @Neil Slater in the comments, you need to split your data three ways: into a train, a validation and a test set.

One sometimes disregards the difference between the validation and the test set. However, they serve different purposes. Here I would like to cite https://machinelearningmastery.com/difference-test-validation-datasets/ :

Validation Dataset: The sample of data used to provide an unbiased evaluation of a model fit on the training dataset while tuning model hyperparameters. The evaluation becomes more biased as skill on the validation dataset is incorporated into the model configuration.

Test Dataset: The sample of data used to provide an unbiased evaluation of a final model fit on the training dataset.

Secondly, in order to understand what's happening, plot the training and validation losses together. If the performance on the validation data becomes much worse than that on the training data, it is better to terminate training, since this is an indication of overfitting.

A good practice is to use early stopping; there is an implementation of this callback in TensorFlow: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping.

It is a kind of regularization procedure: https://en.wikipedia.org/wiki/Early_stopping.
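
A minimal sketch of how this callback is typically wired in (the `fit` call is commented out, since it depends on your own model and data objects):

    import tensorflow as tf

    # Stop when the validation loss has not improved for 10 epochs and
    # roll back to the best weights seen so far.
    early_stopping = tf.keras.callbacks.EarlyStopping(
        monitor="val_loss",
        patience=10,
        restore_best_weights=True,
    )

    # model.fit(x_train, y_train,
    #           validation_data=(x_val, y_val),
    #           epochs=100_000,
    #           callbacks=[early_stopping])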

",38846,,38846,,7/6/2021 14:23,7/6/2021 14:23,,,,4,,,,CC BY-SA 4.0 28546,2,,28526,7/6/2021 10:46,,0,,"

$P(d)$ (aka the evidence) is the probability of your data (observation) and is defined as follows:

$$ P(d) = \sum_i P(d|c_i)P(c_i) $$

where the sum runs over all classes $c_i$.

According to the book, $P(c)=\frac{N_c}{N_{doc}}$ and $P(d|c)$ is a likelihood and, applying the assumptions from the book, can be defined as $P(w_i |c)=\frac{\text{count}(w_i, c)+1}{\sum_{w \in V}\text{count}(w, c) + |V|}$, where $V$ consists of the union of all the word types in all classes.

Taking the example from Section 4.3 of the book,

Dataset Cat Documents
Training - just plain boring
- entirely predictable and lacks energy
- no surprises and very few laughs
+ very powerful
+ the most fun film of the summer
Test ? predictable with no fun

we'll get:

$$ P(-) = \frac{3}{5}\\ P(+) = \frac{2}{5}\\ P(S|−) = \frac{2\times 2\times 1}{34^3}\\ P(S|+) = \frac{1\times 1 \times 2}{29^3}\\ P(d) = P(S|−)P(−) + P(S|+)P(+) = 6.1\times 10^{−5} + 3.2\times 10^{−5} $$
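
In code, the computation for this toy example is just:

    # Reproduces the numbers above for the book's toy example.
    p_neg, p_pos = 3 / 5, 2 / 5
    p_s_given_neg = (2 * 2 * 1) / 34 ** 3
    p_s_given_pos = (1 * 1 * 2) / 29 ** 3

    p_d = p_s_given_neg * p_neg + p_s_given_pos * p_pos
    print(p_d)  # ~9.3e-05, i.e. 6.1e-05 + 3.2e-05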

",12841,,12841,,7/6/2021 10:56,7/6/2021 10:56,,,,5,,,,CC BY-SA 4.0 28548,1,28650,,7/6/2021 13:30,,-1,216,"

I am using TD3 on a custom gym environment, but the problem is that the action values stick to the ends of the allowed range. Sticking to the end values makes the reward negative; to make it positive, the agent must find action values somewhere in the middle. But the agent doesn't learn that and keeps the action values at the maximum.

I am using a one-step termination environment (the environment needs actions only once per episode).

How can I improve my model? I want the action values to stay roughly within 80% of the maximum values.

In DDPG, we have inverted gradients, but could something similar be applied to TD3 to make the agent search within the legal action space more?

The score decreases as the number of episodes increases.

",48002,,2444,,4/14/2022 12:32,4/14/2022 12:32,TD3 sticking to end values,,1,2,,4/14/2022 16:17,,CC BY-SA 4.0 28549,1,,,7/6/2021 20:38,,2,249,"

In transfer learning, we use big data from similar tasks to learn the parameters of a neural network, and then fine-tune the neural network on our own task that has little data available for it. Here, we can think of the transfer learning step as learning a (proper) prior, and then fine-tuning as learning the posterior.

So, we can argue that Bayesian neural networks can also address the problem of small-dataset regimes. But what are the directions in which we can mix Bayesian neural networks with tasks similar to transfer learning, for example, few-shot learning?

They make sense when they both take a role as a solution to the low data regime problems, but I can't think of a mix of them to tackle this issue.

Is it possible, for example, to learn a BNN for which we have picked a good prior to learn the posterior with little data and use the weight distribution for learning our new task? Is there any benefit in this?

",37744,,2444,,7/7/2021 10:46,11/29/2022 13:07,How could Bayesian neural networks be used for transfer learning?,,1,2,,,,CC BY-SA 4.0 28551,2,,28549,7/6/2021 21:05,,0,,"

Well, I would say that the purpose of Bayesian inference is not transfer learning, but uncertainty estimation.

In case you have a good feature extractor to begin with, you can adjust a small number of parameters, like the last few layers, to achieve good quality in a few epochs.

However, this is about adjusting the means of distributions over each weight.

Concerning the variance, I think transfer learning is inapplicable since the source and target distributions can be very different. For example, ImageNet is a broad and diverse dataset with many classes, and the target problem can involve only a few classes. Most probably, the uncertainty estimates and the standard deviations of the model weights on ImageNet would be larger than for a model trained solely on the target task.

",38846,,,,,7/6/2021 21:05,,,,1,,,,CC BY-SA 4.0 28552,1,,,7/6/2021 23:09,,1,340,"

Consider the following statement from p14 of Naive Bayes and Sentiment Classification

While the use of a devset avoids overfitting the test set, having a fixed training set, devset, and test set creates another problem: in order to save lots of data for training, the test set (or devset) might not be large enough to be representative.

I have heard about overfitting on training data. A model is said to overfit the training data if it gives a low training error and a high test error.

But what does it mean to overfit the test set?

",18758,,2444,,7/7/2021 9:30,7/7/2021 9:30,What does it mean by overfitting the test set?,,1,0,,,,CC BY-SA 4.0 28554,1,,,7/7/2021 0:48,,0,83,"

In the context of research papers related to deep learning models, the authors usually mention these terms in the experiment section when they are talking about the model: configuration, setup. For example: Akbik et al. 2018.

For example:

  1. "We utilize the BiLSTM-CRF sequence labeling architecture proposed by Huang et. al (2015) in all configurations of our comparative evaluation."

  2. "Baselines. We also evaluate setups that involve only previous word embeddings."

What is the difference between the terms? Is the model architecture the same with different hyperparameters?

Thank you in advance.

",44718,,44718,,7/7/2021 10:51,7/7/2021 10:51,"What is the difference between model setup, model configuration, and model customization?",,0,2,,,,CC BY-SA 4.0 28555,1,,,7/7/2021 5:26,,1,30,"

Let $A$ and $B$ be two models for a classification task, let $x$ be a test set, and let $M$ be a metric for the classification task. Let $X$ be a random variable over test sets.

Now,

$M(A,x) = $ Score of model $A$ on test set $x$

$M(B,x)$ = Score of model $B$ on test set $x$

$\delta(x) =$ difference in performance of models wrt test set $x$ $= M(A, x)-M(B,x)$

Now, consider the following (statistical) hypothesis on the performance difference $\delta$

$$H_0 : \delta(x) \le 0$$ $$H_1 : \delta(x) > 0$$

We define the $p$-value as follows

$$P(\delta(X) \ge \delta(x) \mid H_0 \text{ is true}) $$

With this as context, I am confused by the following paragraph (taken from p15 of Naive Bayes and Sentiment Classification)

So in our example, this $p-$value is the probability that we would see $\delta(x)$ assuming $A$ is not better than B. If $\delta(x)$ is huge (let’s say $A$ has a very respectable $M$ of $.9$ and $B$ has a terrible $M$ of only $.2$ on $x$), we might be surprised, since that would be extremely unlikely to occur if $H_0$ were in fact true, and so the $p-$value would be low (unlikely to have such a large $\delta$ if $A$ is in fact not better than $B$). But if $\delta(x)$ is very small, it might be less surprising to us even if $H_0$ were true and $A$ is not really better than $B$, and so the $p-$value would be higher.

The paragraph says that the $p$-value is very low if $A$'s performance is (much) better than $B$'s.

I am thinking that the $p$-value should be zero if $A$'s performance is better than $B$'s, since that is a disjoint event with respect to $H_0$. Where am I going wrong?

",18758,,2444,,7/7/2021 9:46,7/7/2021 9:46,How can the probability of two disjoint events be non-zero?,,0,2,,,,CC BY-SA 4.0 28556,1,28647,,7/7/2021 6:54,,2,513,"

I would like to know how I could measure the difference in pronunciation of two words. These two words are quite similar and differ only in one vowel. I know there is, e.g., the Hamming distance or the Levenshtein distance, but they measure the "general" difference between words. I'm also interested in that, but mainly I would like to know how differently the two words sound. I think there must be something like this to test text-to-speech results?

Best of all would be an online source where I could just type in the two words.
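For illustration, this is the kind of comparison I have in mind: an edit distance over phonemes rather than letters (the ARPAbet transcriptions below are written out by hand just for the example; in practice they could come from a pronunciation dictionary such as the CMU Pronouncing Dictionary):

def levenshtein(a, b):
    # plain edit distance between two sequences (insertions, deletions, substitutions)
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (x != y))
    return dp[-1]

# hand-written ARPAbet transcriptions, just for illustration
bit = ["B", "IH", "T"]
bat = ["B", "AE", "T"]

print(levenshtein(bit, bat))      # 1: the two words differ in a single phoneme
print(levenshtein("bit", "bat"))  # also 1 here, but computed on letters instead of sounds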

",26353,,,,,7/13/2021 6:45,How to measure the similarity the pronunciation of two words?,,1,0,,,,CC BY-SA 4.0 28557,1,,,7/7/2021 8:06,,1,60,"

Without any evidence, I have wondered whether it might be possible to predict the upcoming mutations of the COVID-19 virus. I further assume people have tried to do so.

So, has someone correctly predicted the emergence of one of the variants of SARS-CoV-2 (like the Delta variant)?

I would be happy to have an explanation in layman's terms and citations to papers (if any).

",48420,,2444,,12/6/2021 8:19,12/6/2021 8:24,Has someone correctly predicted one of the variants of SARS-CoV-2 (like the Delta variant)?,,1,1,,,,CC BY-SA 4.0 28558,2,,28552,7/7/2021 8:25,,3,,"

Essentially, any data you use to train or develop the model shouldn't be used as test data. In principle, "unseen" data gives a good estimate for the generalisation performance of the model; but this is only valid if the data really is unseen and hasn't been used in the model development process. If you've been tuning a model to increase its accuracy on the test set, then that data has influenced the model, so it's not unseen any more!

An example of what is wrong:

  1. Train a neural network on the training set.
  2. Evaluate the performance on the test set, and then change the parameters of the model in some way to try and increase the test set performance.
  3. Use the best parameters you found, and get a final evaluation of performance on the test set.

To make this procedure legitimate, you should have a three way split: train, dev, test. Do the tuning on the dev set, and then you can get a final estimate of generalisation using the test set.
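For concreteness, a minimal sketch of such a three-way split (using scikit-learn's train_test_split; X and y are assumed to hold your features and labels, and the proportions are arbitrary):

from sklearn.model_selection import train_test_split

# first carve out the final test set, then split the rest into train and dev
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_dev, y_train, y_dev = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

# tune hyperparameters using (X_dev, y_dev) only;
# touch (X_test, y_test) once, at the very end, for the final estimate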

If you don't do this, you'll generally think your model is a lot more accurate than it actually is. It's just like trying to estimate your generalisation performance from the training set, which I'm sure you know is a bad idea!

This phenomenon is what is sometimes known as overfitting the test set. To see why this name is used, just consider what overfitting the training data is: picking parameters that seem to fit the training data well, but don't generalise well. Likewise, overfitting the test set involves picking hyperparameters that seem to work well, but don't generalise. In each case, the solution is to have an additional set so you can get an unbiased estimate of what's actually happening.

",44413,,44413,,7/7/2021 9:12,7/7/2021 9:12,,,,2,,,,CC BY-SA 4.0 28559,1,28682,,7/7/2021 8:50,,2,1755,"

Recently, I came across the BERT model. I did some research and tried some implementations.

I wanted to tackle an NER task, so I chose the BertForSequenceClassification model provided by HuggingFace.

for epoch in range(1, args.epochs + 1):
    total_loss = 0
    model.train()
    for step, batch in enumerate(train_loader):
        # unpack the batch and move it to the device
        b_input_ids = batch[0].to(device)
        b_input_mask = batch[1].to(device)
        b_labels = batch[2].to(device)
        model.zero_grad()

        # forward pass: when labels are given, the first output is the loss
        outputs = model(b_input_ids, token_type_ids=None, attention_mask=b_input_mask, labels=b_labels)
        loss = outputs[0]

        total_loss += loss.item()
        # backward pass with gradient clipping
        loss.backward()
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        # parameters are modified based on their gradients, the learning rate, etc.
        optimizer.step()

The main part of my fine-tuning loop is shown above.

I am curious about the extent to which the fine-tuning alters the model. Does it freeze the weights provided by the pre-trained model and only alter the top classification layer, or does it also change the hidden layers contained in the already pre-trained BERT model?
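For context, this is how I check which parameters are trainable (and how I could freeze the pre-trained body if I wanted to; this assumes the HuggingFace model exposes its encoder as model.bert):

# count the parameters that will receive gradient updates during fine-tuning
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"{trainable} / {total} parameters require gradients")

# if one wanted to freeze the pre-trained body and train only the classification head:
# for param in model.bert.parameters():
#     param.requires_grad = False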

",48422,,2444,,7/15/2021 12:30,7/15/2021 12:30,Does BERT freeze the entire model body when it does fine-tuning?,,1,0,,,,CC BY-SA 4.0 28560,1,28566,,7/7/2021 10:20,,3,669,"

I am following the book "Reinforcement Learning: An Introduction" by Richard Sutton and Andrew Barto, and they give an example of a problem for which the value function can be computed explicitly by solving a system of $\lvert S \rvert $ equations that have $\lvert S \rvert $ unknowns. Each of these $\lvert S \rvert$ equations is given by:

$$v_{\pi}(s) = \sum_{a} \pi(a\rvert s) \sum_{s^{\prime}}\sum_{r} p(s^{\prime}, r \rvert s,a)[r + \gamma v_{\pi}(s^{\prime})] $$

I am having a hard time understanding how one could solve this system of equations. It seems to me as if each equation consists of a summation of an infinite number of terms, and therefore one would not be able to solve them analytically. Could anyone offer any intuition as to how this system of equations could be explicitly solved?

",48423,,2444,,7/7/2021 15:55,7/7/2021 20:44,How can we find the value function by solving a system of linear equations?,,2,0,,,,CC BY-SA 4.0 28562,2,,28560,7/7/2021 11:57,,3,,"

Provided you have a finite number of states and actions, then there will not be an infinite number of terms. Therefore the state and action spaces need to be discrete and finite before the quote from the book applies.

I am having a hard time understanding how one could solve this system of equations.

There are a few techniques for solving simultaneous equations.

However, what I would probably do is number all the state values from $v_1 = v_\pi(s_1)$ to $v_{N = |\mathcal{S}|} = v_\pi(s_N)$, and write out each line in order:

$$v_1 = w_{1,1} v_1 + w_{1,2} v_2 + w_{1,3} v_3 + ... w_{1,N} v_N + r_1$$

Where $r_1$ is a constant - it is the expected immediate reward when starting from state $1$, but that is not important. It is the constant offset value you get from resolving the sum that is not multiplied by any $v_i$ unknown variable.

You can discover the values of $w_{i,j}$ by expanding the sum in the Bellman equation for each state in turn.

At that point you can build a matrix of the weights, and solve the linear equations by taking the inverse of the matrix.

[from comments] But if the game has no end then theoretically the sum of expected future rewards should be infinite.

The time series definition of $v_{\pi}(s)$:

$$v_{\pi}(s) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} | S_t = s]$$

does not appear in the Bellman equation used to establish the linear equations. This is the main benefit of the Bellman equation, it changes the infinite series view of returns into a set of relations that must hold between the value functions of each state.

",1847,,1847,,7/7/2021 20:44,7/7/2021 20:44,,,,2,,,,CC BY-SA 4.0 28564,1,,,7/7/2021 13:26,,5,6548,"

When we are training a neural network, we have to determine the embedding size to convert the categorical (in NLP, for instance) or continuous (in computer vision or voice) information to hidden vectors (or embeddings), but I wonder whether there are any rules for setting its size?

",5351,,2444,,7/11/2021 13:06,9/23/2022 4:07,How to determine the embedding size?,,3,0,,,,CC BY-SA 4.0 28565,2,,28564,7/7/2021 13:26,,1,,"

I got an answer from this book: Machine Learning Design Patterns: Solutions to Common Challenges in Data Preparation, Model Building, and MLOps.

If we’re in a hurry, one rule of thumb is to use the fourth root of the total number of unique categorical elements while another is that the embedding dimension should be approximately 1.6 times the square root of the number of unique elements in the category, and no less than 600.
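As a quick sketch (my own illustration of the two rules of thumb quoted above):

import math

def embedding_size(num_categories):
    # two rules of thumb from the quote above (illustrative only)
    return {
        "fourth_root": round(num_categories ** 0.25),
        "1.6_times_sqrt": round(1.6 * math.sqrt(num_categories)),
    }

print(embedding_size(10_000))   # {'fourth_root': 10, '1.6_times_sqrt': 160}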

",5351,,,,,7/7/2021 13:26,,,,0,,,,CC BY-SA 4.0 28566,2,,28560,7/7/2021 14:42,,1,,"

First of all, we assume that we have a finite MDP, i.e. the set of states $\mathcal{S}$, the set of actions $\mathcal{A}$ and the set of rewards $\mathcal{R}$ all have a finite number of elements (I didn't think about how the explanations below would extend to other cases, but I suspect you will need differential equations).

For simplicity, let's only consider the value function $v$ (as opposed to the state-action value function $q(s, a)$, but this also applies to $q$). The value function $v$ is defined for all states, i.e. it's a function of the form $v : \mathcal{S} \rightarrow \mathbb{R}$ or, with an alternative notation, $v(s), \forall s \in \mathcal{S}$. So, we can define this function as a vector $\mathbf{v}$ of dimension $|\mathcal{S}| = n$, i.e. $\mathbf{v} \in \mathbb{R}^{|\mathcal{S}|}$, where the $i$th element contains the value of the $i$th state (so we need a function that maps states to indices of this vector, but this is trivial).

The fact that you can represent the value function, in this finite MDP, as a vector should already suggest that you can find this value function by solving a linear system of equations.

However, let me show that by starting with the definition of the value function you also provided

\begin{align} v_{\pi}(s) &= \sum_{a} \pi(a\rvert s) \sum_{s^{\prime}}\sum_{r} p(s^{\prime}, r \rvert s,a)[r + \gamma v_{\pi}(s^{\prime})] \label{1}\tag{1}, \; \forall s \in \mathcal{S} \end{align} which can be expanded as follows \begin{align} v_{\pi}(s) &= \sum_{a} \pi(a\rvert s) \sum_{s^{\prime}}\sum_{r} \left[p(s^{\prime}, r \rvert s,a)r + \gamma p(s^{\prime}, r \rvert s,a) v_{\pi}(s^{\prime}) \right] \\ &= \sum_{a} \pi(a\rvert s) \left[ \sum_{r} \underbrace{\sum_{s^{\prime}} p(s^{\prime}, r \rvert s,a)}_{\text{Marginalization of }p \text{ over } s'}r + \gamma \sum_{s^{\prime}} \underbrace{\sum_{r} p(s^{\prime}, r \rvert s,a) }_{\text{Marginalization of }p \text{ over } r} v_{\pi}(s^{\prime}) \right]\\ &= \sum_{a} \pi(a\rvert s) \left[ \sum_{r} p(r \rvert s, a)r + \gamma \sum_{s^{\prime}}p(s^{\prime} \rvert s,a) v_{\pi}(s^{\prime}) \right] \\ &= \sum_{a} \pi(a\rvert s) \left[ r(s, a) + \gamma \sum_{s^{\prime}}p(s^{\prime} \rvert s,a) v_{\pi}(s^{\prime}) \right] \label{2}\tag{2}, \; \forall s \in \mathcal{S} \end{align} where

  • $\sum_{r} p(r \rvert s, a)r = r(s, a)$ (see this).
  • $\sum_{s^{\prime}}p(s^{\prime}, r \rvert s,a) = p(r \rvert s, a)$ (marginalization)
  • $\sum_{r} p(s^{\prime}, r \rvert s,a) =p(s^{\prime} \rvert s,a) $ (marginalization)

In this form, as in equation \ref{2}, the value function can also be written in a different notation

\begin{align} v_{\pi}(s) &= \sum_{a} \pi(a\rvert s) \left[ R_{s}^a + \gamma \sum_{s^{\prime}}P_{ss'}^a v_{\pi}(s^{\prime}) \right] \label{3}\tag{3}, \; \forall s \in \mathcal{S} \end{align} where

  • $R_{s}^a = r(s, a)$
  • $P_{ss'}^a = p(s^{\prime} \rvert s,a)$

We can still write equation \ref{3} in a "simpler" form as follows \begin{align} v_{\pi}(s) &= \sum_{a} \pi(a\rvert s) R_{s}^a + \gamma \sum_{s^{\prime}} \sum_{a} \pi(a\rvert s) P_{ss'}^a v_{\pi}(s^{\prime}) \\ &= R_{s}^\pi + \gamma \sum_{s^{\prime}} P_{ss'}^\pi v_{\pi}(s^{\prime}) \label{4}\tag{4}, \; \forall s \in \mathcal{S} \end{align} where

  • $\sum_{a} \pi(a\rvert s) R_{s}^a = R_{s}^\pi$
  • $\sum_{a} \pi(a\rvert s) P_{ss'}^a = P_{ss'}^\pi$

We can write the definition of the value function in \ref{4} in matrix form for all states $s \in \mathcal{S}$ as follows

\begin{align} \begin{bmatrix} v_\pi(1) \\ \vdots \\ v_\pi(n) \end{bmatrix}= \begin{bmatrix} {R}_1^\pi \\ \vdots \\ {R}_n^\pi \end{bmatrix} +\gamma \begin{bmatrix} {P}_{11}^\pi & \dots & {P}_{1n}^\pi\\ \vdots & \ddots & \vdots\\ {P}_{n1}^\pi & \dots & {P}_{nn}^\pi \end{bmatrix} \begin{bmatrix} v_\pi(1) \\ \vdots \\ v_\pi(n) \end{bmatrix} \tag{5}\label{5}, \end{align} which can be written in a more compact form as follows \begin{align} \mathbf{v} = \mathbf{r} + \gamma \mathbf{P}\mathbf{v} \tag{6}\label{6}, \end{align} which is a very compact form of the Bellman equation (which is a recursive equation: as you can notice, the $\mathbf{v}$ appears on the left and right of the equals sign) that represents the value function (i.e. the value function can be defined as a recursive equation).

In equation \ref{5}, the unknowns are the $|\mathcal{S}| = n$ values of the value function $v$ and there are $n$ equations, so it should now be clear why we can solve this problem by solving a system of equations. Note that here it's assumed that $\pi$, $r(s, a)$ and $p$ are given and known, which, generally, is not the case, that's why we use algorithms like Q-learning.
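To make this concrete, here is a minimal NumPy sketch (with a made-up 3-state MDP) that solves equation \ref{6}, i.e. $(\mathbf{I} - \gamma \mathbf{P})\mathbf{v} = \mathbf{r}$:

import numpy as np

# toy policy-averaged quantities for a 3-state MDP (made up for illustration)
P = np.array([[0.5, 0.5, 0.0],    # P[s, s'] = P_{ss'}^pi
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8]])
r = np.array([1.0, 0.0, -1.0])    # r[s] = R_s^pi
gamma = 0.9

# v = r + gamma P v  <=>  (I - gamma P) v = r
v = np.linalg.solve(np.eye(3) - gamma * P, r)
print(v)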

",2444,,2444,,7/7/2021 15:56,7/7/2021 15:56,,,,0,,,,CC BY-SA 4.0 28567,2,,28564,7/7/2021 14:59,,4,,"

In most cases, it seems that the embedding dimension is chosen empirically, by trial and error.

Older papers in NLP conventionally used 300: https://petuum.medium.com/embeddings-a-matrix-of-meaning-4de877c9aa27. More recent papers use 512, 768, or 1024.

One of the factors influencing the choice of embedding size is how you would like different vectors to correlate with each other. In a high-dimensional space, vectors chosen at random will, with high probability, be approximately mutually orthogonal, whereas in low dimensions, with many different classes, many vectors will have dot products significantly different from 0.

I think that if one expects many vectors to be correlated, then the dimension shouldn't be very high. Conversely, if each of the possible keys in the embedding is expected to produce a different, unrelated vector, then the dimensionality is expected to be large.

",38846,,,,,7/7/2021 14:59,,,,0,,,,CC BY-SA 4.0 28568,1,,,7/7/2021 15:13,,2,164,"

On OpenAI's Spinning Up, they justify the fact that adding a baseline $b(s_t)$ to the policy gradient doesn't change its gradient by saying that this is

an immediate consequence of the EGLP Lemma

However, I did not manage to prove it with this lemma. Can somebody help me, please?

The proof is trivial when $b$ is a constant, but I struggle to derive it whenever $b$ is a function of the current state $s$ because you can't take it out of the integral.

",48439,,2444,,7/11/2021 13:04,7/11/2021 13:04,Why adding a baseline doesn't affect the policy gradient?,,1,0,,,,CC BY-SA 4.0 28570,1,,,7/7/2021 17:13,,0,49,"

I have a text generation model and I want to evaluate its output by comparing it to a set of gold human-annotated references.

I went through machine-translation metrics and found that BLEU is usually used as the main metric. I didn't like using it because it's shallow, as it relies on n-gram comparison; the semantics of the translation are missed.

Is there any other metric to do a semantic-based evaluation?

I've thought of using a text similarity model to evaluate the output, or even an NLI (natural language inference) system. I am not sure how precise the evaluation will be, because SOTA systems are not really accurate.

",40251,,,,,10/15/2022 19:06,Semantic-based evaluation of translations instead of BLEU,,1,0,,,,CC BY-SA 4.0 28571,2,,28568,7/7/2021 17:46,,2,,"

The policy gradient states that $$\nabla J(\theta) \propto \sum_s \mu(s) \sum_a q_\pi(s, a) \nabla\pi(a | s; \theta)\;$$ where the derivatives are taken wrt the parameter $\theta$.

Now, if we say we incorporate a baseline we get $$\nabla J(\theta) \propto \sum_s \mu(s) \sum_a \left( q_\pi(s, a) - b(s) \right)\nabla\pi(a | s; \theta)\;$$ and this does not affect the gradient at all. To see this, note that $$\sum_a b(s) \nabla\pi(a|s; \theta) = b(s) \nabla \sum_a \pi(a|s; \theta) = b(s) \nabla 1 = 0\;;$$ where all I have done is expand the bracketed terms inside the sum over $a$ from the second equation, and shown that the new term is equal to 0; thus the gradient is unchanged.

If you really want to confirm this, then you can fully write down the expansion of the second equation and use the trick I have shown in my third equation to see that the expanded second equation is equal to the first equation.

I imagine that the EGLP lemma the authors refer to uses a similar trick, namely that the derivative of a probability distribution, summed (or integrated) over the support of the random variable, equals 0; this is what I have used here in the step $\nabla \sum_a\pi(a|s; \theta) = \nabla 1$.

",36821,,,,,7/7/2021 17:46,,,,0,,,,CC BY-SA 4.0 28572,1,28582,,7/7/2021 22:07,,1,521,"

Specifically for continuous control PPO, let's say my action space range is between $X$ (low) and $Y$ (high) and they are all sampled from a Gaussian Action Distribution with mean $\mu$ and standard deviation $\rho$.

From what I understood, the actions sampled should fall between $\mu - \rho$ and $\mu + \rho$, but that's not what happens in practice. What am I misunderstanding here? How do I ensure this range constraint for a custom action distribution with a given mean and standard deviation?

Any advice or tips for me? I would really appreciate any insights!

",21513,,2444,,7/8/2021 18:07,10/19/2021 12:35,How to define a continuous action distribution with a specific range for Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 28574,1,28588,,7/7/2021 22:32,,3,605,"

The sigmoid, tanh, and ReLU are popular and useful activation functions in the literature.

The following excerpt taken from p4 of Neural Networks and Neural Language Models says that tanh has a couple of interesting properties.

For example, the tanh function has the nice properties of being smoothly differentiable and mapping outlier values toward the mean.

A function is said to be differentiable if it is differentiable at every point in its domain. The domain of tanh is $\mathbb{R}$ and $ \dfrac{e^x-e^{-x}}{e^x+e^{-x}}$ is differentiable on $\mathbb{R}$.

But what is meant by "smoothly differentiable" in the case of tanh activation function?

",18758,,18758,,7/8/2021 10:54,7/8/2021 13:38,"Why is tanh a ""smoothly"" differentiable function?",,1,2,,,,CC BY-SA 4.0 28577,1,,,7/7/2021 22:51,,2,920,"

I know two differences between a neuron and a perceptron

  1. A neuron employs a non-linear activation function, whereas a perceptron employs only a threshold activation function.

  2. The output of a neuron is not necessarily a binary number, whereas the output of a perceptron is always a binary number.

I know of no differences between a perceptron and a neuron other than the above.

Are there any other differences between perceptron and neuron?

",18758,,18758,,12/18/2021 0:40,12/18/2021 0:42,What are (all) the differences between a neuron and a perceptron?,,1,0,,,,CC BY-SA 4.0 28578,1,,,7/8/2021 2:08,,3,266,"

I came across the following statement from the caption of figure 7.8 from the textbook Neural Networks and Neural Language Models

the input layer is usually not counted when enumerating layers

Why is the input layer excluded from counting?

Is the reason just convention or based on its contribution?

",18758,,2444,,7/8/2021 11:42,7/8/2021 11:57,Why is the input layer of a neural network usually not counted?,,1,1,,,,CC BY-SA 4.0 28582,2,,28572,7/8/2021 10:08,,1,,"

First of all, the support of a normal distribution is the entire real line (or, in general, $\mathbb{R}^n$ for an $n$-dimensional multivariate normal distribution), so your action can be any number in $\mathbb{R}$. What you may be getting confused with is that with probability roughly 0.68 you will obtain an action that is within +/- 1 standard deviation of the mean.

Now, to answer the question of how you can do this using RL:

To use the normal distribution in this setting, I would simply clip my actions in the environment. If, for example, the actor gives an action below your minimum value $X$, let's say it gives $X-0.5$, then I would simply clip the action to $X$ when it is executed in the environment. This way your actor can still sample from a normal distribution, which could give values below $X$ (or above $Y$), and still be used with your environment.

If, for instance, your desired range was $(-1, 1)$ then another option would be to define your distribution to be $Y = \mbox{tanh}(X)$, where $X \sim N(\mu, \sigma)$. You can then find the density function of $Y$ using e.g. the density transformation method.
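For illustration, a minimal sketch of both options (the bounds $X$, $Y$ and the Gaussian parameters are placeholders):

import numpy as np

X, Y = -2.0, 2.0            # hypothetical action bounds
mu, sigma = 0.0, 1.0        # parameters produced by the actor

a = np.random.normal(mu, sigma)          # raw sample, may fall outside [X, Y]

a_clipped = np.clip(a, X, Y)             # option 1: clip before executing in the environment

a_squashed = X + (np.tanh(a) + 1.0) * (Y - X) / 2.0   # option 2: tanh squash, rescaled to [X, Y]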

",36821,,36821,,10/19/2021 12:35,10/19/2021 12:35,,,,4,,,,CC BY-SA 4.0 28584,2,,1,7/8/2021 10:32,,0,,"

We need to compute gradients in order to train deep neural networks. A deep neural network consists of many layers, with weight parameters between the layers. Since we need to compute the gradient of the loss function with respect to each weight, we use an algorithm called backprop. It is an abbreviation for backpropagation, which is also called error backpropagation or reverse differentiation.

It can be understood well from the following paragraph taken from Neural Networks and Neural Language Models

For deep networks, computing the gradients for each weight is much more complex, since we are computing the derivative with respect to weight parameters that appear all the way back in the very early layers of the network, even though the loss is computed only at the very end of the network. The solution to computing this gradient is an algorithm called error backpropagation or backprop. While backprop was invented for neural networks, it turns out to be the same as a more general procedure called backward differentiation, which depends on the notion of computation graphs.

",18758,,18758,,7/8/2021 10:45,7/8/2021 10:45,,,,0,,,,CC BY-SA 4.0 28585,1,,,7/8/2021 11:43,,0,206,"

I need to input data conditionally to my deep network. To explain the cases, I'd like to give an example. Assume that I have a 50-attribute dataset. For some attributes, a specific part of the hidden layers is responsible, and for others, a different part is responsible. Also, in some cases, the same parts of the hidden layers might intersect. I think I can decide which attributes must go to which hidden neurons in the input layer by using some kind of if-else block. However, I could not figure out how.

My current idea

I can enter an identity element for some attributes. For example, I have attributes att1, att2, att3, etc., and instances ins1, ins2, etc. For ins1 -> att1 = 0.5, att2 = 0.2, att3 = None. For ins2 -> att1 = 0.1, att2 = None, att3 = None.

But, if I follow this approach, the number of attributes per instance becomes unnecessarily large.

End of my current idea

Are there any opinions on this? Should I rearrange my Excel file, or is there any way to use if-else conditions? Regards,

",48462,,,,,7/8/2021 11:43,Conditional input deep neural network,,0,2,,,,CC BY-SA 4.0 28586,2,,28578,7/8/2021 11:45,,3,,"

The input layer is just an abstraction for defining the number and/or type/shape of inputs that the neural network accepts (for example, in Keras, you can use the class InputLayer), so it doesn't usually compute any function (although it's possible that your implementation of the input layer performs e.g. some kind of preprocessing), like the other layers, including the output layer, do, but it just represents the inputs, which are passed to the next layer during the forward pass.

Whether it's counted or not as part of the count of the number of layers of a neural network, it's just a matter of convention. If it's not counted, it's probably because of the just mentioned reasons.

",2444,,2444,,7/8/2021 11:57,7/8/2021 11:57,,,,0,,,,CC BY-SA 4.0 28587,1,28591,,7/8/2021 11:45,,1,91,"

I'd like to ask why, no matter which neural network function approximator I use in my parametrized Q-learning implementation for a contextual bandits environment, I get bad results. I don't know if it's a problem with my formulation of the problem and how I'm trying to solve it, or with the neural architecture. I tried different fully-connected neural networks with different numbers of layers and neurons (sticking to low numbers since my environment is not complex), but I always get bad results, and they seem random.

I also wonder if my implementation of the Q-learning algorithm for the contextual bandits problem is right. I made an environment that randomly generates three integers between 0 and 89 and, given an action (an integer between 0 and 4), returns a reward following a certain logic (for example, if all three integers are between 0 and 29 and the action is 0, then the reward is 0; otherwise it's -1).

My environment is:

# imports used throughout the snippets below
import numpy as np
import tensorflow as tf
from tensorflow import keras
from collections import deque
from tqdm import tqdm


class Environment():

  def __init__(self):
      
      self._observation = np.zeros((3,))
  
  def interact(self, action):
      self._observation = np.zeros((3,))
      c1, c2, c3 = np.random.randint(0, 90, 3)
      self._observation[0]=c1
      self._observation[1]=c2
      self._observation[2]=c3
      reward = -1.0
      condition = False
      if (c1<30) and (c2<30) and (c3<30) and action==0:
          condition = True
      elif (30<=c1<60) and (30<=c2<60) and (30<=c3<60) and action==1:
          condition = True
      elif (60<=c1<90) and (60<=c2<90) and (60<=c3<90) and action==2:
          condition = True
      else:
          if action==4:
              condition = True
      if condition:
        reward = 0.0
            
      return {"Observation": self._observation,
                  "Reward": reward}

The interact method doesn't return a state or a time step, unlike the step method of TF-Agents environments. I just thought it's not necessary for the current problem; I don't rely on time steps since each state doesn't influence the next state. I thought that the observation is what should be returned, the state being more general data that could contain information the agent can't observe. I don't return the action either, because we can get it outside the environment.

My function approximators for the Q-values are neural networks, always with a fully-connected architecture. For instance:

model = keras.models.Sequential([
        keras.layers.Dense(16, activation="relu", input_shape=[n_inputs]),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(n_outputs)])

I took the next blocks of code from Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition and adapted them to my situation:

env = Environment()

n_inputs = 3 #Observations are made of three integers
n_outputs = 4 #Four actions

def epsilon_greedy_policy(observation, epsilon=0):
  if np.random.rand() < epsilon:
    return np.random.randint(4)
  else:
    Q_values = model.predict(observation[np.newaxis])
    return np.argmax(Q_values[0])

replay_buffer = deque(maxlen=2000)

def sample_experiences(batch_size):
  indices = np.random.randint(len(replay_buffer), size=batch_size)
  batch = [replay_buffer[index] for index in indices]
  observations, rewards, actions = [np.array([experience[field_index] for experience in batch]) for field_index in range(3)]
  return observations, rewards, actions

def play_one_step(env, observation, epsilon):
  action = epsilon_greedy_policy(observation, epsilon)
  observation, reward = env.interact(action).values()
  replay_buffer.append((observation, reward, action))
  return observation, reward

batch_size = 16
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
loss_fn = keras.losses.mean_squared_error

def training_step(batch_size):
  experiences = sample_experiences(batch_size)
  observations, rewards, actions = experiences
  target_Q_values = rewards
  mask = tf.one_hot(actions, n_outputs)
  with tf.GradientTape() as tape:
    all_Q_values = model(observations)
    Q_values = tf.reduce_sum(all_Q_values * mask, axis=1, keepdims=True)
    loss = tf.reduce_mean(loss_fn(target_Q_values, Q_values))
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))

epsilon = 0.01
obs = np.random.randint(0,90,3)
for episode in tqdm(range(1000)):
  if episode<250:
    obs, reward = play_one_step(env, obs, epsilon)
  else:
    obs, reward = play_one_step(env, obs, epsilon)
    training_step(batch_size)

I'm not sure how to evaluate the performance of the agent, but I tried this as a first approach, just to see if the predicted Q-values would enable a greedy policy to choose the best action:

check0 = np.random.randint(0,30,3)

for i in range(30):
  arr = np.random.randint(0,30,3)
  check0 = np.vstack((check0, arr))

predictions = model.predict(check0)

c = 0
for i in range(predictions.shape[0]):
  if np.argmax(predictions[i])==0:
    c+=1

(c/predictions.shape[0])*100

Every time I run the code above, it gives me a totally different value. Sometimes it's 0%, sometimes it's 45%, sometimes it's 19%...

The issue is that, no matter the model architecture, in the end I get random results. I wonder if there is something wrong in the overall approach to solving the problem. I want to solve a contextual bandit where the agent observes a continuous context, takes actions, and tries to link the rewards obtained with the actions and the context in order to "understand" the logic behind it.

I hope you can help me figure out why I get these random results.

Thank you.

",44965,,,,,7/9/2021 9:04,Why do I get bad results no matter my neural network function approximator for parametrized Q-learning implementation for Contextual Bandits?,,1,0,,,,CC BY-SA 4.0 28588,2,,28574,7/8/2021 13:38,,4,,"

A smooth function is usually defined to be a function that is $n$-times continuously differentiable, which means that $f$, $f'$, $\dots$, $f^{(n - 1)}$ are all differentiable and $f^{(n)}$ is continuous. Such functions are also called $C^n$ functions.

It can be a bit of a vague term; some people might even stretch the definition and say any continuous function is smooth (though I'd be a little surprised if I saw that in use, personally). Other people write smooth to mean infinitely differentiable: for example $f(x) = e^x$ can be differentiated as many times as you like.

I guess what the author is trying to point out is that the ReLU rectifier function isn't differentiable at zero. Even if you use the "trick"1 of treating ReLU as differentiable everywhere, you would still get a derivative that is discontinuous:

$$\mathrm{ReLU}'(x) = \begin{cases} 1 & x \ge 0 \\ 0 & \text{otherwise.} \end{cases}$$

So, it's fair to say that ReLU isn't smooth in the same sense of the $\tanh$ function, which has a continuous derivative (and, in fact, you could carry on and consider the higher derivatives).


1 If this doesn't sound familiar, see p. 188 of Deep Learning by Bengio et al. We can get around the fact that ReLU functions aren't differentiable at zero by just pretending it has a well-defined derivative of zero or one. A little dishonest, perhaps, but it works very well.

",44413,,,,,7/8/2021 13:38,,,,0,,,,CC BY-SA 4.0 28590,2,,28557,7/8/2021 14:19,,1,,"

Atomwise, an AI startup, uses a 3D convolutional neural network to predict whether a molecule will bind to a protein. There have been several human attempts to find a molecule that binds to the COVID spike protein.

See this article on a young inventor's approach to treating COVID:

https://www.cnn.com/2020/10/18/us/anika-chebrolu-covid-treatment-award-scn-trnd/index.html

Anika's winning invention uses in-silico methodology to discover a lead molecule that can selectively bind to the spike protein of the SARS-CoV-2 virus.

I wonder if the Atomwise simulator would concur that this lead molecule would bind to the spike protein.

",44679,,2444,,12/6/2021 8:24,12/6/2021 8:24,,,,0,,,,CC BY-SA 4.0 28591,2,,28587,7/8/2021 16:43,,3,,"

Scale your neural network inputs.

The raw observations are in range $[0,89]$, and neural networks will cope badly with that used as inputs.

The ideal case for a NN is for each input feature to have a Gaussian distribution with mean 0 and standard deviation 1. You don't need that to be perfect, though. A simple scaling (divide each element by $30$ and subtract $1.5$) will be fine here.

You can keep the environment as-is and scale after the observations are received. It's up to you whether you put ready-scaled observations in the experience replay table or not. In your case it may be very slightly more efficient to do so in terms of CPU effort, but probably not something you would notice.

There are other ways you might deal with these kinds of numbers in a neural network's input, but pre-scaling to a standard range is usually the simplest and by far the most common solution.
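As a minimal sketch (reusing the names from your code):

# e.g. inside play_one_step, scale the observation before storing/using it
obs, reward = env.interact(action).values()
scaled_obs = obs / 30.0 - 1.5          # maps [0, 89] roughly into [-1.5, 1.5]
replay_buffer.append((scaled_obs, reward, action))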

",1847,,1847,,7/9/2021 9:04,7/9/2021 9:04,,,,5,,,,CC BY-SA 4.0 28592,1,,,7/8/2021 19:19,,1,209,"

So, I'm fairly new to reinforcement learning and I need some help/explanations as to what the action_mask and avail_actions fields, alongside the action_embed_size, actually mean in RLlib (the documentation for this library is not very beginner-friendly/clear).

For example, this is one of the resources (Action Masking With RLlib) I tried to use to help understand the above concepts. After reading the article, I completely understand what the action_mask does, but I'm still a bit confused about what exactly the action_embed_size is and what the avail_actions field actually represents. Are the indices of avail_actions supposed to represent the actions (0 if invalid, 1 if valid)? Or are the elements supposed to represent the actions themselves, with a value of 1, 4, 5, etc. corresponding to the actual value of the action?

Also, when/how would there be a difference between the action_space and the action_embed_size?

This is from the article that I used to sort of familiarize myself with the whole concept of Action Masking (this network is designed to solve the Knapsack Problem):

class KP0ActionMaskModel(TFModelV2):
    
    def __init__(self, obs_space, action_space, num_outputs,
        model_config, name, true_obs_shape=(11,),
        action_embed_size=5, *args, **kwargs):
        
        super(KP0ActionMaskModel, self).__init__(obs_space,
            action_space, num_outputs, model_config, name, 
            *args, **kwargs)
        
        self.action_embed_model = FullyConnectedNetwork(
            spaces.Box(0, 1, shape=true_obs_shape), 
                action_space, action_embed_size,
            model_config, name + "_action_embedding")
        self.register_variables(self.action_embed_model.variables())

    def forward(self, input_dict, state, seq_lens):
        avail_actions = input_dict["obs"]["avail_actions"]
        action_mask = input_dict["obs"]["action_mask"]
        action_embedding, _ = self.action_embed_model({
            "obs": input_dict["obs"]["state"]})
        intent_vector = tf.expand_dims(action_embedding, 1)
        action_logits = tf.reduce_sum(avail_actions * intent_vector,
            axis=1)
        inf_mask = tf.maximum(tf.log(action_mask), tf.float32.min)
        return action_logits + inf_mask, state

    def value_function(self):
        return self.action_embed_model.value_function()

From my understanding, the action_embedding is the output of the neural network and is then dotted with the action_mask to mask out illegal/invalid actions and finally passed to some kind of softmax function to get the final neural network output?

Please, correct me if I'm wrong.

",46275,,2444,,7/15/2021 13:13,7/15/2021 13:13,RLLib - What exactly do the avail_action and action_embed_size represent? How do they work with the action_mask to phase out invalid actions?,,0,0,,,,CC BY-SA 4.0 28593,1,,,7/8/2021 20:38,,1,51,"

Suppose one has a time series (univariate or multivariate) and the goal is to predict values of these series several steps ahead. I see two possible strategies:

  1. Create a model (recurrent, convolutional, transformer, whatever) that predicts the value of the signal at the next moment in time, based on the values from the previous timestamps in (t_start, t_end). If we aim to predict not one, but several steps ahead, we can pass (signal[t_start + 1: t_end], signal[t_end + 1]) to predict signal[t_end + 2], and so on. In the training stage, we can pass the predicted value of signal[t_end + 1] or the ground truth with some probability; this can be seen as a kind of teacher forcing. In the inference stage, one passes the predicted signal each time. The optimization algorithm aims to minimize a (MSE, MAE) loss between the ground truth and the prediction. In other words $$ \begin{aligned} x_{t+1} &= f(x_t, \ldots, x_{t-N+1}) \\ x_{t+2} &= f(x_{t+1}, \ldots, x_{t-N+2}) \\ x_{t+k} &= f(x_{t+k-1}, \ldots, x_{t-N+k}) \\ \end{aligned} $$

  2. Create a model that predicts several values ahead simultaneously. Standard layers from DL frameworks (PyTorch or TensorFlow) for sequence-processing problems have two options: output a single hidden state at the end, or the whole sequence of hidden states. Therefore, it seems like they do not have the functionality, say, to predict the values of the time series 16 steps ahead from the values of the last 256 timestamps. $$ [y_{t+k}, \ldots, y_{t+1}] = f(x_t, \ldots x_{t - N + 1}) $$ I see two potential solutions:

    • Output a hidden state 16 times larger than the expected output and reshape it. However, it seems that this approach breaks the locality and causal structure and would not achieve good performance.
    • Choose the option that returns a sequence of the same length as the input (here 256) and take the last 16 tokens of the output. This approach is inapplicable if the length of the prediction exceeds the length of the previous history, but I think that such long predictions would produce poor quality in any case.

How are stock, weather, and sales prediction problems usually solved in practice?

",38846,,38846,,12/7/2021 9:08,12/7/2021 9:08,What is a better approach to perform predictions of time-series several values ahead?,,1,0,,,,CC BY-SA 4.0 28594,1,,,7/8/2021 21:33,,1,416,"

I am currently doing a master's in applied mathematics, and I recently got interested in machine learning and artificial intelligence, and I am thinking of going for a Ph.D. in this area. I have a reasonable maths and stats background, but I haven't done any course in ML/AI. Next semester, I am thinking of doing courses in ML (uses the book by Bishop), AI (uses the book by Norvig) and reinforcement learning at my university. Another advanced course in C++ is being offered, which I am also very interested to take, but the problem is it will be very difficult to manage all of these courses together. I have some knowledge of C++ (built some parts of a reasonably big project in the past but got a bit rusty nowadays) and very basic knowledge of Python, though I find Python much easier to learn and use than C++.

So, my question is: how important is C++ if I go for a Ph.D. in ML/AI/CV/NLP, etc.? Should I bother taking the C++ course or be more focused on Python and do the other three courses i.e., ML, AI, and reinforcement learning?

",48475,,2444,,7/9/2021 20:26,8/6/2021 10:39,How much C++ is needed for research in machine learning and artificial intelligence?,,2,6,,,,CC BY-SA 4.0 28596,2,,18157,7/9/2021 4:38,,2,,"

Neural networks are not invariant to translations, but equivariant to them.

Invariance vs Equivariance

Suppose we have input $x$ and the output $y=f(x)$ of some map between spaces $X$ and $Y$. We apply a transformation $T$ in the input domain. For a general map, the output will change in some complicated and unpredictable way. However, for a certain class of maps, the change of the output becomes very tractable.

Invariance means that the output doesn't change after application of the map $T$. Namely: $$ f(T(x)) = f(x) $$

For CNN example of the map, invariant to translations, is the GlobalPooling operation.

Equivariance means that a symmetry transformation $T$ on the input domain leads to a symmetry transformation $T^{'}$ on the output. Here $T^{'}$ can be the same map $T$, the identity map (which reduces to invariance), or some other kind of transformation.

This picture is an illustration of translational equivariance.

Equivariance of operations in CNN

  • Convolutions with stride=1: $$ f(T(x)) = T f(x) $$ The output feature map is shifted in the same direction and by the same number of steps (see the sketch after this list).
  • Downsampling operations, i.e. convolutions with stride > 1 and (non-global) pooling: $$ f(T_{1/s}(x)) = T_{1/s} f(x) $$ These are equivariant only to the subgroup of translations by an integer number of strides.
  • GlobalPooling: $$ f(T(x)) = f(x) $$ This is invariant to arbitrary shifts, a property that is useful in classification tasks.
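A minimal NumPy sketch of the first property (translation equivariance of a stride-1 convolution), away from the boundaries:

import numpy as np

x = np.zeros(20)
x[5] = 1.0                         # an impulse well away from the borders
k = np.array([1.0, 2.0, 3.0])      # a fixed 1-D kernel

def shift(v, s):
    # shift right by s positions, filling with zeros
    out = np.zeros_like(v)
    out[s:] = v[:len(v) - s]
    return out

conv_then_shift = shift(np.convolve(x, k, mode="full"), 3)   # T(f(x))
shift_then_conv = np.convolve(shift(x, 3), k, mode="full")   # f(T(x))
print(np.allclose(conv_then_shift, shift_then_conv))         # True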

Combination of layers

By stacking multiple equivariant layers, you obtain an architecture that is equivariant as a whole.

For classification, it makes sense to put GlobalPooling at the end, so that the NN outputs the same probabilities for a shifted image.

For a segmentation or detection problem, the architecture should be equivariant with the same map $T$, so that bounding boxes or segmentation masks are translated by the same amount as the transformation applied to the input.

Non-global downsampling operations reduce the equivariance to the subgroup of shifts by integer multiples of the stride.

",38846,,,,,7/9/2021 4:38,,,,0,,,,CC BY-SA 4.0 28598,1,,,7/9/2021 6:12,,2,1810,"

Decision trees learn by measuring the quality of a split through some function, apply this to all features and you get the best feature to split on.

However, with a continuous feature it becomes problematic because there are an infinite number of ways you can split the feature. How is the optimal split for a continuous feature chosen?

",26726,,40434,,7/11/2021 3:03,7/31/2021 7:15,How does a decision tree split a continuous feature?,,1,1,,,,CC BY-SA 4.0 28599,1,,,7/9/2021 9:13,,3,929,"

Transformer architectures, based on the self-attention mechanism, have achieved outstanding performance in a variety of applications.

The main advantage of this approach is that a given token can interact with any token in the input sequence and extract global information from the first layer onwards, whereas a CNN has to stack multiple convolutional or pooling layers in order to achieve a receptive field that covers the whole input sequence.

By receptive field I mean the number of timestamps from the input signal on which the output depends. For example, for a sequence of two Conv1D layers with kernel_size=3, the receptive field is 5. In a transformer, the output of the first block already depends on the whole sequence.

However, this comes at large computational and memory cost in the vanilla formulation: $$ O(L^2) $$ where $L$ is the length of the sequence.

Various mechanisms have been proposed to reduce this amount of computation:

  • Random attention
  • Window (Local attention)
  • Global attention

All these forms of attention are illustrated below:

One can also combine several of these approaches, as in the Big Bird paper.

My question is about local attention, i.e. attending only to the tokens in a fixed neighborhood of size $K$. By doing so, one reduces the number of operations to: $$ O(L K) $$ However, it is now local, like an ordinary convolution, and a global receptive field will be achieved only by stacking many layers.

Are there any advantages of local self-attention over CNNs, or is it beneficial only in combination with other forms of attention?

",38846,,2444,,1/1/2022 10:11,1/1/2022 10:11,Are there any advantages of the local attention against convolutions?,,1,1,,,,CC BY-SA 4.0 28600,1,,,7/9/2021 12:04,,1,86,"

I'm currently going through the OpenAI's spinning up introduction course to reinforcement learning. On one of the final sections, they derive an expression for the gradient of the undiscounted return with respect to the policy weights:

$$\nabla_{\theta} J\left(\pi_{\theta}\right)=\underset{\tau \sim \pi_{\theta}}{\mathrm{E}}\left[\sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}\left(a_{t} \mid s_{t}\right) R(\tau)\right]$$

Then they give the following explanation:

Taking a step with this gradient pushes up the log-probabilities of each action in proportion to $R(\tau$).

My question is: How does this expression mathematically reflect the fact that this gradient will push up the log probabilities of the actions?

",48484,,2444,,7/9/2021 16:21,7/9/2021 16:21,How to interpret the policy gradient expression in reinforcement learning?,,1,2,,,,CC BY-SA 4.0 28601,1,,,7/9/2021 14:33,,0,322,"

In the following network, the convolution operations of the convolutional blocks are performed by three 1-D kernels of sizes 8, 5, and 3, respectively, with stride equal to 1. The final network is constructed by stacking three convolutional blocks with 128, 256, and 128 filters, respectively. Pooling operations are excluded from the network. I want to find the computational complexity of this network; I was wondering if you could give me some hints on how to compute it. I appreciate your time! Thanks!

",48485,,,,,7/9/2021 14:33,Computational complexity of a CNN network,,0,3,,,,CC BY-SA 4.0 28602,2,,28600,7/9/2021 15:10,,1,,"

The value of the objective depends on the policy (the probabilities of taking an action). Intuitively speaking, better actions lead to better returns, and by "pushing up" the probabilities of those actions (log or not, it's the same thing, since log is a monotonically increasing function) you're making sure you're getting better returns and increasing the value of your objective.

",20339,,,,,7/9/2021 15:10,,,,0,,,,CC BY-SA 4.0 28603,1,,,7/9/2021 15:48,,3,148,"

For example, I am implementing an AI for a turn-based game and have enough computational resources to build the full game tree. My problem is that the game can be infinite if both players keep repeating moves, so my minimax implementation gets stuck because the game tree is infinite.

For example, my game is in state S1, player 1 does action A1, player 2 does action A2, and we are again in state S1. I can't evaluate the S1 node because I need to evaluate all its subnodes.

I have no idea how to handle this.

",48486,,,,,10/17/2021 16:26,How to handle cycles in minimax algorithm,,0,1,,,,CC BY-SA 4.0 28605,2,,28593,7/9/2021 17:33,,1,,"

I have found a nice tutorial in the TensorFlow documentation: https://www.tensorflow.org/tutorials/structured_data/time_series

They implement and test both strategies.

  1. In the first case, for a multi-dimensional time series, they output a vector of dimension out_steps * series_dim and then reshape it to (out_steps, series_dim).

  2. They create a model (an autoregressive LSTM) that predicts one step ahead and then apply it several times, where the first step from the previous input is discarded and the new prediction becomes the last step of the new input (see the sketch below).

The second approach seems to require fewer parameters, but the obtained quality for this specific case seems to be comparable for both.
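A minimal sketch of the second (autoregressive) strategy, with a dummy one-step predictor standing in for the trained model:

import numpy as np

def rollout(one_step_model, history, n_steps):
    # feed each prediction back in, dropping the oldest value each time
    window = list(history)
    preds = []
    for _ in range(n_steps):
        nxt = one_step_model(np.array(window))
        preds.append(nxt)
        window = window[1:] + [nxt]
    return preds

# dummy "model" that just repeats the last observed value (illustration only)
naive_model = lambda w: float(w[-1])
print(rollout(naive_model, [1.0, 2.0, 3.0], n_steps=5))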

",38846,,,,,7/9/2021 17:33,,,,0,,,,CC BY-SA 4.0 28608,2,,28594,7/9/2021 20:41,,3,,"

Of course, whether or not you will need to know and use C++ depends on the topics you will research during your Ph.D. or job. If you'll need just to use and/or combine some existing ML models (yes, in a Ph.D., you're expected to come up with new ideas/tools), then you won't probably need to know C++, as the most commonly used libraries for machine learning nowadays, such as TensorFlow, Keras, or PyTorch, have their main APIs written in Python (but there are also APIs written in other languages, but they are not typically as mature as the Python ones), although the core of these libraries is or can be written in C++, but you may never need to have to look at the core of these libraries.

I can say that I also know C++ (of course, not everything or every detail and library, and, of course, my knowledge of it also becomes rusty if I don't use it for a long time), but I rarely need to use my knowledge of C++ to do research in ML or AI (which is what I am currently doing), but, again, it all depends on the topic of your Ph.D. For example, if you wanted to contribute to the progress of OpenCog or if your Ph.D. involved an efficient implementation of some algorithm or data structure, then it may be a good idea to know C++, C, or a programming language like Rust.

",2444,,,,,7/9/2021 20:41,,,,0,,,,CC BY-SA 4.0 28612,1,,,7/10/2021 2:23,,1,37,"

An HMM contains two types of states: observable and hidden. Let $\{ h_1,h_2,h_3,\cdots,h_n\}$ be the hidden states and $\{o_1,o_2,o_3,\cdots, o_m\}$ be the observable states.

Suppose the $n^2$ transition probabilities $p(h_j \mid h_i)$ and the $mn$ emission probabilities $p(o_i \mid h_j)$ are given, along with the initial probability distribution vector $\pi =[\pi_1, \pi_2, \pi_3, \cdots, \pi_n]$.

Then what is meant by decoding in an HMM?

",18758,,2444,,7/10/2021 15:01,7/10/2021 15:01,What is meant by decoding in a Hidden Markov Model?,,0,0,,,,CC BY-SA 4.0 28614,1,,,7/10/2021 10:57,,0,27,"

Here is an implementation of a Perceptron:

import numpy as np


class Perceptron:
    def __init__(self, eta=.1, n_iter=10, model_w=None, model_b=.0):
        self.eta = eta              # learning rate
        self.n_iter = n_iter        # number of passes over the training data in fit()
        self.model_w = np.zeros(2) if model_w is None else np.asarray(model_w, dtype=float)
        self.model_b = model_b

    def predict(self, x):
        if np.dot(self.model_w, x) + self.model_b >= 0:
            return 1
        else:
            return -1

    def update_weights(self, x_i, y_i, model_w, model_b):
        # perceptron update rule for a misclassified sample (x_i, y_i)
        w = model_w + self.eta * y_i * x_i
        b = model_b + self.eta * y_i
        return w, b

    def fit(self, x, y):
        if len(x) != len(y):
            print('error')
            return False
        for i in range(self.n_iter):
            for idx in range(len(x)):
                if y[idx] != self.predict(x[idx]):
                    self.model_w, self.model_b = self.update_weights(
                        x[idx], y[idx], self.model_w, self.model_b)

Does this code

Perceptron(eta=.1, n_iter=10)

mean that the model trains for 10 epochs?

",45689,,,,,7/10/2021 10:57,Does this code mean the model trains 10 epochs?,,0,3,,,,CC BY-SA 4.0 28615,1,,,7/10/2021 13:37,,1,47,"

I went through a Stats StackExchange post about the difference between logistic regression and the perceptron, but it is too long to extract the key point from.

I'd like to consider the question in terms of the formulas for them.

The logistic regression is defined as

$$\hat{y} = \sigma(\mathbf{w} \cdot \mathbf{x} + b)$$

where

$$ \sigma(z) = \dfrac {1}{1+e^{-z}} $$

The perceptron is defined as

$$\hat{y} = sign(\mathbf{w} \cdot \mathbf{x} + b)$$

where

$$ sign(z) = \begin{cases} 1, & z \ge 0 \\ -1, & z < 0 \end{cases} $$
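In code, under these two definitions, the prediction rules differ only in the final non-linearity (a minimal sketch):

import numpy as np

def logistic_predict(w, b, x):
    z = np.dot(w, x) + b
    return 1.0 / (1.0 + np.exp(-z))   # a probability in (0, 1)

def perceptron_predict(w, b, x):
    z = np.dot(w, x) + b
    return 1 if z >= 0 else -1        # a hard label in {-1, +1}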

So, the main difference between the two models is the activation function; is my understanding correct?

",45689,,2444,,7/15/2021 13:07,7/15/2021 13:07,Is the main difference between the logistic regression and the perceptron the activation function they use?,,0,0,,,,CC BY-SA 4.0 28617,2,,28599,7/10/2021 21:45,,1,,"

It is true that when using local attention with a window of size 5, the "receptive field" is the same as a CNN with kernel size 5 (or two CNN layers with kernel size 3). However, there is a key difference in how the learned weights are applied to the inputs.

In a CNN, the values of the many convolutional kernels are learned, but once learned, the kernels are static. In other words, at every position in the input (whether it be a 1D signal or 2D image), the dot product between the inputs within the window and the same CNN kernels is taken, and then a non-linear function applied.

With attention, the Query/Key/Value matrices additionally allow context to be taken into account. Instead of taking the dot-product of the input region with a set of fixed kernels, the additional matrices are effectively used to dynamically compute a new set of kernels for each position. "Attention" basically figures out for each convolution, which inputs are important (which inputs the network should "pay attention" to) by computing higher-valued weights using Q, K, and V.

I highly recommend reading a breakdown of the original "Attention is All You Need" paper such as this blog post: https://jalammar.github.io/illustrated-transformer/

",31975,,,,,7/10/2021 21:45,,,,1,,,,CC BY-SA 4.0 28618,1,,,7/10/2021 21:48,,1,246,"

I have recently started learning time series forecasting. I have a dataset of the weekly payment history of 10k clients over 1 year, and I want to predict the future 5 payments for a test set of 1k clients.

From what I have tried, I've found that using LSTMs instead of a simple MLP doesn't improve the prediction as much as I anticipated. My understanding is that LSTMs capture the relations between time steps, whereas simple MLPs treat each time step as a separate feature (they don't take succession into consideration).

So, my question is: why doesn't the LSTM model improve the forecasting significantly? What are the best models for such a task, given that the time series are short (maximum sequence length = 52)?

",48146,,2444,,7/15/2021 13:05,7/17/2021 0:23,Why doesn't the LSTM model improve the time-series forecasting significantly with respect to the MLP model?,,1,2,,,,CC BY-SA 4.0 28619,1,,,7/11/2021 8:31,,0,238,"

I am curious about how depth maps work. While searching, I came across this website, which contains some images and their depth maps. I took this depth map and tried to study it using the Python Pillow library.

from PIL import Image
import numpy as np

image = Image.open('elephant_depth_s.png')
img = np.asarray(image)

print(img.shape)

The depth map shape is (400, 400, 3), with 3 channels. Contrary to my assumption, this depth map has three channels instead of one. Even though most of the values are zeros, some are not: checking np.where(img > 0) shows that all the channels have some values greater than zero. My question is:

  • In color images, RGB channel values are used for creating corresponding color pixels. Example RGB (255,255,0) creates yellow.

In this depth map, how do these three channels correspond to depth?

Can you please give us some more information on depth maps and their real-world applications?

",39,,32410,,9/21/2021 2:00,9/21/2021 2:00,Dissection of a depth map,,1,0,,,,CC BY-SA 4.0 28620,1,,,7/11/2021 14:37,,2,857,"

What is the right way to input continuous, temporal (time-series) data into the Transformer? Assume we're using the basic TransformerBlock here.

Since the data is continuous with no tokens, the token embedding can be skipped directly. How about the positional encoding? I tried this example, removing the token embedding while keeping the positional encoding, but ended up with shape-related errors. Skipping both the token and positional encoding resulted in a network that runs and trains, but the results were relatively poor compared to an LSTM benchmark on the same data.

I am unsure if the positional encoding is still needed.

Overall, my question is, what is the proper way to process continuous sequence data, such as time-series, using the Transformer architecture?

",48523,,2444,,11/30/2021 15:06,8/27/2022 17:02,"What is the proper way to process continuous sequence data, such as time-series, using the Transformer?",,1,0,,,,CC BY-SA 4.0 28621,2,,28619,7/11/2021 16:37,,1,,"

Depth maps are created using principles of photometry (method of measuring light).

The depth maps (rather, images) you took from the website are "images", not exact depth "maps". By default, when you pull a PNG image from a webpage, it will be saved in "RGB"; that is the reason you got an array with 3 layers. In practice, a depth map will always be a single layer that simply shows relative brightness at each point.
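
As a minimal sketch (PIL/NumPy, assuming the same file as in the question), you can collapse the RGB-saved image back to the single-channel map it represents:

from PIL import Image
import numpy as np

# convert("L") merges the three visually identical RGB channels into one grayscale layer
depth = np.asarray(Image.open('elephant_depth_s.png').convert('L'))
print(depth.shape)  # (400, 400): one value (relative brightness/depth) per pixel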

The webpage you referred to is talking about "ray tracing" using software called PoVRay (Persistence of Vision Ray Tracing). What it does is creates a 3D surface with a source of light at a point in the frame and simply measures the intensity of light falling on the surface. Remember this is 3D. When you capture this surface from a point using a camera you get a 2D image that represents a "depth map" as seen from a point of reference with respect to the source of light.

A depth map has plenty of applications in computer vision, photography, and ray tracing. Illuminating a frame, measuring depth are few such applications.

",25400,,,,,7/11/2021 16:37,,,,0,,,,CC BY-SA 4.0 28622,2,,2980,7/11/2021 17:07,,2,,"

An experimental paper exists on arXiv about the effect of whether to mask invalid actions or to give them negative rewards. There are some references in this paper that also discuss the effects and the mechanisms to handle invalid actions. However, those main references are still only pre-prints on arXiv (not published and presumably not peer-reviewed yet).

As for ways to handle that situation, other answers have given good practical methods to ignore the invalid actions. I just want to add one more trick. You can pre-compute a binary vector as the mask for the actions, and add the log of the mask to the logits before the softmax operation. The log of 0 is -inf and exp(-inf) is 0 in PyTorch (I don't know if the same applies in TensorFlow).

$P(a_i| A'_t, X_t) = \text{softmax}(u_i+\log{m^t_i}), \forall i \in \{1,\dots,|A|\}$ and $m^t_i \in \{0,1\}$

where $P(a_i| A'_t, X_t)$ is the probability of taking action $a_i$ given the action history $A'_t$ up to the $t$-th step and the current environment's features $X_t$, $u$ is the output of the last layer of the model, $A$ is the action space, and $m^t_i$ is the feasibility mask of action $a_i$ for the current step. Therefore, the probability of an invalid action is 0 after the softmax operation. That way, you can treat the mask as part of the state that is input to your model. This is actually handier for algorithms that employ an experience memory, because the mask can then be saved in the experience memory too.
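
A minimal sketch of this trick (PyTorch, with made-up logits and mask):

import torch

logits = torch.tensor([1.2, 0.3, -0.5, 2.0])   # u: raw outputs for |A| = 4 actions
mask = torch.tensor([1.0, 0.0, 1.0, 1.0])      # m_t: 1 = valid action, 0 = invalid action
probs = torch.softmax(logits + torch.log(mask), dim=-1)
# log(0) = -inf and exp(-inf) = 0, so the invalid action gets probability exactly 0
print(probs)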

",44920,,44920,,7/13/2021 9:47,7/13/2021 9:47,,,,0,,,,CC BY-SA 4.0 28623,2,,28487,7/11/2021 18:35,,1,,"

A recurrent neural network (RNN), specifically either an LSTM or GRU, will work well for variable-length sequences like you've described. Assuming the order of the sequence is meaningful (i.e., you can't just break up the sequence into individual inputs and associated target values), an RNN model will learn how the sequence of inputs maps to the sequence of outputs.
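
For illustration, a minimal sketch (PyTorch, with hypothetical sizes) of an LSTM producing one prediction per time step for a sequence of arbitrary length:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=3, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

seq = torch.randn(1, 17, 3)        # batch of 1 sequence, length 17 (any length works), 3 features
hidden_states, _ = lstm(seq)       # (1, 17, 32): one hidden state per time step
predictions = head(hidden_states)  # (1, 17, 1): one target value per time step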

",48528,,,,,7/11/2021 18:35,,,,1,,,,CC BY-SA 4.0 28625,1,,,7/11/2021 19:54,,0,93,"

I have a low-resolution thermal/IR image (for example 32x32 or 80x64) and a high-resolution webcam image. I would like to combine the two to "fake" a high-resolution thermal image (I can already map them together via homography). One could probably just apply a FLIR-like palette to the IR image, scale it up, and combine it with the brightness channel of the visible-spectrum image. But that would of course cause visible artifacts at the pixel edges of the IR picture.

I wonder if there is an AI based approach to colorize the webcam image with the IR data. When a warm IR pixel partially covers a person and partially the background, it would only color the "person" warm, and take the "background" color from the neighboring IR pixel. For this it would have to consider a small vicinity of either picture at a time.

Although I'm familiar with machine learning in the context of multivariate analysis and classification, I have no experience with modern deep learning or AI-based image processing. I would guess that something like style transfer would be a starting point for what I'm trying to achieve. One would need 1) a way to identify features (like foreground/background, person/wall) and 2) a way to combine these features with the IR truth to produce a colorized bitmap, I assume.

What would be the best approach to do this? I have a feeling this might already be a solved problem. In any case, I would be grateful for literature pointers.

",48529,,,,,7/11/2021 19:54,Upscaling a low-res IR image with a high res webcam image,,0,2,,,,CC BY-SA 4.0 28626,1,,,7/11/2021 23:55,,1,17,"

Random variables can be broadly classified into three types:

  1. random variables whose range is finite,
  2. random variable whose range is countably infinite and
  3. random variables whose range is uncountable.

A random variable is called discrete if its range (the set of values that it can take) is finite or at most countably infinite.

Random variables that can take an uncountably infinite number of values are not discrete.

Almost all the probabilistic models used in artificial intelligence contain random variables.

In theory, one can deal with all three types of random variables. For example, in reinforcement learning or probabilistic graphical models, we can take any type of random variable as the state or action space (in RL) or as a node (in PGMs) and analyze it.

But, in several textbooks, most of the analysis is restricted to random variables of the first type. The reason they mention is "to make the analysis easy": it becomes complex if we deal with type 2 or type 3 random variables. So, textbooks and materials generally restrict the analysis to type 1 only.

My doubt is:

Do researchers use random variables of type 2 or type 3 during the implementation of (any) AI tasks? Is it impossible to use them due to their (infinite) cardinality? If possible, please provide an example mechanism for implementing such random variables.

",18758,,18758,,1/14/2022 23:44,1/14/2022 23:44,Is it possible to use (infinite cardinal) random variables during implementation?,,0,0,,,,CC BY-SA 4.0 28627,1,,,7/12/2021 5:56,,1,40,"

Many recent research papers contain the phrase "Zero-Shot Visual Recognition".

What exactly is meant by zero-shot visual recognition? Does the task need only images, or also other data, like text?

",18758,,2444,,7/14/2021 0:58,7/14/2021 0:58,"What is meant by ""Zero-Shot Visual Recognition""?",,0,0,,,,CC BY-SA 4.0 28629,1,,,7/12/2021 9:09,,2,27,"

Suppose, I have a problem, where there is rather a small number of training samples, and transfer learning from ImageNet or some huge NLP dataset is not relevant for this task.

Due to the small amount of data, say several hundred samples, the use of a large network will very probably lead to overfitting. Indeed, various regularization techniques can partly solve this issue, but, I suppose, not always. A small network will not have much expressive power; however, with the use of Bayesian approaches, like HMC integration, one can effectively obtain an ensemble of models. Provided the models in the ensemble are weakly correlated, one can boost the classification accuracy significantly.

Here I provide the picture from MacKay's book "Information Theory, Inference and Learning Algorithms". The model under consideration is a single-layer neural network with a sigmoid activation function: $$ y(x, \mathbf{w}) = \frac{1}{1 + e^{-(w_0 + w_1 x_1 + w_2 x_2)}} $$

In the left picture, there is the result of Hamiltonian Monte Carlo after a sufficient number of samples, and, on the right, there is the optimal fit.

Integration over the ensemble of models produces a nonlinear separating boundary for NN.

I wonder: can this approach be beneficial for small-scale, but not toy, problems with real-life applications?

",38846,,2444,,12/5/2021 16:51,12/5/2021 16:51,What are the practical problems where full bayesian treatment is affordable?,,0,0,,,,CC BY-SA 4.0 28632,2,,28487,7/12/2021 14:30,,0,,"

Look for padding.

There are even versions of it: pre-padding, post-padding. https://stackoverflow.com/questions/46298793/how-does-choosing-between-pre-and-post-zero-padding-of-sequences-impact-results

",48523,,,,,7/12/2021 14:30,,,,0,,,,CC BY-SA 4.0 28635,1,28636,,7/12/2021 21:06,,0,13,"

Context: I want to determine if someone's written review contains content that is relevant to a paragraph that they are reviewing.

To do so, I am trying to determine if one paragraph is relevant to another paragraph. I initially tried to use TF-IDF to calculate the relevancy, but I think TF-IDF works well for determining if one paragraph is relevant to a whole set of paragraphs. I only want to determine if two paragraphs are relevant to each other.

What would be a good approach for this problem?

",48551,,40434,,7/15/2021 1:31,7/15/2021 1:31,Language Processing: Determine if one paragraph is relevant to another paragraph,,1,0,,,,CC BY-SA 4.0 28636,2,,28635,7/12/2021 22:25,,0,,"

A very simple approach can be:

  • Calculate the tf-idf vector for paragraph 1 and paragraph 2.
  • Calculate the vector similarity (cosine similarity) of these 2 vectors.

This is a general approach and works for any representational vector.
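
A minimal sketch of these two steps (assuming scikit-learn is available):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

paragraph_1 = "The reviewer discusses the experimental setup in detail."
paragraph_2 = "The review covers the experiments and how they were set up."

vectors = TfidfVectorizer().fit_transform([paragraph_1, paragraph_2])  # shape (2, vocabulary size)
similarity = cosine_similarity(vectors[0], vectors[1])[0, 0]
print(similarity)  # closer to 1 means the paragraphs are more relevant to each other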

For a more complex one, check semantic similarity with BERT post from Keras blog.

",48523,,,,,7/12/2021 22:25,,,,0,,,,CC BY-SA 4.0 28637,1,28638,,7/13/2021 0:03,,1,43,"

I'm a student writing my first paper for submission to a conference. I have a question.

There is a dataset below; it is a spatio-temporal dataset.

Date         Hour   City       Sensor1  Sensor2  Sensor3 Sensor4 ...
21-06-10     0      Region1      0.12     0.52    0.33     0.44  ...
21-06-10     1      Region2      0.16     0.83    0.34     0.49  ...
21-06-10     2      Region1      0.21     0.44    0.57     0.5   ...
...

My task is anomaly detection for each region.

I want to use an LSTM. So, I split the spatio-temporal data into two time-series datasets, which can be represented as below.

City       Date       Hour     Sensor1  Sensor2  Sensor3 Sensor4 ...
Region1   21-06-10     0         0.12     0.52    0.33     0.44  ...
Region1   21-06-10     2         0.21     0.44    0.57     0.5   ...
...


City       Date       Hour     Sensor1  Sensor2  Sensor3 Sensor4 ...
Region2   21-06-10     1         0.16     0.83    0.34     0.49  ...
...

However, then there is no row with attribute 'Hour=1' in the Region1 dataset (you can see this in the table below):

City       Date       Hour     Sensor1  Sensor2  Sensor3 Sensor4 ...
Region1   21-06-10     0         0.12     0.52    0.33     0.44  ...
Region1   21-06-10     1         NaN      NaN     NaN      NaN   ...
Region1   21-06-10     2         0.21     0.44    0.57     0.5   ...
...

Can I insert estimated values into the row with attribute 'Hour=1' in the Region1 dataset? (For example, I want to insert the average of the first row and the third row.)

Can I claim to have utilized a real-world dataset even with this missing-value estimation?

",38808,,,,,7/13/2021 1:18,How can I address missing values for LSTM?,,1,0,,,,CC BY-SA 4.0 28638,2,,28637,7/13/2021 1:18,,2,,"

You can claim to use a real-world dataset; you would just need to specify that some values were interpolated.

Do you have to have the intermediate values, though? By the looks of it, each "region" was only measured every 2 hours, so I would just keep it that way and have the resolution be 2 hours. It doesn't have to be hourly, and probably shouldn't be, since that isn't the resolution of the data.

If it does need to be hourly, then it is fine to just linearly interpolate the data, for example as in the sketch below. Additionally, you can try to train the network to accept empty inputs (though it'd definitely be easier to just interpolate your dataset).
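
A minimal sketch of the interpolation (pandas, assuming the Region1 layout from the question):

import pandas as pd

region1 = pd.DataFrame({
    "Hour": [0, 2],
    "Sensor1": [0.12, 0.21],
    "Sensor2": [0.52, 0.44],
}).set_index("Hour")

region1 = region1.reindex(range(0, 3))          # inserts the missing Hour=1 row as NaN
region1 = region1.interpolate(method="linear")  # Hour=1 becomes the average of its neighbours
print(region1)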

",26726,,,,,7/13/2021 1:18,,,,4,,,,CC BY-SA 4.0 28639,1,,,7/13/2021 2:34,,0,85,"

Suppose, I have the following data-set:

... ...
... ...
AABBB  7.027  5.338  5.335  8.122  5.537  6.408
ABBBA  5.338  5.335  5.659  5.537  5.241  7.043
BBBAA  5.335  5.659  6.954  5.241  8.470  8.474
BBAAA  5.659  6.954  5.954  8.470  9.266  9.334
BAACA  6.954  5.954  6.117  9.266  9.243 12.200
AABAA  5.954  6.117  6.180  9.243  8.688 11.842
ACAAA  6.117  6.180  5.393  8.688  5.073  7.722
ABAAC  6.180  5.393  6.795  5.073  8.719  7.854
BAACC  5.393  6.795  5.796  8.719  9.196  9.705
... ...
... ...

Apparently, the feature values represent a string pattern comprising only the three letters A, B, and C.

I have to design a neural network that would be able to detect these patterns and output a binary representation of these strings, where each letter is encoded in 3-bit binary (one-hot encoding).

My first question is, What kind of problem is it and why?

My next question is, How should I approach this problem to solve it?

",20721,,20721,,7/15/2021 13:00,7/15/2021 13:00,How can I approach this problem of producing a 3-bit binary string given a sequence of letters?,,1,3,,,,CC BY-SA 4.0 28641,2,,27827,7/13/2021 3:03,,0,,"

Using benchmarked algorithms or research papers will be a good start. In addition to that, using open-sourced architectures like BERT or GPT-2 is a good starting point.

",48554,,,,,7/13/2021 3:03,,,,0,,,,CC BY-SA 4.0 28642,1,,,7/13/2021 4:33,,1,50,"

We can use a model-free Monte Carlo approach to solving an MDP $(S,A,R,P,\gamma)$ with transition dynamics $P$ unknown by estimating Q-values by rolling out trajectories starting from random states $s_0 \in S$ and improving the policy $\pi$ greedily. This is the Monte Carlo Exploring Starts algorithm in Sutton and Barto page 99 2nd edition.

Does anyone know if there is a sample complexity result for this algorithm?

",45562,,2444,,7/13/2021 10:59,7/13/2021 10:59,What is the sample complexity of Monte Carlo Exploring Starts in RL?,,0,0,,,,CC BY-SA 4.0 28643,2,,28639,7/13/2021 5:04,,2,,"

If you're trying to predict the string pattern given the numerical features, and assuming your string pattern has a fixed size, you can one-hot encode each letter and then concatenate the encodings (into an array that is no longer one-hot). So AABBC would look like:

[1,0,0,1,0,0,0,1,0,0,1,0,0,0,1] <- Use this for training
[A,B,C,A,B,C,A,B,C,A,B,C,A,B,C]
[A,_,_,A,_,_,_,B,_,_,B,_,_,_,C]
AABBC

where each triplet of entries encodes a single letter. Then you can train a network with cross-entropy. This is the problem formulation of multi-task learning, where you predict multiple things simultaneously. Needless to say, it is classification.
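
A minimal sketch of this encoding (NumPy, with a hypothetical encode helper):

import numpy as np

ALPHABET = "ABC"

def encode(pattern: str) -> np.ndarray:
    # Return a flat 0/1 vector of length 3 * len(pattern), one triplet per letter
    out = np.zeros((len(pattern), len(ALPHABET)), dtype=np.float32)
    for i, letter in enumerate(pattern):
        out[i, ALPHABET.index(letter)] = 1.0
    return out.reshape(-1)

print(encode("AABBC"))  # -> [1 0 0 1 0 0 0 1 0 0 1 0 0 0 1]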

",48523,,26726,,7/14/2021 0:30,7/14/2021 0:30,,,,5,,,,CC BY-SA 4.0 28644,1,28649,,7/13/2021 5:36,,0,259,"

I searched a lot, but I could find neither the paper that introduced Q-learning nor the paper that introduced deep Q-learning. If anyone knows anything about it, please do tell me.

",46358,,2444,,7/13/2021 11:00,7/13/2021 11:00,Where can I find the original conference paper that introduced Q-learning and Deep Q-Learning?,,1,2,,,,CC BY-SA 4.0 28645,1,,,7/13/2021 5:58,,0,42,"

I've trained a model for heart sound classification with transfer learning (MobileNet) on Physionet dataset, and it works fine.

However, when I train it on my own dataset, it seems that it can not learn anything: more specifically, the loss is not decreasing and the accuracy is not going up. I've checked my labels and they seem to be correct. What other things should I check?

",42948,,2444,,7/15/2021 12:50,7/15/2021 12:50,"Model not learning anything, what can be the problem?",,0,8,,,,CC BY-SA 4.0 28646,1,,,7/13/2021 6:40,,2,39,"

I have two sequence prediction tasks, finding $\vec{\pi} \in \Pi$ and $\vec{\psi} \in \Psi$. Each sequence has its own objective function, i.e. $f_1(\vec{\pi})$ and $f_2(\vec{\psi})$. The input for the two sequence prediction tasks are also of different domain.

Say that, by modifying and extending the model design, I can use one seq2seq or Pointer Network (or its variants) to produce the two sequences one at a time. In the training stage, however, the two objective functions are combined into $F(\vec{\pi}, \vec{\psi}) = \alpha f_1(\vec{\pi}) + \beta f_2(\vec{\psi})$, and the loss function used to train the model is based on the combined objective function $F(\vec{\pi}, \vec{\psi})$.

Is this considered multi-task learning?

",44920,,2444,,7/15/2021 15:38,7/15/2021 15:38,Is optimizing weighted sum multi objective tasks considered a multi-task learning?,,0,0,,,,CC BY-SA 4.0 28647,2,,28556,7/13/2021 6:45,,1,,"

There are a handful of tools available for manually comparing pronunciations, though all are limited in some way. Depending on your use case, you might be interested in:

  • Wikspeak: a tool that transcribes (single) words into IPA and generates a pronunciation. A web demo is available, though it’s a bit sensitive about browser versions.
  • espeak-ng: provides a CLI tool that does text-to-speech or text-to-IPA transcription
# use the --ipa flag to display the inferred IPA transcription
espeak-ng -v en-US --ipa "horse"
# => hˈɔːɹs
espeak-ng -v en-US --ipa "hoarse"
# => hˈoːɹs

If you want a more automated solution, you could look into Python libraries like eng-to-ipa to do IPA transcription (including disambiguation when a word can map to multiple IPA transcriptions). You could then try applying edit-distance measurements to the IPA transcriptions to estimate the similarity of the pronunciations.

",42108,,,,,7/13/2021 6:45,,,,9,,,,CC BY-SA 4.0 28649,2,,28644,7/13/2021 9:42,,1,,"

This is the original Q-Learning paper by Watkins, though you may need to pay for access to this.

This is the Nature paper that introduced the DQN.

",36821,,,,,7/13/2021 9:42,,,,0,,,,CC BY-SA 4.0 28650,2,,28548,7/13/2021 9:52,,0,,"

I found the solution: it was changing the reward function and using reward scaling. A small change in the architecture and learning rate fixed the problem.

",48002,,,,,7/13/2021 9:52,,,,0,,,,CC BY-SA 4.0 28653,1,28663,,7/13/2021 15:34,,0,524,"

I have downloaded a pre-trained EfficientDet D2 model (Tensorflow 2.0) and trained it on some data (about 20000 images with 20 classes). I set the number of steps to 25000 and batch size to 3 (computer resources are not the best).

However, if I try to make predictions, the pre-trained model makes better predictions than the model I have trained on the additional data. Is this expected behaviour?

For example, a person in an image may be detected with 78% confidence by the pre-trained model but only 54% by the model after my additional training.

",48562,,2444,,7/14/2021 11:29,7/14/2021 11:29,Is it possible that the fine-tuned pre-trained model performs worse than the original pre-trained model?,,1,1,,,,CC BY-SA 4.0 28659,1,,,7/13/2021 21:20,,0,54,"

I noticed something rather intriguing while testing the Deep Q-Network implementation from Aurélien Géron's book Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow, 2nd Edition; I copy-pasted the code exactly as it is, but added some lines to reproduce the graph in figure 18-10, which presents the sum of rewards gained during each episode.

So everything is the same as in the book, except the training part, where I added the lines related to the reward lists and the plotting:

all_rewards = []
for episode in range(600):
  obs = env.reset()
  episode_rewards = []
  for step in range(200):
    epsilon = max(1 - episode / 500, 0.01)
    obs, reward, done, info = play_one_step(env, obs, epsilon)
    episode_rewards.append(reward)
    if done:
      break
  all_rewards.append(episode_rewards)
  if episode > 50:
    training_step(batch_size)

sum_rewards = []
for i in range(len(all_rewards)):
  sum_rewards.append(sum(all_rewards[i]))

import matplotlib.pyplot as plt
episodes = range(1,601)
plt.plot(episodes, sum_rewards)

To my surprise, I didn't get the same graph as the one the author presents in his book, so I reran the code and got a totally different graph from what I had the first time. Please find below two graphs that I obtained. I'm plotting the total reward obtained during each episode against the episode index.

I'd like to ask whether there is something intrinsic to the algorithm that makes it so random (and, in that case, I'd like some references, if there are any, that explain it), or whether I'm just doing something wrong. Thank you.

",44965,,,,,7/13/2021 21:20,Why don't I get the same results of Q-Learning as in Aurélion Géron's Hands-on Machine Learning book?,,0,7,,,,CC BY-SA 4.0 28662,1,35312,,7/14/2021 10:47,,4,281,"

I've read on wiki that already in 2017 there were over 40 institutions researching AGI, and I wonder what type of algorithms are being studied and developed in this field.

For example, for comparison with narrow AI, where models/techniques such as ANNs, CNNs, SVMs, DT/RT, evolutionary algorithms, or reinforcement learning are used, how would AGI models differ? Do they also use these models, but in some specialised way, or are these algorithms completely new and different from those currently used in narrow AI?

",22659,,2444,,7/14/2021 11:22,4/24/2022 12:09,What algorithms are used in Artificial General Intelligence research?,,2,0,,,,CC BY-SA 4.0 28663,2,,28653,7/14/2021 11:18,,4,,"

Yes, this is quite the expected behavior. The main difference between the expected and current behavior lies in the amount of data you are using for training vs. the amount of data that the pre-trained model was trained with.

Take into account that pre-trained models have been trained on popular datasets; the most common ones are COCO, ImageNet, and Open Images. And the amount of data differs:

  • COCO: 330K images
  • ImageNet: 1.5M images
  • OpenImages: 9M images
  • Your dataset: 20K images

You could say: well, but I started from a pre-trained model, so the network should already know how to detect a person. Well, that is true, but you are training it again; you are not using transfer learning (freezing backbone layers, or adding extra heads/channels to detect other features or other classes). So what is happening is that your model, even though it started from good pre-trained weights, is fitting to your 20K-image dataset.

As I see it, you have two options: either increase your dataset size or use transfer learning, for example by freezing the backbone, as in the sketch below.
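
A minimal sketch of the freezing idea (Keras, using a generic EfficientNet classification backbone as a stand-in, not the exact EfficientDet D2 detection pipeline):

import tensorflow as tf

backbone = tf.keras.applications.EfficientNetB2(include_top=False, pooling="avg")
backbone.trainable = False  # keep the pre-trained features intact

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(20, activation="softmax"),  # new head for the 20 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")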

",26882,,,,,7/14/2021 11:18,,,,2,,,,CC BY-SA 4.0 28664,1,,,7/14/2021 11:21,,1,23,"

It's easy to build a perceptron that can compute the logical AND and OR functions of its binary inputs.

Logistic regression could be used as a binary classifier.

$$z^{(i)} = w^T x^{(i)} + b$$

$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})$$

$$ \sigma(z) = \frac{1}{1+e^{-z}} $$

Is it possible to compute the AND and OR with logistic regression?

",45689,,2444,,7/14/2021 19:46,7/14/2021 19:46,Is it possible to compute the logical AND and OR with logistic regression?,,0,0,,,,CC BY-SA 4.0 28665,1,,,7/14/2021 11:50,,1,24,"

I am looking for a gentle introduction (videos, lecture notes, tutorials, books) to reinforcement learning (MDPs) involving continuous states (or a state space of very large cardinality). In particular, I am looking for ways to deal with them, including a good discussion that builds up the important and relevant concepts.

Most of the books I encountered just state that we need function approximation, and then move on to talk about radial basis functions. These ideas, however, are very abstract and are not easy to understand. For example, why specifically those functions?

",23723,,2444,,7/14/2021 19:39,7/14/2021 19:39,Is there a gentle introduction to reinforcement learning applied to MDPs with continuous state spaces?,,0,1,,,,CC BY-SA 4.0 28666,1,,,7/14/2021 12:09,,1,73,"

Do the terms multi-task and multi-output refer to the same thing in the context of deep learning (with neural networks)? For example, do neural networks for multi-task learning use multiple outputs?

If not, what is the difference between them?

It would be helpful if you can also give examples.

I found some of the terms here. When I went to study these terms on the Internet, I found the topic very confusing, as different authors seem to mix up those terms.

",20721,,20721,,7/15/2021 13:59,7/15/2021 13:59,Do the terms multi-task and multi-output refer to the same thing in the context of deep learning?,,0,0,,,,CC BY-SA 4.0 28667,1,,,7/14/2021 12:20,,0,50,"

I am trying to define the number of states and possible actions for a reinforcement learning problem that I want to solve with Q-learning, but I am a bit confused, as I'm totally new to reinforcement learning.

The problem I'm trying to solve is to assign people to different groups such that the people in the same group have sequential numbers. Let's say there are three people in each group.

Group 1, Group2, Group3.

1:{"group: Group1, "number": 1},
2:{"group: Group2, "number": 2},
3:{"group: Group3, "number": 3},
4:{"group: Group2, "number": 4},
5:{"group: Group1, "number": 5},
6:{"group: Group3, "number": 6},
7:{"group: Group3, "number": 7},
8:{"group: Group2, "number": 8},
9:{"group: Group1, "number": 9},

An optimal output will be a case where the numbers are sequential within each group. For example, all members of Group1 should have numbers 1, 2, 3, or 4, 5, 6, or 7, 8, 9, and not 1, 5, 9 as in the dictionary above.

In other words, Group1, Group2, Group3 represent the group ids, which means I have 3 groups. The numbers represent seat numbers. Everyone in the same group needs to sit close to each other, e.g. seat numbers 1, 2, 3 or 4, 5, 6 or 7, 8, 9.

I am wondering if the possible states should be all possible combinations of groups and numbers, in which case there will be 1680 of them, and if the possible actions are the numbers to swap to get the desired output, of which there are 9.

Any useful information will be very much appreciated.

",48585,,48585,,7/15/2021 19:08,7/15/2021 19:08,Defining states and possible actions in Q learning,,0,4,,,,CC BY-SA 4.0 28674,2,,28620,7/14/2021 16:59,,1,,"

Instead of using a token embedding, you can use a linear layer. For an input of shape (10, 5, 4), i.e. (sequence length, batch size, features), you can create a linear layer:

self.embedding_layer = nn.Linear(4, d_model)

where d_model is the dimension of the input to the transformer.

Positional encoding is still needed, so as to have a representation of time in the inputs.

src = self.embedding_layer(src)
src = self.pos_encoding_layer(src)
output = self.transformer(src)
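
Putting the pieces together, a minimal runnable sketch (PyTorch, with made-up dimensions; the positional encoding module is left as a placeholder comment):

import torch
import torch.nn as nn

class TimeSeriesTransformer(nn.Module):
    def __init__(self, n_features=4, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embedding_layer = nn.Linear(n_features, d_model)  # replaces the token embedding
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, src):
        # src: (sequence length, batch size, n_features)
        src = self.embedding_layer(src)
        # a positional encoding layer would normally be applied here
        return self.transformer(src)

output = TimeSeriesTransformer()(torch.randn(10, 5, 4))  # -> (10, 5, 64)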
",45240,,,,,7/14/2021 16:59,,,,0,,,,CC BY-SA 4.0 28675,1,,,7/14/2021 17:40,,2,109,"

I'm currently reading a paper that uses CNNs as a base approach to solving some image classification issues, and I've found that they keep mentioning the term "differentiable architecture", whose meaning I don't know, as I'm new to this world of deep learning, neural networks, etc. So, to sum up, my question is:

What does "differentiable architecture" mean?

",48594,,2444,,7/14/2021 19:17,8/14/2021 2:01,"What does ""differentiable architecture"" mean?",,1,1,,,,CC BY-SA 4.0 28676,1,,,7/14/2021 17:54,,0,46,"

It is a known fact that preprocessing images using CV techniques will improve CNN performance (see this answer).

But what happens when you feed in the entire image and the filtered image randomly to the network? Would the Neural Network learn to focus on the relevant aspects of an unfiltered image?

If yes, please explain how randomly processed images improve the CNN's performance/sample efficiency.

",31755,,2444,,7/14/2021 20:48,7/14/2021 20:48,Does randomly adding hand-engineered features increase the CNN's sample efficiency/performance?,,0,4,,,,CC BY-SA 4.0 28679,1,,,7/14/2021 22:39,,0,297,"

I'm trying to solve a reinforcement learning problem using a Monte Carlo policy gradient algorithm and, more specifically, REINFORCE, with rewards attributed to individual moves instead of applied to all steps in a rollout.

For this, I do $M$ rollouts, each with $N$ steps, and record the rewards. Let's say I fill an $M \times N$ matrix with the rewards. Sometimes just using these rewards as-is will work, but sometimes the rewards are always positive (or always negative), or the magnitudes cover a large range.

A simple thing is to just subtract the overall mean and divide it by the overall standard deviation.

In my particular case, though, the beginning is easier, and during bootstrapping the rewards will be higher. A typical case would have high rewards at the beginning with a taper to zero before the end of the rollout. So, it seems to make sense to subtract the mean along the trial ($M$) dimension. Likewise, it might make sense to normalize by the standard deviation along that dimension as well.
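
A minimal sketch of this per-step normalization (NumPy, assuming the $M \times N$ reward matrix described above):

import numpy as np

rewards = np.random.rand(8, 50)                          # M = 8 rollouts, N = 50 steps each
mean_per_step = rewards.mean(axis=0, keepdims=True)      # shape (1, N): mean over rollouts
std_per_step = rewards.std(axis=0, keepdims=True) + 1e-8
normalized = (rewards - mean_per_step) / std_per_step    # normalized along the trial (M) dimension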

My question: Have others already figured this out and developed best practices for normalizing rewards?

I may have added my own twist, but I'm training in batches and using the statistics of the batch (multiple rollouts) to do the normalization. The "subtracting the mean" part is called a baseline in some papers, I think. This article discusses it a bit.

",27616,,18758,,1/10/2022 7:57,1/10/2022 7:57,How to normalize rewards in REINFORCE?,,0,3,,,,CC BY-SA 4.0 28680,2,,28675,7/15/2021 1:17,,1,,"

Without the specific context, I cannot give a definitive answer, but it's very likely that a "differentiable architecture" refers to a neural network that represents/computes a differentiable function (so you need to use differentiable activation functions, such as the sigmoid), i.e. you can take the partial derivatives of the loss function with respect to each parameter/weight of the neural network, so you can use backpropagation to find the gradient of the loss function, consequently, you can train this neural network with gradient descent, which is a numerical/iterative optimization algorithm for finding a (local) minimum of a function.

Most architectures you will find around are differentiable. In fact, gradient descent is the most widely used algorithm for training neural networks nowadays.

",2444,,,,,7/15/2021 1:17,,,,0,,,,CC BY-SA 4.0 28682,2,,28559,7/15/2021 3:40,,2,,"

Taken directly from HuggingFace

Note that if you are used to freezing the body of your pretrained model (like in computer vision) the above may seem a bit strange, as we are directly fine-tuning the whole model without taking any precaution. It actually works better this way for Transformers model (so this is not an oversight on our side). If you’re not familiar with what “freezing the body” of the model means, forget you read this paragraph.

",48422,,2444,,7/15/2021 12:29,7/15/2021 12:29,,,,0,,,,CC BY-SA 4.0 28687,1,,,7/15/2021 8:56,,2,64,"

The algorithm is described as below:

My understanding: In the third last step, we act greedily w.r.t $Q$. Since we use importance sampling, this $Q \approx Q_\pi$. However, in the next step, whenever $A_t \neq \pi(S_t)$, it means the behavior policy isn't aligned with the target (greedy) policy. Hence, we can't use importance sampling and for such $(S_t, A_t)$ we simply take the average of $Q(S_t, A_t)$. Which means these $Q$ values aren't estimates of $Q_\pi$ but rather $Q_b$.

What's been bothering me is when the behavior and target policy eventually align for state $S_t$, won't that alignment be incorrect? Because in the previous step, we would be doing:

$\pi(S_t) = \arg \max [Q_\pi(S_t, a_1), Q_b(S_t, a_2), Q_b(S_t, a_3)]$

assuming $A(S_t) = \{a_1, a_2, a_3\}$ and the true greedy action is $a_1$.

",46214,,,,,7/15/2021 8:56,Doubt in Sutton & Barto's off-policy Monte Carlo control algorithm,,0,4,,,,CC BY-SA 4.0 28688,2,,28577,7/15/2021 15:37,,3,,"

In addition to those mentioned differences, a perceptron can be thought of as a standalone model (which is trained with a specific algorithm, the perceptron algorithm), while the artificial neuron (sometimes only referred to as neuron, in a similar way that an artificial neuron network is commonly abbreviated to neural network) is the smallest computational unit of a neural network, so it's an abstraction for a relatively simple function (e.g. sigmoid) that will be composed with other simple functions to produce a more complicated function, which is typically non-linear.

Moreover, note that people often refer to "regular" neural networks as multi-layer perceptrons (abbreviated to MLPs) for one simple reason: you can think of such an MLP as the composition of multiple perceptrons, where, in this case, the perceptron would be a synonym for artificial neuron, so the smallest computational unit of a neural network, which performs, for example, a linear combination of its inputs followed by the application of an activation function, which can be the sigmoid, tanh, ReLU, identity, or any other function that is differentiable, if you plan to train the neural network with gradient descent.

So, sometimes, the term perceptron is a synonym for artificial neuron, so the perceptron (aka neuron), in this case, could have any activation function. However, the perceptron is often assumed to have the sign function as the activation function, which is not strictly differentiable, while, as you point out, artificial neurons are not limited to the sign function.

The original (photo)perceptron models, as described in this paper, were more complicated (e.g. the inputs were not directly connected to the outputs, or you could have feedback connections), so the definitions of these concepts or what these terms refer to have evolved or can still evolve. In the past, I have also seen people use the term perceptron to refer to an MLP, but this is probably because they were not aware of the model that we typically refer to as the perceptron, for example, as described in section 8.5.4 (p. 265) of the book Machine Learning: A Probabilistic Perspective by Kevin Murphy (you can find free pdfs of this book on the web).

",2444,,18758,,12/18/2021 0:42,12/18/2021 0:42,,,,0,,,,CC BY-SA 4.0 28689,1,,,7/15/2021 15:49,,0,24,"

Time after time I need to merge two large JSON files, or more precisely add a json fragment to another file.

The too pieces are often written by different people and have different formatting (spacing), so margining them mechanically result in an ugly code with ragtag spacing, and formatting one of them manually or semi-manually takes a lot of time and effort.

How I can reformat the fragment in the same style the first file is formatted. I do not know what IDE, formatter, or style guide was used

I know that I can use automated tool to reformat the whole document into whatever is supported, but prefer to keep the original style.

I have heard that coding style can be imitated e.g. in the context of adversary authorship recognition by AI and imagine that for simple cases such as JSON that should be easy. I am interested in merely spacing, indentation, brackets placing, not, say property naming or nesting convention (not sure of ordering, probably not so interesting).

I am software developer trying to automate my tasks, rather than AI researcher, so please be patient.

I posted to stackoverflow, yet the task might be to challenging and open ended for traditional programming methods https://stackoverflow.com/questions/68396829

",48613,,48613,,7/18/2021 16:27,7/18/2021 16:27,How to apply the formatting of one json file to another. Coding style transfer for JSON,,0,2,,,,CC BY-SA 4.0 28691,1,,,7/16/2021 0:35,,1,27,"

In one-hot encoding, a vector is given to each class label. For each class, only one entry of the vector is equal to 1 and the remaining entries are zeros in this encoding.

Thus, in one-hot encoding, we are encoding the class label.

Is it true that label-embedding gives a vector for each class label like in one-hot encoding? Is one-hot encoding a type of label-embedding?

",18758,,2444,,7/16/2021 18:01,7/16/2021 18:01,Is label-embedding similar to one-hot encoding?,,0,0,,,,CC BY-SA 4.0 28693,1,,,7/16/2021 1:37,,1,146,"

I have been experimenting with activation functions on CNNs, and it occurred to me to use a rectified tanh function, that is, tanh(z) if z > 0, else 0. I have implemented it and compared it with ReLU on odd MNIST. They both achieved about a 94% success rate in 10 epochs. My logic was that humans usually tend to stop feeling more confident once they have learnt something. Similarly, I thought a convolutional-layer neuron should not feel much more confident (higher activation) with growing evidence (higher weighted input). So, is there any evidence of such a rectified tanh being more successful?
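
For concreteness, this is the activation I mean, as a minimal sketch (TensorFlow/Keras; the name rectified_tanh is just mine):

import tensorflow as tf

def rectified_tanh(z):
    # tanh(z) for positive inputs, 0 otherwise
    return tf.where(z > 0, tf.math.tanh(z), tf.zeros_like(z))

layer = tf.keras.layers.Conv2D(32, (3, 3), activation=rectified_tanh)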

",48620,,,,,7/16/2021 1:37,Using a rectified Tanh to train a CNN?,,0,2,,,,CC BY-SA 4.0 28696,1,,,7/16/2021 8:42,,0,23,"

I want to use prediction models like an LSTM-AE to predict time-series data. The feature that the neural network should learn lies in the frequency band between 40-60 Hz. So, in order to learn the feature more effectively and remove the noise, the signal will be filtered using a bandpass filter, and the result will then be passed to the network.

The problem is: if I want to develop an end-to-end solution (i.e., omitting the bandpass filtering), how can I do that?

",18245,,2444,,7/16/2021 13:30,7/16/2021 13:30,End-to-end learning using LSTM-AE,,0,2,,,,CC BY-SA 4.0 28700,1,28739,,7/16/2021 14:33,,0,50,"

I have trained a model for four days. I noticed some quite strange/unnatural behaviour.

During the training, the score and loss look like this:

However, when I see the validation score, I got:

It seems the model learns by heart at the beginning and does not generalise well afterwards. Is this natural behaviour? Maybe it's really not normal, and there must be some error in the code or algorithm? I don't know what to think anymore. Can you help me? What is a good solution?

",42628,,42628,,7/19/2021 2:54,7/19/2021 12:05,"The model learns well, but the validation decreases over time",,1,1,,7/21/2021 13:19,,CC BY-SA 4.0 28702,1,,,7/16/2021 16:21,,5,82,"

I think that it is common knowledge that for any infinite horizon discounted MDP $(S, A, P, r, \gamma)$, there always exists a dominating policy $\pi$, i.e. a policy $\pi$ such that for all policies $\pi'$: $$V_\pi (s) \geq V_{\pi'}(s) \quad \text{for all } s\in S .$$

However, I could not find a proof of this result anywhere. Given that this statement is fundamental for dynamic programming (I think), I am interested in a rigorous proof. (I hope that I am not missing anything trivial here)

",36116,,2444,,7/16/2021 17:52,7/16/2021 17:52,Proof that there always exists a dominating policy in an MDP,,0,2,,,,CC BY-SA 4.0 28705,1,,,7/16/2021 23:42,,3,649,"

Consider the following statements from A Simple Custom Module of PyTorch's documentation

To get started, let’s look at a simpler, custom version of PyTorch’s Linear module. This module applies an affine transformation to its input.

Since the paragraph mentions PyTorch's Linear module, I am guessing that an affine transformation is nothing but a linear transformation.

Suppose $x = [x_1, x_2, x_3,\cdots,x_n]$ is an input; then the linear transformation on $x$ can be $a.x+b$, where $a$ and $b$ are $n$-dimensional vectors of real numbers, and the dot ($.$) stands for the dot product.

Is affine transformation same as the linear transformation? If yes, then why the name affine is used? Does it cover something more or less than linear transformation?

",18758,,2444,,7/16/2021 23:53,7/19/2021 10:43,Is there any difference between affine transformation and linear transformation?,,2,0,,,,CC BY-SA 4.0 28707,2,,28618,7/17/2021 0:23,,1,,"

RNNs are known to be superior to MLPs in the case of sequential data like yours. But complex models like LSTMs and GRUs require a lot of data to achieve their potential. I don't know about your data, but you can try to validate your architecture, approach, and overall setting using a different, known time-series benchmark dataset. Maybe something is wrong with the architecture, loss function, data, etc. So trying a different but known benchmark dataset can give you an idea of why you are unable to produce superior results with the LSTM.

",48523,,,,,7/17/2021 0:23,,,,0,,,,CC BY-SA 4.0 28708,2,,28705,7/17/2021 0:24,,3,,"

In linear algebra, a linear transformation (aka linear map or linear transform) $f: \mathcal{V} \rightarrow \mathcal{W}$ is a function that satisfies the following two conditions

  1. $f(u + v)=f(u)+f(v)$ (additivity)
  2. $f(\alpha u) = \alpha f(u)$ (scalar multiplication),

where

  • $u$ and $v$ vectors (i.e. elements of a vector space, which can also be $\mathbb{R}$ [proof], some space of functions, etc.)
  • $\alpha$ is a scalar (e.g. which can be a real number, but not necessarily)
  • $\mathcal{V}$ and $\mathcal{W}$ are vector spaces (e.g. $\mathbb{R}$ or $\mathbb{R}^2$)

So, any function that satisfies these two conditions is a linear transformation.

In Euclidean geometry, $g(x) = ax + b$ is an affine transformation, which is generally not a linear transformation as defined in linear algebra. You can easily show that affine transformations are not linear transformations. For example, let $a = 1$ and $b = 2$, so $g(x) = x + 2$; does $g$ satisfy the second condition above for any scalar $\alpha$? No. For example, let $\alpha = 3$: then $g(3x) = 3x + 2$, while $3 g(x) = 3 (x + 2) = 3x + 6 \neq 3x + 2$.

However, in the context of neural networks, when people use the adjective "linear" they are often referring to a line. For example, in linear regression, you can have a bias (the $b$ in the affine transformation $g$ above), which would make the function not a linear transformation, but we still call it linear regression because we fit a line (hence the name linear regression) to the data.

So, no, an affine transformation is not a linear transformation as defined in linear algebra, but all linear transformations are affine. However, in machine learning, people often use the adjective linear to refer to straight-line models, which are generally represented by functions that are affine transformations. In this answer, I also talk about this issue.

",2444,,2444,,7/17/2021 0:37,7/17/2021 0:37,,,,1,,,,CC BY-SA 4.0 28710,1,28711,,7/17/2021 15:35,,1,66,"

I've been reading up on the convolution operation and neural networks. I understand that the convolution operation is defined as:

$$(f * g)(t)=\int_{-\infty}^{\infty} f(\tau) g(t-\tau) d \tau$$

The convolution operation has some properties, such as commutativity, associativity, etc.

How is the convolution connected to neural networks? How do we use this operation in a CNN?

",48556,,2444,,7/17/2021 16:01,7/17/2021 17:05,How is the convolution operation connected to neural networks?,,1,0,,,,CC BY-SA 4.0 28711,2,,28710,7/17/2021 16:50,,1,,"

The convolution operation performed by most CNNs that you will find (on the web) assumes that the signals/functions are discrete and 2-dimensional (e.g. images can be viewed as 2-dimensional discrete signals), although this does not have to be the case. In fact, 1D and 3D convolutions are also implemented in several deep learning libraries (see here for an example).

The equation in your post describes the convolution operation for 1-dimensional continuous signals with domain $\mathbb{R}$ (sometimes called the time domain, hence the usage of the letter $t$), so we use an integral that goes from $-\infty$ to $\infty$.

An equation that better describes the common 2D convolution operation performed by CNNs is the following

$$S(i, j)=(K * I)(i, j)=\sum_{m=-M}^M \sum_{n=-N}^N I(i+m, j+n) K(m, n), \tag{1}\label{1}$$

where

  • $S(i, j)$ is the convolution of the signals $I$ (e.g. an image) and $K$ (e.g. the kernel) evaluated at the coordinates (or points of the domain) $(i, j)$, so $S$ is the function that results from the convolution of $I$ and $K$
  • $(K * I)(i, j)$ is just another notation for $S(i, j)$, which emphasizes that the convolution is performed between $K$ and $I$, and $*$ is the convolution operation
  • $K(m, n)$ is the value of the function $K$ (in the context of neural networks and image processing, this is often known as the kernel, hence the $K$, or filter) evaluated at $(m, n)$
  • similarly, $I(i+m, j+n)$ is the value of the function $I$ (e.g. the image) evaluated at $(i+m, j+n)$

So, essentially, equation \ref{1} is telling us that the value of the convolution between $I$ and $K$, denoted by $S$, evaluated at $(i, j)$ is a sum of products, where the factors are $I(i+m, j+n)$ and $K(m, n)$, where the values of $m$ and $n$ depend on $M$ and $N$, which could also be equal, and typical values would be, for example, $1$ or $2$, so equation \ref{1} could be written as follows

$$S(i, j)=(K * I)(i, j)=\sum_{m=-1}^1 \sum_{n=-1}^1 I(i+m, j+n) K(m, n), \tag{2}\label{2}$$

If you expand the summations, you will get

$$S(i, j)= I(i-1, j-1) K(-1, -1) + I(i-1, j) K(-1, 0) + \dots + I(i+1, j+1) K(1, 1)$$

This is essentially a 2D dot product or, equivalently, an element-wise multiplication followed by the addition of those multiplications.

Another important thing to note is that the convolution of $I$ and $K$ evaluated at $(i, j)$ uses the values of $I$ with respect to the points $(i, j)$.

Moreover, in a 2D convolutional layer of a CNN, we do not just compute $S(i, j)$, but we compute $S(i, j)$ for all possible values of $i$ and $j$, which are typically the coordinates of the image (in the case of the first layer).

Finally, the equation \ref{2} actually describes what is known as the cross-correlation operation, which is exactly the same as the convolution operation, but we use $+$ instead of $-$ in $I(i+m, j+n)$. In practice, in the context of CNNs, it doesn't really matter whether you use the $+$ and $-$, given that the kernels $K$ are learned. However, the convolution operation and the cross-correlation would be different in the case the kernels are not symmetric, but identical in the case they are. You can read more about this topic in this answer that I had written a while ago.
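
As an illustration (not part of any particular library), here is a minimal NumPy sketch of the 2D cross-correlation of equation \ref{1}/\ref{2}, written with the usual 0-based "valid" indexing instead of the $-M, \dots, M$ offsets:

import numpy as np

def cross_correlate2d(I, K):
    kh, kw = K.shape
    H, W = I.shape
    S = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(S.shape[0]):
        for j in range(S.shape[1]):
            # element-wise product of the window and the kernel, then a sum
            S[i, j] = np.sum(I[i:i + kh, j:j + kw] * K)
    return S

I = np.arange(25.0).reshape(5, 5)   # a toy 5x5 "image"
K = np.ones((3, 3)) / 9.0           # a simple averaging kernel
print(cross_correlate2d(I, K))      # 3x3 output map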

You should probably also read this chapter that more thoroughly describes all these operations. There are also other answers that I had written in the past that could be useful, such as this, this, this, this, and/or this. By the time you have understood all these answers, you will also have understood the convolution operation and how it is related to CNNs.

",2444,,2444,,7/17/2021 17:05,7/17/2021 17:05,,,,0,,,,CC BY-SA 4.0 28714,1,,,7/18/2021 1:06,,1,89,"

Consider the following PyTorch code

# Run a sample training loop that "teaches" the network
# to output the constant zero function
for _ in range(10000):
  input = torch.randn(4)
  output = net(input)
  loss = torch.abs(output)
  net.zero_grad()
  loss.backward()
  optimizer.step()

and its corresponding explanation on training a neural network

A training loop…

  1. acquires an input,
  2. runs the network,
  3. computes a loss,
  4. zeros the network’s parameters’ gradients,
  5. calls loss.backward() to update the parameters’ gradients,
  6. calls optimizer.step() to apply the gradients to the parameters.

The code contains net.zero_grad(), which has been explained as "zeros the network's parameters' gradients".

What is meant by "zeros the network's parameters' gradients"? In general, the loss is backpropagated by calculating the gradients of the loss w.r.t. the parameters. But I didn't understand the phrase "zeros the network's parameters' gradients". What does that particular step do?

",18758,,,,,7/18/2021 5:09,"What does it mean by ""zeros the networks parameters gradients"" in the context of training a neural network?",,1,0,,,,CC BY-SA 4.0 28715,2,,28714,7/18/2021 5:09,,2,,"

In the automatic differentiation procedure, after a backward pass, the gradient with respect to the scalar loss is added to the currently stored gradient. Without calling zero_grad, you will have the sum of all the gradients calculated before plus the current gradient.

Therefore, optimizer.step() will not do this:

w = w - eta * grad L[i] # L[i] - loss function for the i-th sample

But rather:

w = w - eta * sum_i(grad L[i]) # sum of gradient with respect to all samples

Which is not the desired behavior.
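
A minimal sketch showing the accumulation (PyTorch, toy scalar example):

import torch

w = torch.tensor(1.0, requires_grad=True)
for _ in range(2):
    loss = (3.0 * w) ** 2   # gradient w.r.t. w is 18*w = 18 each time
    loss.backward()
print(w.grad)               # tensor(36.): the two gradients were summed

w.grad.zero_()              # this is what net.zero_grad() does before the next step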

",38846,,,,,7/18/2021 5:09,,,,3,,,,CC BY-SA 4.0 28716,1,28717,,7/18/2021 10:18,,2,267,"

In one Udemy course, it was mentioned that "dropout is unique to neural networks". However, I remember an example with decision trees where nodes that are not contributing to the overall result are removed, and I think that this technique is also called "dropout". Am I correct?

",48659,,2444,,7/18/2021 12:12,7/18/2021 13:07,Is the dropout technique specific only to neural networks?,,1,0,,,,CC BY-SA 4.0 28717,2,,28716,7/18/2021 13:07,,4,,"

I'm sure you can use dropout in any parameterized model, but I suspect it'll only really be helpful if you have enough parameters/nodes. Also dropout in neural nets has a Bayesian meaning, Yarin Gal for example has done lots of work on this.

In your decision tree example, I believe you're talking about pruning, which is different. In that context you're removing nodes that you know aren't contributing. In dropout, you randomly turn off nodes during training in order to prevent individual nodes from being too influential, but the nodes are never removed.

You might also be interested in L1 regularization in parameterized models. This is when you add a penalty according to the absolute weights (rather than square weights), which tends to drive less useful weights to 0. Then you can remove the nodes with almost no weight. This is more akin to your decision tree example rather than dropout though.

",37829,,,,,7/18/2021 13:07,,,,4,,,,CC BY-SA 4.0 28722,1,,,7/18/2021 15:21,,1,95,"

It confuses me how we can normalize the reward without actually knowing the true mean and variance of the reward distribution, specifically in the early steps and episodes. This may cause problems for RL algorithms that use a replay buffer, such as DDPG, because these wrongly normalized rewards can stay in the buffer for too long, and the network will adapt to them. Is there something that I am missing or have misunderstood? For algorithms with a replay buffer, is standardization better than normalization?

",46577,,,,,7/18/2021 15:21,Would the reward normalization be wrong in early episodes?,,0,0,,,,CC BY-SA 4.0 28724,2,,28705,7/18/2021 16:40,,2,,"

The fact is you can always express an affine transformation as a linear transformation (more convenient because it is just a matrix/dot product).

For instance, given an input $\textbf{x}=[x_1, ..., x_n]$, some weights $\textbf{a} = [a_1, a_2, ..., a_n]$ and a bias $b \in \mathbb{R}$, you can express the affine operation $y = \textbf{a}\cdot \textbf{x} + b$ as :

$y = \tilde{\textbf{a}} \cdot \tilde{\textbf{x}}$, with $\tilde{\textbf{a}} = [a_1, ..., a_n, b]$ and $\tilde{\textbf{x}} = [x_1, ..., x_n, 1]$ (linear operation)

When your affine transformation is a function $f:\mathbb{R}^p \rightarrow \mathbb{R}^q$ with $\textbf{y}=f(\textbf{x})=A\textbf{x} + \textbf{b}$, you can use the same trick (by adding a column with the biases at the right end of the weight matrix $A$), so you get: $\textbf{y}=\tilde{A}\tilde{\textbf{x}}$
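
A minimal numerical check of this trick (NumPy, made-up values):

import numpy as np

x = np.array([2.0, 3.0])
a = np.array([0.5, -1.0])
b = 4.0

affine = a @ x + b              # affine form: a.x + b
x_tilde = np.append(x, 1.0)     # [x1, x2, 1]
a_tilde = np.append(a, b)       # [a1, a2, b]
linear = a_tilde @ x_tilde      # purely linear form, same value
assert np.isclose(affine, linear)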

I found an example in this video, where Andrew Ng uses this trick for a simple Linear Regression.

",48662,,48662,,7/19/2021 10:43,7/19/2021 10:43,,,,0,,,,CC BY-SA 4.0 28727,1,28753,,7/18/2021 20:28,,0,116,"

A simple two-player sniper game:

  • Each player has 9 houses that he can reside in. So 18 houses in total. The houses can be considered in a row: e.g. 1-9 for player A, and 10-18 for player B.

  • Each step, the player has to make two actions! First, he can use his gun's limited view to check three consecutive houses of the enemy to see if he is there (for example, he chooses 3, 4, 5). Then, based on that result, he can choose one house to shoot. That means, if he guessed correctly, he will know the other player is in one of those three houses. Otherwise, he can shoot one of the remaining six houses.

  • The killer wins!


Please note that, in each step, the player has to perform two actions without interruption from the other player. Based on the result of the first action (the limited view), he will have more information to select his second action (shooting). Thus, the first action is informative and reduces the action space.


I have decided to use stable-baselines3. I have to create an environment. I am not sure about the policy network.

How should I approach this game for training an AI agent? I would really appreciate it if you could guide me on environment creation or policy selection, or give any general tips.

",48665,,48665,,7/19/2021 10:10,7/20/2021 12:53,How to approach a two-agent two-step action game?,,1,0,,,,CC BY-SA 4.0 28728,1,,,7/18/2021 21:54,,0,401,"

I am using RLlib (Ray 1.4.0)'s implementation of PPO for a multi-agent scenario with continuous actions, and I find that the loss includes the KL divergence penalty term, apart from the surrogate loss, value function loss, and entropy.

The KL coefficient is updated in the update_kl() function as follows:

    if sampled_kl > 2.0 * self.kl_target:
        self.kl_coeff_val *= 1.5
    # Decrease.
    elif sampled_kl < 0.5 * self.kl_target:
        self.kl_coeff_val *= 0.5
    # No change.
    else:
        return self.kl_coeff_val

I don't understand the reasoning behind this. If the point of the KL "target" is to reach the target, then why do the conditions above imply that the KL coefficient is getting larger (multiplied by 1.5 when the sampled KL is already found to be larger than the target?) when it is supposed to be made smaller instead? I feel like I am missing something here, but I am not able to get my head around it.

I would appreciate any insights on this. Thank you.

",21513,,,,,11/29/2022 23:09,KL divergence coefficient update doesn't make sense in RLlib's PPO implementation,,1,0,,,,CC BY-SA 4.0 28729,1,,,7/18/2021 23:15,,0,56,"

Consider the following method related to buffers in PyTorch

buffers(recurse=True)

Returns an iterator over module buffers.

Parameters

    recurse (bool) – if True, then yields buffers of this module and all submodules. Otherwise, yields only buffers that are direct members of this module.

Yields

    torch.Tensor – module buffer

buffers() is a method used on models (say, neural networks) in PyTorch. model.buffers() contains tensors related to the model, as you can see from the example provided below.

>>> for buf in model.buffers():
>>>     print(type(buf), buf.size())
<class 'torch.Tensor'> (20L,)
<class 'torch.Tensor'> (20L, 1L, 5L, 5L)

The following method shows that a model in PyTorch has both parameters and buffers. So, buffers cannot be the same as the parameters (say, the weights) of the model.

 cpu()

    Moves all model parameters and buffers to the CPU.

I am not aware of any numbers to store for a model other than its parameters. So, I have no idea what the buffers of a model in PyTorch store. But this implies that there are some other numbers related to a model that need to be stored for efficiency or other purposes.

Is it PyTorch specific? Else, what are those numbers, other than parameters, that need to be stored for a model?

",18758,,18758,,1/17/2022 8:14,1/17/2022 8:14,What are the numbers that are useful (may need to be stored) other than parameters of a model?,,0,6,,,,CC BY-SA 4.0 28732,1,,,7/19/2021 6:35,,0,87,"

While reading about Module in PyTorch, I came across a new data type called half datatype.

The half() method, when called on a Module, casts all floating-point parameters and buffers to the half datatype.

It is a 16-bit floating-point number as mentioned here.

It is mentioned in Wikipedia that

It is intended for storage of floating-point values in applications where higher precision is not essential for performing arithmetic computations.

This implies that the precision of the parameters (say, the weights of a neural network) is not important in certain applications, and hence one can use the half datatype while implementing a neural network.

Does any research support the statement that the precision of the weights, that is, the range of values they can take, is unimportant for certain applications?

",18758,,2444,,1/17/2022 10:30,11/14/2022 17:06,What are the applications in which the precision of the neural network's weights is unimportant?,,2,0,,,,CC BY-SA 4.0 28734,1,,,7/19/2021 9:33,,1,107,"

Let's say, for example, I have built the following CNN model using Keras:

model = Sequential()
model.add(Conv2D(32, (3,3), activation='relu', input_shape=(32,32,3)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(32, (3,3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(512))
model.add(Dense(10, activation='softmax'))

I wish to be able to transform the above model into a mathematical formula.


I understand the basic structure of a CNN as follows:

where


However, I do not know how to go from the above recursive formula to something like this (the first two summations are weights and the second two are adjustable biases):

Note: The formula above is just an example and not representative of the code given above.


  • Do I need to trace each weight, each bias and each connection of every neuron? If so, how?
  • Furthermore, I would highly appreciate it if someone could provide a generalized strategy for tackling such a problem (like finding a math formula to suit a different kind of classifier).
  • Lastly, is this an easy task and is it a worthwhile one?

Note: This question was originally posted on Stack Overflow. Unfortunately, I received no responses even after offering a bounty. Hence, I am uploading the question here. Link to the original post here.

",,user48670,,,,7/19/2021 9:33,How can I compute a mathematical formula for my CNN?,,0,2,,,,CC BY-SA 4.0 28735,1,,,7/19/2021 11:07,,0,29,"

Let's say I have designed an ML model that can take video input of a dog running around and give the breed of the dog as output. However, I do not want to wait for the video to finish before it is input into my model. I want something like the following to happen:

I am casually taking a video of my backyard when mid-way through a dog runs past the camera. Immediately, my model should identify (a) a dog has appeared within view and (b) the dog is a Labrador Retriever.

In an attempt to achieve the above, I have the following questions:

  1. Do I need to train a new model that detects when a dog has appeared within view?
  2. How can I make my model such that the input is continuous and that the model keeps running providing instant output?

Note: This question was originally posted on Stack Overflow. It was closed as a consequence of it being unfitting to the site. Hence, I am uploading the question here. Link to the original post here.

",,user48670,,,,7/19/2021 11:07,How can I take continuous video input into my model?,,0,2,,,,CC BY-SA 4.0 28736,1,,,7/19/2021 11:43,,1,19,"

I do not understand the purpose of the $\lambda$ parameter in equation 3 of the paper Interpretable Explanations of Black Boxes by Meaningful Perturbation.

$$m^{*}=\underset{m \in[0,1]^{\Lambda}}{\operatorname{argmin}} \lambda\|\mathbf{1}-m\|_{1}+f_{c}\left(\Phi\left(x_{0} ; m\right)\right) \tag{3}\label{3}$$

As far as I understand, the argmin function returns the $m$ for which the term $\lambda\|\mathbf{1}-m\|_{1}$ is smallest. If that's the case, I don't understand the purpose of $\lambda$, since it doesn't change the result of $m$.

",43632,,2444,,7/19/2021 17:49,7/19/2021 17:49,"What does the lambda parameter in the paper ""Interpretable Explanations of Black Boxes by Meaningful Perturbation"" do?",,0,0,,,,CC BY-SA 4.0 28737,2,,27920,7/19/2021 11:50,,0,,"

If you want to train a model that is similar to FaceNet, you have to train a neural network with a triplet loss, similar to the one that you have seen in the tutorial. After training the full network, you only use the part of the network that extracts the embeddings, not the whole network, so when you call model.predict() you will get embeddings as output.
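
As an illustration (a minimal sketch in PyTorch, not the tutorial's exact code; the embedding network and the margin below are arbitrary choices), the triplet loss takes an anchor, a positive and a negative embedding:

import torch
import torch.nn as nn

embedder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))  # hypothetical embedding network
triplet_loss = nn.TripletMarginLoss(margin=0.2)

anchor   = embedder(torch.randn(16, 128))  # features of images of person A
positive = embedder(torch.randn(16, 128))  # other images of person A
negative = embedder(torch.randn(16, 128))  # images of a different person

loss = triplet_loss(anchor, positive, negative)
loss.backward()

At inference time, you would only call embedder(...) to get the embeddings, which corresponds to keeping just the embedding-extraction part of the trained network.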

For more theoretical info, you can refer to Andrew Ng's lecture about it.

For the practical part, unfortunately there is not too much material available, but hopefully this can help, at least for building and using the network.

",48681,,,,,7/19/2021 11:50,,,,0,,,,CC BY-SA 4.0 28739,2,,28700,7/19/2021 12:05,,1,,"

This is simply overfitting. The model performs well on the training data but badly on the test (unseen) data; you can also detect it by noticing a large difference between the training accuracy and the validation accuracy. To solve this, you need to apply some data or network modifications in order to avoid overfitting. Some techniques that might help you (see the brief sketch after this list for points 2 and 3):

  1. Reduce network complexity by removing some of the layers if it is complex
  2. Use Dropout() inside your network
  3. Apply Regularization
  4. Check your class distribution and make a proper train/dev/test split
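
As a rough illustration of points 2 and 3 (a sketch only, assuming a Keras/TensorFlow model; the layer sizes and rates are arbitrary):

from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Dense(128, activation='relu', kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),  # randomly drops 50% of the units during training
    layers.Dense(64, activation='relu', kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),
])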
",48681,,,,,7/19/2021 12:05,,,,1,,,,CC BY-SA 4.0 28740,1,28860,,7/19/2021 12:25,,1,95,"

I am reading this paper in an attempt to recreate the salient region detection and segmentation model employed. I have the following questions pertaining to section 3 of the paper and I would highly appreciate it if someone could provide clarity on them.

  1. The word "scales" is used at multiple points in the section, for example, line 4 of the section states "saliency maps are created at different scales". I do not exactly understand what the authors mean by the word scales. Moreover, is there a mathematical way to think about it?

  2. I understand that a saliency value is computed for each pixel at () using the equation

However, there is no mention of in the equation. Hence, I am confused as to what pixel the saliency value is being computed for. Is it ?

  3. I did not understand what the authors meant by the term "bin" in section 3.2 line 5 where it is stated, "The hill-climbing algorithm can be seen as a search window being run across the space of the d-dimensional histogram to find the largest bin within that window."

Note 1: This question was originally posted on Stack Overflow. I was advised to post it on another platform as a consequence of it being unfitting to the site. Hence, I am uploading the question here. Link to the original post here.

Note 2: In case you are unable to access the link to the research paper, the following citation may help: Achanta, R., Estrada, F., Wils, P., & Süsstrunk, S. (2008, May). Salient region detection and segmentation.

",,user48670,,,,7/27/2021 14:05,Questions about a research paper on salient region detection and segmentation,,1,3,,4/9/2022 4:52,,CC BY-SA 4.0 28743,2,,145,7/19/2021 20:20,,-2,,"

AIXI is important. The reinforcement learning we already see is a smaller, classical version of the full intractable model; one would require quantum computation to see that full model come to fruition.

",48692,,,,,7/19/2021 20:20,,,,0,,,,CC BY-SA 4.0 28744,1,,,7/19/2021 23:21,,1,119,"

Batch normalization is a procedure widely used to train neural networks. Mean and standard deviation are calculated in this step of training.

Since we train a neural network by dividing training data into batches, we use the word batch normalization as we consider a batch of training vectors at a time.

My doubt is about the position of batch normalization layers in a neural network.

Is it present before the input layer only? Or is it before every layer? Or is it dependent on the underlying task?

Suppose there is a neural network of 4 layers: $I \rightarrow h1 \rightarrow h2 \rightarrow h3 \rightarrow O$

Which one of the following is true?

$$bn \rightarrow I \rightarrow h1 \rightarrow h2 \rightarrow h3 \rightarrow O$$

$$bn1 \rightarrow I \rightarrow bn2 \rightarrow h1 \rightarrow bn3 \rightarrow h2 \rightarrow bn4 \rightarrow h3 \rightarrow bn5 \rightarrow O$$

Here $I$ stands for input layer, $h$ for the hidden layer, $O$ for output layer, and $bn$ for batch normalization layer.

",18758,,18758,,11/29/2021 3:01,11/29/2021 3:01,Where does batch normalization layers present in a neural network?,,1,1,,,,CC BY-SA 4.0 28750,1,,,7/20/2021 10:29,,1,67,"

Currently I'm trying to implement Scrabble with MuZero.

The $15 \times 15$ game board observation (as input) is of size $27 \times15 \times15$ (26 letters + 1 wildcard) with a value of 0 or 1.

However, I'm having difficulty finding a suitable way to encode the player's rack of letters (always 7 letters on the rack).

The available tiles are: 26 letters $(A-Z)$ and 1 wildcard.

A rack can also contain multiple tiles of the same letter.

Example: rack of player 1 is $[A,A,C,E,T,T,H] -> A:2x, C:1x, E:1x, T:2x, H:1x$

How can I represent a rack of tiles as a $(? \times)15 \times15$ (or other board size) matrix?

",48309,,18758,,7/22/2021 15:15,8/12/2022 23:52,Scrabble rack observation with MuZero,,1,0,,,,CC BY-SA 4.0 28751,1,,,7/20/2021 10:47,,1,16,"

Can anyone explain what language-conditioned visual reasoning is? I saw this term in this paper and I searched on the internet but I couldn't find a proper explanation.

",48700,,2444,,7/20/2021 20:11,7/20/2021 20:11,What is language-conditioned visual reasoning?,,0,0,,,,CC BY-SA 4.0 28752,1,,,7/20/2021 12:32,,0,47,"

I have a question.

I have the following dataset:

             sensor1   sensor2  sensor3 ...
2021-01-01    1.32       2.2      1.0
2021-01-02    4.3        2.0      0.8 ...
...

I know the ARMA model is useful for time-series forecasting.

However, how can I use the ARMA model for data with multiple attributes?

If only data with a single attribute can be an input to the ARMA model, should I aggregate the attribute set (for example, after normalizing each attribute, sum the values in every row to transform all attributes into a single attribute)?

",38808,,,,,12/12/2022 20:05,"How can I use a prediction model (e.g., ARMA model or LSTM) for multi-variate data?",,1,0,,,,CC BY-SA 4.0 28753,2,,28727,7/20/2021 12:53,,1,,"

I am not sure how in-depth the information you need should be, but maybe I can share some thoughts that may help you!

Environment: I would highly recommend using OpenAI Gym for your environment, since most of the already-implemented RL algorithms are designed to use Gym environments. You can design your environment any way you want and then use Gym to handle the exchange of actions, observations, and rewards. If you want a 3D environment, you could look into Unity3D, which has a machine-learning toolbox, but as far as I understood, you can just model the game in Python.
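
For example, a bare-bones custom Gym environment could look like the sketch below (the class name, observation size and action count are placeholders, not taken from your game):

import gym
import numpy as np
from gym import spaces

class TwoStepGameEnv(gym.Env):
    def __init__(self):
        super().__init__()
        self.action_space = spaces.Discrete(4)                                         # placeholder: 4 possible actions
        self.observation_space = spaces.Box(0.0, 1.0, shape=(10,), dtype=np.float32)   # placeholder: 10 features

    def reset(self):
        self.state = np.zeros(10, dtype=np.float32)
        return self.state

    def step(self, action):
        # apply the chosen action to your game logic here
        reward, done, info = 0.0, False, {}
        return self.state, reward, done, info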

Algorithm: Now the tricky part. It really depends on how you design your game and how you want to handle the agents. A simple scenario would be using only one agent (single-agent RL). In this case, you would have to code a simple opponent yourself, and the agent could only be as good as the opponent you coded.

If you want both agents to be as good as possible, you would have to use a multi-agent RL (MARL) approach. MARL comes with its own problems and difficulties. The main problem is that the learning environment is unstable, because the resulting states and rewards do not depend only on the action of one agent, but on the actions of multiple agents. In a competitive game where the agents do not communicate, independent reinforcement learning (IRL) could work. In this case, you would use two completely separate agents, with a single-agent RL algorithm for each agent.

However, other MARL algorithms make use of centralised training, which guarantees a stable learning environment, and decentralised execution. MADDPG is a widely used algorithm for MARL problems; however, there are a lot of different and newer algorithms. I would recommend checking recent review papers like this one.

How I would approach the problem: Now, I don't know how helpful this was. If I had to train agents for your scenario, I would follow these steps:

  1. Create the environment and test it
  2. Design an observation space - what does the agent see in each time step?
  3. Decide on the action spaces (discrete in your case) - which actions can the agent choose?
  4. Design a reward function to tell the agent when an action was successful
  5. Decide on an algorithm and train the agents with scenarios as simple as possible, then make them more difficult once training is working.

Hope this helps!

",43752,,,,,7/20/2021 12:53,,,,1,,,,CC BY-SA 4.0 28754,1,,,7/20/2021 13:14,,0,136,"

I want to create an AI that converts words to International Phonetic Alphabet (IPA), but I am not sure which architecture I am supposed to use.

It is not possible to translate the characters one by one, since multiple characters in the source word can correspond to one IPA character. There are solutions for this kind of problem, for example, using an encoder that encodes the content of the input, which a decoder then translates, but I am uncertain whether this isn't too abstract for this problem.

Can anyone think of a suitable solution for this task?

",48702,,18758,,7/21/2021 12:55,12/17/2022 20:05,Building an AI that predicts the pronunciation of words,,1,2,,,,CC BY-SA 4.0 28755,2,,28752,7/20/2021 16:16,,1,,"

I don't think you need to go for aggregation -- this looks like a job for VARIMA, the vector-version of ARIMA. In ARIMA, the output of the sequence at time $t$, which can be notated $X_t$, is a function of the past inputs $\{X_1, X_2, \dots, X_{t-1}\}$. For a univariate $AR(k)$ process, the corresponding ARIMA model is given by

$$ X_t - \sum_{i=1}^k \alpha_i X_{t-i} = \varepsilon_t + \sum_{i=1}^k \theta_{i}\varepsilon_{t-i}$$

with parameters $\alpha_i, \theta_i$, inputs $X_i$, and i.i.d. zero-mean Gaussian error terms $\varepsilon_i$. The generalization of this to multiple variables is thus simply

$$\mathbf{x}_t - \sum_{i=1}^k \mathbf{A}_i \mathbf{x}_{t-i} = \mathbf{e}_t + \sum_{i=1}^k \mathbf{\Theta}_i \mathbf{e}_{t-i}$$

for $\mathbf{x}_i, \mathbf{e}_i \in \mathbb{R}^n$. Note that before, the parameters were scalars. Now, $\mathbf{A}_i, \mathbf{\Theta}_i \in \mathbb{R}^{n\times n}$ -- size-$n$ square matrices.

It looks like there's a Github implementation here as well, though I haven't looked closely at this.
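
If you'd rather not implement it yourself, here is a hedged sketch using the VARMAX class from statsmodels, which fits this kind of vector ARMA model (the file name and the orders below are just placeholders):

import pandas as pd
from statsmodels.tsa.statespace.varmax import VARMAX

# one column per sensor, a date index per row, as in the question
df = pd.read_csv('sensors.csv', index_col=0, parse_dates=True)

model = VARMAX(df, order=(2, 1))      # VARMA with p=2, q=1; in practice choose p, q via AIC/BIC
result = model.fit(disp=False)
forecast = result.forecast(steps=7)   # 7-step-ahead forecast for all sensors jointly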

If you're going for an LSTM-based sequence modeling approach, this is even easier -- an LSTM cell can take in input of arbitrary dimensions, so you shouldn't have to make any changes.

If you'd like to see the math, concretely, the LSTM forward pass equations have the form

$$(\cdot)^{(t)} = g(\mathbf{W}^{(\cdot)} x^{(t)} + \mathbf{U}^{(\cdot)}h^{(t-1)} + \mathbf{b}^{(\cdot)})$$

where $\mathbf{W}, \mathbf{U}$ are the parameter matrices associated with the inputs and hidden states, respectively, for each gate, and $\mathbf{b}$ is a bias term. So in the single-variable case, $\mathbf{W}, \mathbf{U}, \mathbf{b}$ are all scalars; in the multivariate case, $\mathbf{W}, \mathbf{U}$ are now size-$n$ square matrices, and $\mathbf{b}$ is a vector of length $n$. No further modification is needed, and you should be able to just plug-and-play with most deep learning libraries.

So your inputs would just be the vector of attributes at a particular time step.
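
For instance, a minimal PyTorch sketch (the sizes are placeholders), where the input at each time step is simply the vector of attributes:

import torch
import torch.nn as nn

n_attributes, hidden_size = 3, 32             # e.g. sensor1, sensor2, sensor3
lstm = nn.LSTM(input_size=n_attributes, hidden_size=hidden_size, batch_first=True)
head = nn.Linear(hidden_size, n_attributes)   # predict the next attribute vector

x = torch.randn(8, 20, n_attributes)          # (batch, time steps, attributes)
output, _ = lstm(x)
next_step = head(output[:, -1, :])            # prediction for t+1, shape (8, 3)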

",48705,,,,,7/20/2021 16:16,,,,0,,,,CC BY-SA 4.0 28756,2,,27021,7/20/2021 17:50,,1,,"

The most popular theoretical framework in use currently, in the neuromorphic (brain-inspired) computing community is the Neural Engineering Framework (NEF). Neural Engineering by Chris Eliasmith and Charles Anderson explains the framework comprehensively.

As a follow up to that, How to Build a Brain by Chris Eliasmith describes the more recent and more high-level description of how to get spiking neural networks to actually perform multiple functions: the Semantic Pointer Architecture (SPA).

If you're looking for hardware descriptions too, the research publications on the chips Neurogrid, Brainscales, Braindrop, SpiNNaker, Loihi, TrueNorth etc. provide some good high-level descriptions of how to actually build the aforementioned chips.

",48708,,,,,7/20/2021 17:50,,,,1,,,,CC BY-SA 4.0 28759,1,28770,,7/20/2021 21:38,,0,266,"

I am trying to build a Deep Learning model that takes a numeric vector $X$ of dimension $1 \times 50$ and predicts a numeric vector $y$ of dimension $1 \times 50$.

It's a linear regression problem. I am trying to obtain the coefficients/weights that map $X$ to $y$.

Code I used:

X = np.array(...)  # array of 50 features and 5 sample vectors (shape of X is 5x50)
y = np.array(...)  # array of 50 features (shape of y is 5x50)
model = Sequential([Dense(1, input_shape=[5,50])])

optimizer = Adam(0.001)

model.compile(loss='mse', optimizer=optimizer, metrics=['mae', 'mse'])

model.fit(X, y, epochs=250, validation_split=0.25)

So, basically, we want to achieve $Xw \approx y$, where $w$ is the vector of weights/coefficients that we want to identify using DL.

Programmatically, I tried using the same logic and calculated $w = y \cdot X^{-1}$ for all the vectors, took the average of the coefficients, and applied it to the test data.

",48712,,18758,,7/21/2021 12:58,7/21/2021 12:58,Train a deep learning model with input as a vector and predicts as a vector?,,1,2,,7/22/2021 14:24,,CC BY-SA 4.0 28760,1,28799,,7/20/2021 22:33,,3,539,"

I know about the CPU, GPU and TPU. But this is the first time I have read about the XPU, which I found in the PyTorch documentation about MODULE.

xpu(device=None)


Moves all model parameters and buffers to the XPU.

This also makes associated parameters and buffers different objects. So it should be called before constructing optimizer if the module will live on XPU while being optimized.

Note: This method modifies the module in-place.

Parameters

    device (int, optional) – if specified, all parameters will be copied to that device

Returns

    self

Return type

    Module

CPU stands for Central Processing Unit. GPU stands for Graphical Processing Unit. TPU stands for Tensor Processing Unit.

Most of us know that these processing units are highly useful in computationally intensive domains of AI, including deep learning. So, I am wondering whether the XPU is also useful in AI research, since it is used in PyTorch.

From the context, I can say that PU stands for processing unit. But I have no idea what the X stands for.

What is the full form for XPU? Where can I read about XPU in detail?

",18758,,18758,,10/21/2021 1:30,10/21/2021 1:30,What exactly is an XPU?,,1,1,,,,CC BY-SA 4.0 28767,1,29918,,7/21/2021 8:45,,2,1116,"

While reading about 1D-convolution in PyTorch, I encountered the concept of channels

in_channels (int) – Number of channels in the input image

out_channels (int) – Number of channels produced by the convolution

Although I encountered this concept of channels early on, I am still confused about channels and might understand them in the wrong way.

Since the operation we are discussing is 1D convolution, there will be two lists of numbers: one is the input list and the other is the filter list. The result is the feature map (output list).

They look like below

The left one is the input list, the middle one is the filter list and the rightmost is the output list.

Each cell in the input list contains a whole number. Each cell may take a value in a fixed range $[a,b]$ of numbers.

What is the concept of a channel here? Where do the channels come from? Does the number of channels stand for the number of elements in the corresponding list?

",18758,,18758,,6/1/2022 23:27,9/14/2022 5:40,What does 'channel' mean in the case of an 1D convolution?,<1d-convolution>,2,1,,,,CC BY-SA 4.0 28770,2,,28759,7/21/2021 12:48,,0,,"

As far as I can see, there is a problem in your third line of code: input_shape should be the shape of a single input to the network, not the shape of your training data. If your training data has the shape 5x50, you have 5 training samples in your dataset. For this to work, your input shape has to be 50:

model = Sequential([Dense(1, input_shape=[50])])

Now with model.summary() you can check that the input of your network is (None, 50).

Also note that, in a neural network, each neuron has an activation function. Depending on the activation function $f(z)$, the output would be $y = f(b+W^TX)$, which is not the same as linear regression.

Hope this helps!

",43752,,,,,7/21/2021 12:48,,,,0,,,,CC BY-SA 4.0 28771,1,,,7/21/2021 12:58,,3,66,"

I am trying to prove the following lemma from Reinforcement Learning: Theory and Algorithms on page 8.

Lemma 1.6. We have that: $$ \left[(1-\gamma)\left(I-\gamma P^{\pi}\right)^{-1}\right]_{(s, a),\left(s^{\prime}, a^{\prime}\right)}=(1-\gamma)\sum_{h=0}^{\infty} \gamma^{h} \mathbb{P}_{h}^{\pi}\left(s_{h}=s^{\prime}, a_{h}=a^{\prime} \mid s_{0}=s, a_{0}=a\right) $$ where $\pi$ is a deterministic and stationary policy with: $$ P_{(s, a),\left(s^{\prime}, a^{\prime}\right)}^{\pi}:=\left\{\begin{aligned} P\left(s^{\prime} \mid s, a\right) & \text { if } a^{\prime}=\pi\left(s^{\prime}\right) \\ 0 & \text { if } a^{\prime} \neq \pi\left(s^{\prime}\right) \end{aligned}\right. $$ The Corollary 1.5 should also be useful: $Q^{\pi}=\left(I-\gamma P^{\pi}\right)^{-1} r$

To be honest, I don't have much of an idea how to do it. I saw that the LHS of Lemma 1.6 is related to $Q^\pi$, so my idea is to expand $Q$ and see if it's possible to separate out the $r$. I did the following but ended up clueless: $$\begin{aligned} Q^{\pi}(s, a) &=E\left[\sum_{t=0}^{\infty} \gamma^{t} R\left(s_{t}, a_{t}\right) \mid \pi, s_{0}=s, a_{0}=a\right] \\ &=R(s, a)+\gamma \sum_{s^{\prime}} P\left(s^{\prime} \mid s, a\right) E\left[\sum_{t=0}^{\infty} \gamma^{t} R\left(s_{t+1}, a_{t+1}\right) \mid \pi, s_{1}=s^{\prime}, a_{1}=\pi\left(s^{\prime}\right)=a^{\prime}\right] \\ &= R(s, a)+\gamma \sum_{s^{\prime}} P\left(s^{\prime} \mid s, a\right) \left[R(s^\prime,a^\prime) + \gamma\sum_{s^{\prime\prime}}P(s^{\prime\prime}|s^\prime,a^\prime)Q(s^{\prime\prime}, a^{\prime\prime})\right] \end{aligned}$$ I have been staring at this equation for hours with no progress. I hope I can get some guidance from you guys.

",48724,,2444,,7/23/2021 11:02,7/23/2021 11:02,"How to prove Lemma 1.6 in the book ""Reinforcement Learning: Theory and Algorithms""",,0,0,,,,CC BY-SA 4.0 28772,1,,,7/21/2021 18:17,,1,290,"

Recently I came across the paper that introduces the Vision Transformer (ViT) "AN IMAGE IS WORTH 16X16 WORDS: TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE".

The thing I don't really understand at the moment is what is meant by "mean attention distance".

More specifically in the caption of Figure 11 on page 18 of the paper they state:

" ... Attention distance was computed for 128 example images by averaging the distance
      between the query pixel and all other pixels, weighted by the attention weight. ..."

How can the query be at the pixel level?
Isn't the overall approach of the ViT to divide the input image into patches, which are linearly embedded, combined with a positional embedding and then fed into the transformer encoder?
So shouldn't the attention be at the patch level, not at the pixel level?

I would be very happy if someone could elaborate a bit more on the above sentence, so far I found no deeper explanation.

",48730,,18758,,7/21/2021 22:41,7/21/2021 22:41,Computing the mean attention distance for ViT,,0,0,,,,CC BY-SA 4.0 28775,1,28776,,7/21/2021 21:46,,1,60,"

I am trying to understand the code mechanics when selecting the elite states and elite actions. It appears clear to me that they are those that appear in the episodes with rewards bigger than the threshold.

My question is: should I select state-action pairs by their immediate reward or by the episode reward?

I am applying the method to an environment I have crafted that is interesting to me, and I have been studying an example that applies it to OpenAI Gym's Taxi environment, but I do not fully understand the code.

",33566,,2444,,7/22/2021 10:38,7/22/2021 11:42,"In the cross-entropy method, should I select state-action pairs by their immediate reward or by the episode reward?",,1,6,,,,CC BY-SA 4.0 28776,2,,28775,7/21/2021 22:06,,1,,"

My question is if I should select state-action pairs by their immediate reward or should I select them by the episode reward?

By the return (sum of all rewards) from the whole episode. A lot of decisions made in "good" episodes do not lead to immediate rewards, but instead transition towards states where better rewards are possible.

In retrospect, you do not know whether any single action was a good choice, but with the cross entropy method (CEM) you rely on the fact that on average the better episodes will contain more good decisions than the worse episodes, so you train the policy neural network on the state, action pairs from the elite episodes as if all the decisions were correct. This will not be true, but will hopefully be true more often than by chance, so the policy should improve.
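
A rough sketch of that selection step (my own illustration, assuming you have already collected a list of episodes, each with its total reward and its list of (state, action) pairs; the percentile value is arbitrary):

import numpy as np

def select_elites(episodes, percentile=70):
    # episodes: list of (total_reward, [(state, action), ...]) tuples
    returns = np.array([ep[0] for ep in episodes])
    threshold = np.percentile(returns, percentile)
    elite_states, elite_actions = [], []
    for total_reward, transitions in episodes:
        if total_reward >= threshold:
            for state, action in transitions:
                elite_states.append(state)
                elite_actions.append(action)
    return elite_states, elite_actions

The policy network is then trained on (elite_states, elite_actions) as if every one of those actions were the correct label.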

This can be a noisy, high variance approach with any RL method. CEM is one of the most sensitive to noise and variance. However, the taxi environment is deterministic, and that makes using CEM more reasonable.

",1847,,2444,,7/22/2021 11:42,7/22/2021 11:42,,,,2,,,,CC BY-SA 4.0 28777,1,,,7/21/2021 23:36,,1,274,"

PyTorch documentation provided the following descriptions to the Convolution layers

nn.Conv1d              Applies a 1D convolution over an input signal composed of several input planes.

nn.Conv2d              Applies a 2D convolution over an input signal composed of several input planes.

nn.Conv3d              Applies a 3D convolution over an input signal composed of several input planes.

nn.ConvTranspose1d     Applies a 1D transposed convolution operator over an input image composed of several input planes.

nn.ConvTranspose2d     Applies a 2D transposed convolution operator over an input image composed of several input planes.

nn.ConvTranspose3d     Applies a 3D transposed convolution operator over an input image composed of several input planes.

If you observe the descriptions on the right side, each description is of the form "Applies an operation over an input signal/image composed of several input planes." It is not just confined to convolution layers; the same phrase has been used for several other layers, including pooling layers and a normalization layer.

I have a doubt about the phrase "input planes" used here.

What is the meaning of "input plane" here? Does it refer to a geometric plane or something else?

",18758,,18758,,7/22/2021 22:41,7/23/2021 8:10,What does 'input planes' mean in the phrase 'input signal/image composed of several input planes'?,,1,0,,,,CC BY-SA 4.0 28778,1,,,7/22/2021 1:23,,1,854,"

I have seen tutorials online saying that you should do data augmentation AFTER doing the train/val/test split. However, when I go online to read some research papers, I see numerous instances of authors saying that they first do data augmentation on the dataset and then split it because they don't have enough data. Is it just that these are silly mistakes, even for papers with many citations, or is this acceptable?

Example: Research paper. They say: "Among these selected 480 images, 94 images were collected while changing the viewing angle, including images of 30 young apples, 32 expanding apples, and 32 ripe apples. These 480 images were then expanded to 4800 images using data augmentation methods, yielding the training dataset. The training dataset is used to train the detection model. The remaining 480 images are used as the test dataset to verify the detection performance of the YOLOV3-dense model".

",48734,,48734,,7/22/2021 2:43,7/22/2021 7:56,Train Validation Test Splitting After or Before Data Augmentation?,,2,0,,,,CC BY-SA 4.0 28779,1,,,7/22/2021 1:59,,2,43,"

Does anyone know where I can find information about how much research is done on ANNs nowadays?

I've checked in this document Redes Neuronales: Conceptos básicos y aplicaciones, Universidad Tecnológica Nacional, México (2001) by D. J. Matich, that "nowadays research is uncountable" but that was in 2001.

I found nothing else in my further Google searches. Then I consulted Google Scholar, and by clicking on the option "Since 2021" it displayed 38,800 results, but, AFAIK, that includes a lot of different types of documents, e.g. books.

",44999,,44999,,7/23/2021 21:02,7/23/2021 21:02,"How much research, approximately, is done in ANNs?",,0,2,,,,CC BY-SA 4.0 28780,2,,28778,7/22/2021 2:00,,1,,"

In my personal experience, that depends. Augmenting data for training purposes is valid, and can even improve performance, as you may be aware. For testing purposes, it may be valid. Let me give you two examples when that may be the case:

Facial Recognition. Imagine that you have an augmentation function that can change the face pose (left/right pose, for simplicity). You may want to include augmented pose images for testing your models robustness.

The paper you mentioned. In this case, you have apple detection. As the authors in [1] say:

Apples in orchards were detected and the growth stages of apples were judged. Since the angle and intensity of sunlight illumination varies greatly during the day, whether the neural network can process the images collected at different time of the day depends on the integrity of the training dataset. In order to enhance the richness of the experimental dataset, the collected images were pre-processed in terms of colour, brightness, rotation, and image definition.

After this brief introduction, the authors proceed into discussing the augmentation types they used for enhancing the richness of the dataset. As for the case of Facial Recognition, augmenting the test data follows the same idea of having a diverse testing data.

References

[1] Tian, Y., Yang, G., Wang, Z., Wang, H., Li, E., & Liang, Z. (2019). Apple detection during different growth stages in orchards using the improved YOLO-V3 model. Computers and electronics in agriculture, 157, 417-426.

",48703,,,,,7/22/2021 2:00,,,,0,,,,CC BY-SA 4.0 28781,2,,28744,7/22/2021 2:43,,1,,"

I think the answer to your question is much more a rule of thumb than a precise analytical answer. First of all, I would like to remark that Batch Normalization [1] is most commonly applied to convolutional layers, constituting what is called a "convolutional block" (Convolution + Batch Normalization + Activation). Thus, to give you an idea of where to put the normalization layers, I will analyze three papers that make use of Batch Normalization.

Unsupervised representation learning with deep convolutional generative adversarial networks [2]. In Section 3, Approach and Model Architecture, the authors make the following remark:

Third is Batch Normalization [1] which stabilizes learning by normalizing the input to each unit to have zero mean and unit variance. This helps deal with training problems that arise due to poor initialization and helps gradient flow in deeper models. This proved critical to get deep generators to begin learning, preventing the generator from collapsing all samples to a single point which is a common failure mode observed in GANs. Directly applying batchnorm to all layers however, resulted in sample oscillation and model instability. This was avoided by not applying batchnorm to the generator output layer and the discriminator input layer.

thus, the authors have experimented with Batch Normalization on all layers, but this approach resulted in model instability. They ultimately avoided Batch Normalization on the Generator's output layer and the Discriminator's input layer. These correspond to "raw image layers" (the output of the generator is an image, as is the input of the discriminator). Using your notation, it would be something like:

$$G: z \rightarrow (C + BN + A)_{1} \rightarrow (C + A)_{2} \rightarrow O$$ $$D: I \rightarrow (C + A)_{1} \rightarrow (C + BN + A)_{2} \rightarrow h_{1} \rightarrow h_{2} \rightarrow O$$

where $G$ stands for Generator, $D$ for Discriminator, $C$ for convolution, $BN$ for batchnorm, $A$ for activation and $h_{i}$ for dense layers.

Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising [3]. In Figure 1, the authors show their proposed architecture, which applies the same logic: neither the first convolutional layer nor the last uses Batch Normalization. Although the authors make their point in justifying the usage of residual learning + Batch Normalization, the paper does not justify this architectural choice. The network would look something like this:

$$I \rightarrow (C + A)_{1} \rightarrow (C + BN + A)_{2} \rightarrow \cdots \rightarrow (C + BN + A)_{n} \rightarrow C \rightarrow O$$

Deep Residual Learning for Image Recognition [4]. This paper proposes a different scheme, as it applies Batch Normalization after each convolution operation on convolutional layers (thus including the input).

To sum up, some papers use it only on the "hidden convolutional blocks", others on all of them. My advice is that, if you have the time, you should compare the two approaches. Remark: maybe I am unaware of some further development on the matter that gives a precise argument in favor of either of these approaches.
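
As a concrete illustration (a hypothetical PyTorch sketch, not code from any of the cited papers), a "convolutional block" with the option of dropping Batch Normalization on the first and last layers, as in [2] and [3]:

import torch.nn as nn

def conv_block(in_ch, out_ch, use_bn=True):
    layers = [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)]
    if use_bn:
        layers.append(nn.BatchNorm2d(out_ch))
    layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

net = nn.Sequential(
    conv_block(3, 64, use_bn=False),              # first block: no batchnorm
    conv_block(64, 64, use_bn=True),              # hidden blocks: Conv + BN + Activation
    conv_block(64, 64, use_bn=True),
    nn.Conv2d(64, 3, kernel_size=3, padding=1),   # last layer: plain convolution
)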

References

[1] Ioffe, S., & Szegedy, C. (2015, June). Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning (pp. 448-456). PMLR.

[2] Radford, A., Metz, L., & Chintala, S. (2015). Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434.

[3] Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2017). Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE transactions on image processing, 26(7), 3142-3155.

[4] He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

",48703,,48703,,7/22/2021 2:50,7/22/2021 2:50,,,,0,,,,CC BY-SA 4.0 28782,1,,,7/22/2021 3:58,,1,352,"

The basic layers for performing convolution operations 1,2,3 in PyTorch are

nn.Conv1d:   Applies a 1D convolution over an input signal composed of several input planes.

nn.Conv2d:   Applies a 2D convolution over an input signal composed of several input planes.

nn.Conv3d:   Applies a 3D convolution over an input signal composed of several input planes.

Along with them, there are lazy versions 1,2,3 of each of the aforementioned layers. They are

nn.LazyConv1d:    A torch.nn.Conv1d module with lazy initialization of the in_channels argument of the Conv1d that is inferred from the input.size(1).

nn.LazyConv2d:    A torch.nn.Conv2d module with lazy initialization of the in_channels argument of the Conv2d that is inferred from the input.size(1).

nn.LazyConv3d:    A torch.nn.Conv3d module with lazy initialization of the in_channels argument of the Conv3d that is inferred from the input.size(1).

We can observe from the description of the lazy layers that the in_channels argument undergoes lazy initialization. Along with it, the attributes that will be lazily initialized are weight and bias.

Lazy initialization is a performance optimization where you defer (potentially expensive) object creation until just before you actually need it. Lazy initialization is primarily used to improve performance, avoid wasteful computation, and reduce program memory requirements.

Since in_channels, weight and bias undergo lazy initialization in the lazy convolution layers of PyTorch, I am guessing that there may be cases in which the layers can perform the convolution operation without needing in_channels, weight and bias, or bypassing some of them.

Am I guessing correctly? Are there any cases in which a convolution operation can be done without initializing the weights or the number of input channels? If not, what do we gain by such lazy initialization?

Is it purely an implementation technique to postpone initialization of those three quantities till the actual execution of the convolution operation in order to use resources minimally?

",18758,,18758,,1/17/2022 8:10,1/17/2022 8:10,"Is there any gain by lazy initialization of weights, biases and number of input channels for a convolution operation?",,0,0,,,,CC BY-SA 4.0 28784,2,,28732,7/22/2021 5:42,,0,,"

It’s a tradeoff allowing you to fit a larger model into a fixed RAM budget (ie the size of your GPU). Whether this is a good tradeoff is model- and data-specific, but anecdotally I’ve had good luck with it and usually use half precision to good effect (NLP, mostly).

",13360,,,,,7/22/2021 5:42,,,,1,,,,CC BY-SA 4.0 28786,1,,,7/22/2021 7:44,,1,186,"

There are fourteen convolution-related layers in PyTorch. Among them, six perform convolution and another six perform transposed convolution. The remaining two are the fold and unfold operations.

The PyTorch documentation itself provides this link in order to visualize the operations, so that convolution and transposed convolution are easy to understand. You can look at the visuals and easily understand those variants of the convolution operation.

Are there any such visuals (e.g. a diagram or animation) or any other resources available to visualize and understand the remaining two operations: fold and unfold?

",18758,,2444,,7/23/2021 11:23,7/23/2021 11:23,"Is there any animation that illustrates the ""fold"" and ""unfold"" operations of convolutional layers?",,0,1,,,,CC BY-SA 4.0 28787,2,,28778,7/22/2021 7:56,,1,,"

I personally can’t think of any good reason to apply data augmentation before splitting the dataset, though one may exist. The issue is that if you augment first and split later, you risk introducing unwanted correlations between your training and test datasets.

In the paper you linked it sounds as if the training data is all procedurally derived from the test set, which makes it hard to trust the test accuracy. How do you know the network is doing any more than picking up some basic (non-generalizable) similarity between the various augmentations and their source input?

",42108,,,,,7/22/2021 7:56,,,,0,,,,CC BY-SA 4.0 28788,1,,,7/22/2021 9:03,,0,35,"

Does anyone have experience with using Cosine Similarity for text classification? I see a number of articles on how to find cosine similarity between documents using Doc2Vec, Gensim, etc.

I have a binary classification problem where I want to try out cosine similarity. I do know how to calculate it, but all the articles that I see only explain up to the point of calculating it between two documents.

Right now, I am planning to do this.

  1. Calculate the cosine similarity of 'my paragraph' (the one that I want to classify) with all samples in class_i (their class is known). Then take the average (call that avg_i)

  2. Calculate the cosine similarity of my paragraph (the one that I want to classify) with all samples in class_o (their class is known). Then take the average (call that avg_o)

  3. Compare avg_i and avg_o and then predict the class for 'my paragraph'

That sounds like a very manual way of doing it. Is there some better/widely used way of doing it?

",48740,,2444,,7/24/2021 12:44,7/24/2021 12:44,How to calculate cosine similarity for classification when you have say 10000 samples belonging to two classes have a bunch of samples,,0,4,,,,CC BY-SA 4.0 28791,1,28830,,7/22/2021 16:16,,0,61,"

I would like to train an LSTM neural network to either "approve" or "reject" a string based on the word-type sequence. For instance: "Mike's Airplane" would output "approved", but "Airplane Mike's" would output "reject". My method for doing this is to decompose the string into an array of words. eg.

["Mike's", "Airplane"]

, then convert the array of words to an array of word-types since the actual word is irrelevant. The word types (pronoun, noun, adjective etc.) are defined constants having numerical values. eg.

const wordtypes={propernoun:1, adjective:2, noun:3, ownername:4};
console.log(wordtypes.propernoun); // 1

Mike's Fast Airplane is

["Mike's", "Fast", "Airplane"] 

which becomes:

input:[properNoun, adjective, noun]
output: "approve"

properNoun represents the first word(Mike's), adjective the second word(Fast), and noun the third word(Airplane).

I would then like to use this array to train a Neural Network so that it can approve/reject other word-type sequences.

I am concerned with the methodology/algorithm rather than the syntax; I'm extremely new to Machine Learning and Artificial Neural Networks, so, I am using brain.js and NodeJS because they're relatively easy to use.

  • I would like to input multiple parameters for a single word because many words have multiple word types (depending on the context). For example, a word can be both a "noun" and a "verb". How do I represent this input?

  • Is this a good application for LSTM? Or is there a better-suited ML
    algorithm? My dilemma is in deriving the proper inputs & methodology to effectively train the Neural Network.

  • How is my methodology for accomplishing this approval system?

",48743,,,,,8/24/2021 20:05,Using a Neural Network (LSTM) to approve/reject word-type sequences,,1,2,,,,CC BY-SA 4.0 28794,2,,28249,7/22/2021 21:21,,0,,"

That behavior can be expected depending upon how you set your thresholds, as those are what determine whether a node is explored and assigned a connection. Assuming empty networks don't perform well in your environment, genomes that don't find any connections will likely go extinct early on.

",20044,,,,,7/22/2021 21:21,,,,0,,,,CC BY-SA 4.0 28796,1,,,7/22/2021 22:24,,1,261,"

PROBLEM

I'm writing a Monte-Carlo tree search algorithm to play chess in Python. I replaced the simulation stage with a custom evaluation function. My code looks perfect but for some reason acts strange. It recognizes instant wins easily enough but cannot recognize checkmate-in-2 moves and checkmate-in-3 moves positions. Any ideas?

WHAT I'VE TRIED

I tried giving it more time to search but it still cannot find the best move even when it leads to a guaranteed win in two moves. However, I noticed that results improve when I turn off the custom evaluation and use classic Monte Carlo Tree Search simulation. (To turn off custom evaluation, just don't pass any arguments into the Agent constructor.) But I really need it to work with custom evaluation because I am working on a machine learning technique for board evaluation.

I tried printing out the results of the searches to see which moves the algorithm thinks are good. It consistently ranks the best move in mate-in-2 and mate-in-3 situations among the worst. The rankings are based on the number of times the move was explored (which is how MCTS picks the best moves).

MY CODE

I've included the whole code because everything is relevant to the problem. To run this code, you may need to install python-chess (pip install python-chess).

I've struggled with this for more than a week and it's getting frustrating. Any ideas?

import math
import random
import time

import chess
import chess.engine


class Node:

    def __init__(self, state, parent, action):
        """Initializes a node structure for a Monte-Carlo search tree."""
        self.state = state
        self.parent = parent
        self.action = action

        self.unexplored_actions = list(self.state.legal_moves)
        random.shuffle(self.unexplored_actions)
        self.colour = self.state.turn
        self.children = []
        
        self.w = 0 # number of wins
        self.n = 0 # number of simulations

class Agent:
    
    def __init__(self, custom_evaluation=None):
        """Initializes a Monte-Carlo tree search agent."""
        
        if custom_evaluation:
            self._evaluate = custom_evaluation

    def mcts(self, state, time_limit=float('inf'), node_limit=float('inf')):
        """Runs Monte-Carlo tree search and returns an evaluation."""

        nodes_searched = 0
        start_time = time.time()

        # Initialize the root node.
        root = Node(state, None, None)

        while (time.time() - start_time) < time_limit and nodes_searched < node_limit:
            
            # Select a leaf node.
            leaf = self._select(root)

            # Add a new child node to the tree.
            if leaf.unexplored_actions:
                child = self._expand(leaf)
            else:
                child = leaf

            # Evaluate the node.
            result = self._evaluate(child)

            # Backpropagate the results.
            self._backpropagate(child, result)

            nodes_searched += 1

        result = max(root.children, key=lambda node: node.n) 

        return result

    def _uct(self, node):
        """Returns the Upper Confidence Bound 1 of a node."""
        c = math.sqrt(2)

        # We want every WHITE node to choose the worst BLACK node and vice versa.
        # Scores for each node are relative to that colour.
        w = node.n - node.w

        n = node.n
        N = node.parent.n

        try:
            ucb = (w / n) + (c * math.sqrt(math.log(N) / n))
        except ZeroDivisionError:
            ucb = float('inf')

        return ucb

    def _select(self, node):
        """Returns a leaf node that either has unexplored actions or is a terminal node."""
        while (not node.unexplored_actions) and node.children:
            # Pick the child node with highest UCB.
            selection = max(node.children, key=self._uct)
            # Move to the next node.
            node = selection
        return node

    def _expand(self, node):
        """Adds one child node to the tree."""
        # Pick an unexplored action.
        action = node.unexplored_actions.pop()
        # Create a copy of the node state.
        state_copy = node.state.copy()
        # Carry out the action on the copy.
        state_copy.push(action)
        # Create a child node.
        child = Node(state_copy, node, action)
        # Add the child node to the list of children.
        node.children.append(child)
        # Return the child node.
        return child

    def _evaluate(self, node):
        """Returns an evaluation of a given node."""
        # If no custom evaluation function was passed into the object constructor, 
        # use classic simulation.
        return self._simulate(node)

    def _simulate(self, node):
        """Randomly plays out to the end and returns a static evaluation of the terminal state."""
        board = node.state.copy()
        while not board.is_game_over():
            # Pick a random action.
            move = random.choice(list(board.legal_moves))
            # Perform the action.
            board.push(move)
        return self._calculate_static_evaluation(board)

    def _backpropagate(self, node, result):
        """Updates a node's values and subsequent parent values."""
        # Update the node's values.
        node.w += result.pov(node.colour).expectation()
        node.n += 1
        # Back up values to parent nodes.
        while node.parent is not None:
            node.parent.w += result.pov(node.parent.colour).expectation()
            node.parent.n += 1
            node = node.parent

    def _calculate_static_evaluation(self, board):
        """Returns a static evaluation of a *terminal* board state."""
        result = board.result(claim_draw=True)

        if result == '1-0':
            wdl = chess.engine.Wdl(wins=1000, draws=0, losses=0)
        elif result == '0-1':
            wdl = chess.engine.Wdl(wins=0, draws=0, losses=1000)        
        else:
            wdl = chess.engine.Wdl(wins=0, draws=1000, losses=0)

        return chess.engine.PovWdl(wdl, chess.WHITE)


def custom_evaluation(node):
    """Returns a static evaluation of a board state."""

    board = node.state
    
    # Evaluate terminal states.
    if board.is_game_over(claim_draw=True):
        result = board.result(claim_draw=True)
        if result == '1-0':
            wdl = chess.engine.Wdl(wins=1000, draws=0, losses=0)
        elif result == '0-1':
            wdl = chess.engine.Wdl(wins=0, draws=0, losses=1000)        
        else:
            wdl = chess.engine.Wdl(wins=0, draws=1000, losses=0)

        return chess.engine.PovWdl(wdl, chess.WHITE)
    
    # Evaluate material.
    material_balance = 0
    material_balance += len(board.pieces(chess.PAWN, chess.WHITE)) * +100
    material_balance += len(board.pieces(chess.PAWN, chess.BLACK)) * -100
    material_balance += len(board.pieces(chess.ROOK, chess.WHITE)) * +500
    material_balance += len(board.pieces(chess.ROOK, chess.BLACK)) * -500
    material_balance += len(board.pieces(chess.KNIGHT, chess.WHITE)) * +300
    material_balance += len(board.pieces(chess.KNIGHT, chess.BLACK)) * -300
    material_balance += len(board.pieces(chess.BISHOP, chess.WHITE)) * +300
    material_balance += len(board.pieces(chess.BISHOP, chess.BLACK)) * -300
    material_balance += len(board.pieces(chess.QUEEN, chess.WHITE)) * +900
    material_balance += len(board.pieces(chess.QUEEN, chess.BLACK)) * -900

    # TODO: Evaluate mobility.
    mobility = 0

    # Aggregate values.
    centipawn_evaluation = material_balance + mobility

    # Convert evaluation from centipawns to wdl.
    wdl = chess.engine.Cp(centipawn_evaluation).wdl(model='lichess')
    static_evaluation = chess.engine.PovWdl(wdl, chess.WHITE)

    return static_evaluation


m1 = chess.Board('8/8/7k/8/8/8/5R2/6R1 w - - 0 1') # f2h2
# WHITE can win in one move. Best move is f2-h2.

m2 = chess.Board('8/6k1/8/8/8/8/1K2R3/5R2 w - - 0 1')
# WHITE can win in two moves. Best move is e2-g2.

m3 = chess.Board('8/8/5k2/8/8/8/3R4/4R3 w - - 0 1')
# WHITE can win in three moves. Best move is d2-f2.

agent = Agent(custom_evaluation)

result = agent.mcts(m2, time_limit=30)
print(result)
",48504,,48504,,7/22/2021 23:26,7/22/2021 23:26,Why doesn't this Monte Carlo Tree Search algorithm work properly?,,0,0,,,,CC BY-SA 4.0 28797,1,28801,,7/23/2021 6:24,,0,180,"

Is it possible to train a DL model that will generate a full-resolution 2D image based on a few numbers describing this image, and what type of model or architecture would that be?

What I want to achieve is that I deliver to the model some numbers, for example describing the positions of objects on the screen and a number describing how lit the scene is, and I get back a 2D image with the objects in their correct positions and proper lighting; for one set of input values I will always get the same image (see the image above). These inputs could also be something other than positions and lighting; these are only examples to help visualize what I mean.

This all, of course, assuming that I have a lot of annotated training data that consists of images and labels of the objects' positions and scene lighting values.

EDIT: The final model would be trained on real images taken with a Full HD camera, not on simple shapes like those presented here, which I used only to better explain my question.

",22659,,22659,,7/23/2021 11:08,7/23/2021 14:17,Is it possible to use deep learning to generate a 2D image from a few numerical values?,,1,4,,,,CC BY-SA 4.0 28798,2,,28777,7/23/2021 8:10,,2,,"

Yes, it is a bit misleading. What it really means is input channels, so it would be: nn.Conv2d: Applies a 2D convolution over an input signal composed of several input channels.

So, why not just use channels instead of input planes? Well, initially the major deep learning applications were used for computer vision or image processing approaches. In CV or image processing, each one of the components of the third dimension of an image tensor is called a channel, so an image $I$ would be $I:H \times W \times C$ where $C$ is the number of channels (usually: $C=3$, RGB, or $C=4$, RGBA). So, using the traditional terminology, the last dimension of a tensor would be called channels. However, this terminology is highly coupled to image processing because it assumes the 3D tensor you are processing is an image.
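
For example (a minimal sketch), in PyTorch the "input planes" of nn.Conv2d are simply the channel dimension of the input tensor, which for an RGB image batch of shape (batch, channels, height, width) is the second dimension:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)  # 3 input planes = RGB channels
images = torch.randn(8, 3, 64, 64)   # (batch, channels/planes, height, width)
features = conv(images)
print(features.shape)                # torch.Size([8, 16, 62, 62]) -> 16 output planes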

On the other hand, there has been an increasing number of applications where AI is used for other kinds of input data (the sensor that gathers the data is no longer a camera). I, for example, use deep learning for radar signals. So what happens there? It happens that the image processing terminology no longer applies and, if anything, it is very prone to errors (think that channels in signals can be frequency channels, wave propagation paths...).

So going back to your original question, the PyTorch guys realized that and changed the terminology to a more geometrical description (which in the end is a more abstract terminology that would suit any application). So instead of referring to the last dimension of a tensor as channels, as you can see in any research paper, they took a step forward and used the geometrical description of a tensor (which in the end can encode any kind of 3D information, not only images).

",26882,,,,,7/23/2021 8:10,,,,0,,,,CC BY-SA 4.0 28799,2,,28760,7/23/2021 8:20,,6,,"

XPU is a device abstraction for Intel heterogeneous computation architectures, which can be mapped to CPU, GPU, FPGA and other accelerators. The "X" from XPU is just like a variable, like in maths, so you can do X=C and you get CPU accceleration, or X=G and you get GPU acceleration... That's the intuition behind that abstract name.

In order to integrate a new accelerator you need 2 things:

",26882,,26882,,7/28/2021 8:09,7/28/2021 8:09,,,,0,,,,CC BY-SA 4.0 28800,1,,,7/23/2021 11:05,,-1,37,"

I came across the site https://generated.photos/ that claims to produce images entirely by artificial intelligence. My question is: how does this program work? What mechanism and libraries should I use if I want to do a project like this?

",48760,,18758,,7/23/2021 15:46,7/23/2021 16:14,how produce image of face working with AI,,1,2,,7/24/2021 0:57,,CC BY-SA 4.0 28801,2,,28797,7/23/2021 14:17,,0,,"

Probably closely related to the problem of interest would be something akin to Neural Radiance Fields (NeRF for short) https://www.matthewtancik.com/nerf.

The model takes several images of the scene from different angles and viewpoints and learns a 3D representation of the scene that can be used to sample novel views of this scene (not present in the training data).

The knowledge of the scene is encoded in the weights of an MLP. Namely, the renderer is a function that takes coordinates $(x, y, z)$ and a viewing angle $(\theta, \phi)$ as input, and outputs the color (RGB) and the density $\sigma$.

For your problem, one can modify the network so that it takes, in addition to the coordinates and angles, other properties of the scene, such as lighting (some scalar), etc., and again outputs an (RGB) vector and a density $\sigma$.

In order for the procedure to be successful, I would expect that the model needs data with different lighting and, provided there are enough data points, one could synthesize a view with the given lighting.
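
A toy sketch of such a modified network (my own illustration, not the actual NeRF architecture; treating lighting as a single extra scalar is an assumption):

import torch
import torch.nn as nn

class ConditionedRadianceField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        # input: x, y, z, theta, phi, lighting  ->  output: R, G, B, sigma
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, coords_angles_lighting):
        out = self.net(coords_angles_lighting)
        rgb = torch.sigmoid(out[..., :3])   # colours in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative density
        return rgb, sigma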

The presence of multiple objects can probably be taken into account by passing a set of coordinates $(x_i, y_i, z_i)$, but this seems to complicate things, and the training procedure, a lot.

",38846,,,,,7/23/2021 14:17,,,,1,,,,CC BY-SA 4.0 28803,1,,,7/23/2021 15:59,,2,27,"

Just an idea I am sure I read in a book some time ago, but I can't remember the name.

Given a very large dataset and a neural network (or anything that can learn via something like stochastic gradient descent, passing a subset of samples to modify the model, as opposed to learning from the whole dataset at once), one can train a model for, say, classification.

The idea was a methodology for selecting the samples that the model would learn the most from, so you can spare the network from learning from examples that would cause only small changes to the model, thereby reducing computing time.

I guess an easy methodology would first pick a sample that is similar to a previous one but with a different label, and leave for last the samples that are most similar in both features and label. Does that make sense?

Is there a googleable keyword for what I am talking about?

",48769,,2444,,7/24/2021 11:47,7/24/2021 11:47,Methodologies for passing the best samples for a neural network to learn,,0,2,,,,CC BY-SA 4.0 28804,2,,28800,7/23/2021 16:14,,2,,"

According to the information from the site:

We have built a proprietary dataset by taking tens of thousands of images of people in our studio. These photos are taken in a controlled environment allowing us to make sure that each face has consistent look and quality. After shooting, photos are tagged, categorized, and added to a dataset that is used for machine learning training. In an on-going fashion we feed this dataset into generative adversarial networks to produce faces that have never existed. Further machine learning processes take place after the faces are created in order to identify and remove flawed faces. The final results are made available through our website or API integration.

The algorithm they use is a generative adversarial network. A state-of-the-art architecture today is StyleGAN2 or, a newer version of it, Alias-Free GAN.

Here is a live demo of StyleGAN2: https://thispersondoesnotexist.com/

And here is the official PyTorch implementation: https://github.com/NVlabs/stylegan2
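If you just want to see the overall shape of such a pipeline, below is a minimal, hedged sketch of sampling images from a GAN generator in PyTorch. It is not the StyleGAN2 code; the generator architecture, latent size and image size here are made up purely for illustration:

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Maps a latent vector z to a small RGB image (toy stand-in for a real GAN generator)."""
    def __init__(self, latent_dim=128, img_size=32):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, 3 * img_size * img_size), nn.Tanh(),
        )

    def forward(self, z):
        x = self.net(z)
        return x.view(-1, 3, self.img_size, self.img_size)

# Sampling: draw random latents and decode them into images
g = TinyGenerator()
with torch.no_grad():
    fake_images = g(torch.randn(4, 128))  # 4 generated 3x32x32 images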

",12841,,,,,7/23/2021 16:14,,,,0,,,,CC BY-SA 4.0 28806,2,,28362,7/23/2021 19:33,,1,,"

There is no label for such bounding boxes; they are simply "ignored" during training. You can assign any value to their "labels" and then multiply whatever loss these boxes generate by 0. If there is no loss, there is no gradient from these boxes.

You can do that by defining a count_boxes vector with binary values. Object and background boxes are counted, so their value is 1.0. The remaining "ignored" boxes are marked 0.0. Then multiply this count_boxes vector element-wise with the loss vector your model generated.
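A minimal sketch of that masking idea (the tensor names and values are mine, not from any specific detection library):

import torch

# Per-box losses produced by the model, e.g. shape (num_boxes,)
per_box_loss = torch.tensor([0.7, 1.2, 0.3, 0.9])

# 1.0 for object/background boxes that should contribute, 0.0 for "ignored" boxes
count_boxes = torch.tensor([1.0, 0.0, 1.0, 1.0])

# Element-wise mask: ignored boxes contribute zero loss and hence zero gradient
masked_loss = (per_box_loss * count_boxes).sum() / count_boxes.sum()
print(masked_loss)  # only the non-ignored boxes are averaged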

",48496,,48496,,7/23/2021 19:43,7/23/2021 19:43,,,,2,,,,CC BY-SA 4.0 28807,2,,28421,7/23/2021 22:34,,1,,"

Any possible action, including changing the reward function, would be evaluated through the initial reward function. In order to avoid the scenario you described, a reward function needs to disincentivize changes to itself by giving those changes the lowest possible reward.

",48777,,,,,7/23/2021 22:34,,,,0,,,,CC BY-SA 4.0 28808,1,,,7/24/2021 0:56,,0,59,"

I know the title of this question may raise an eyebrow, but I can't find the technical terms to define or investigate the actual problem.

To demonstrate my problem with a simple hypothetical scenario: let's say you have a dataset pertaining to fruits!

  • The dataset contains $N$ fruits

  • Each fruit has properties $\{p\}$, for example, $p_1$ is type, $p_2$ is color, and $p_3$ is flavour. It is important to note that (i) these properties are common across all fruits (all fruits have the properties above) and (ii) these properties are constant for each individual fruit (for a fruit, $\{p\}_n$ stays constant over time).

  • Each fruit has a time series $\{W_t\}_n$ which relates to, for example, its measured weight over time. It is important to note that the fruits aren't measured at regular intervals, nor at the same intervals as one another. Therefore, each fruit in the dataset will have a different weight time series.

  • Therefore the aggregated dataset will have $\sum_{n=1}^{N} dim(\{W_t\}_n)$ observations

  • Let's assume there is some hidden correlation between the weight of a fruit $\{W_t\}_n$ over time and the fruit properties $\{p\}$.

So the problem is: what model(s) can we use that are able to predict the next weight values $\{W_{t+1}\}$? More formally stated, $f(\{p\}_n, \{W_t\}_n) = \{W_{t+1}\}_n$?

The challenges here are:

  • We want to maintain the 'uniqueness' of each fruit; that is, we can't simply say that if two fruits have the same properties $\{p\}$ then they will have the same weight changes over time. To conceptualize this, imagine things happen to certain fruits during their lifetime; the model is supposed to remember that this has happened to those specific fruits and incorporate that into the prediction.
  • Our measurement device was bought at IKEA and sometimes it provides inaccurate readings, so we can't expect a linear or smooth weight time series per fruit.
  • We don't have a lot of weight measurements, let's say 10 on average, but we have a lot of fruits, let's say 100 000.

I have some experience with vanilla and stacked LSTMs. However, I struggle to apply my understanding of LSTMs to the above-mentioned scenario.

Thank you for reading. I hope this will get the creative juices flowing, or give you a fun mental challenge at least.

",47561,,40434,,7/27/2021 21:36,7/27/2021 21:36,Time series forecasting for multiple objects with common features,,0,2,,,,CC BY-SA 4.0 28809,1,,,7/24/2021 1:14,,2,26,"

There are nineteen types of pooling layers in PyTorch.

Almost all of the layers are documented with corresponding analytical formulae. But analytical formulae are not provided for the fractional max-pooling layers. Instead, the documentation refers to this research paper to explain fractional max pooling. So, I think it may be complex for a newcomer to understand fractional max pooling.

Is there any closed-form analytical formula available for fractional max pooling, as for most of the other pooling layers? If not, is there any simple pseudo-code or visual (diagram or animation) available for this layer?

",18758,,18758,,9/30/2021 0:43,9/30/2021 0:43,Is there any closed form analytical expression to represent fractional max pooling?,,0,1,,,,CC BY-SA 4.0 28810,1,,,7/24/2021 2:46,,1,115,"

Suppose there is a dataset $D$ of images. We have a sufficient number $n$ of images in the dataset, and all the images are of a single class.

Suppose I generated a new image $I$, which is not present in the given dataset, of the same class using a generator neural network. I want to calculate how natural the image $I$ is wrt the dataset $D$

$m(I, D) = $ how natural the image $I$ is with respect to the dataset $D$ of images.

I don't want metrics that are applied to a bunch of generated images. I have only one generated image.


I came up with a naive metric

$m(I, D) = \sum\limits_{x \in D} (x-I)^2 $

where $x-I$, the difference between two images, is defined as the sum of the pixel-wise differences of the two images, i.e., $$x-I = \sum\limits_{x_i \in x, I_i \in I} \|x_i - I_i\|$$

But this measure shows how similar the new image $I$ is to the set of images in my dataset at the pixel level. I want a measure of how natural it is.

",18758,,18758,,7/24/2021 9:32,7/24/2021 12:04,Is there any metric for calculating how natural a single image is given a dataset of the same class images?,,1,0,,,,CC BY-SA 4.0 28811,1,,,7/24/2021 9:29,,2,1321,"

PyTorch provides max pooling and adaptive max pooling.

Both max pooling and adaptive max pooling are defined for three dimensionalities: 1d, 2d and 3d. For simplicity, I am discussing only the 1d case in this question.

For max pooling in one dimension, the documentation provides the formula to calculate the output.

In the simplest case, the output value of the layer with input size $(N,C,L)$ and output $(N,C,L_{out})$ can be precisely described as:

$$out(N_i,C_j,k) = \max\limits_{⁡m=0, \cdots ,kernel\_size−1} input(N_i,C_j,stride×k+m)$$

But, adaptive max pooling has no detailed explanation in the documentation.

What is the fundamental difference between max pooling and adaptive max pooling? Max pooling expects kernel_size and stride as inputs, but adaptive max pooling does not expect them and asks only for the output size. Does it use a kernel and stride to perform the operation? If yes, how does it calculate both?

",18758,,18758,,9/30/2021 0:39,9/30/2021 0:39,What is the fundamental difference between max pooling and adaptive max pooling used in PyTorch,,0,1,,,,CC BY-SA 4.0 28812,2,,28810,7/24/2021 12:04,,1,,"

Evaluating synthetically generated images is challenging and an active area of research. The problem is that the "how natural is an image"-task is not well-defined and subjective.

To evaluate generated images we can define two abstract properties: fidelity and diversity, as we want to generate not only a single high-quality image, but also different ones from the domain.

There are several methods for automating and standardizing the evaluation of generated samples, such as the Inception Score (IS) and the Fréchet Inception Distance (FID). Both approaches utilize a CNN classification model (typically Inception-v3) that is pretrained on a large image classification dataset (ImageNet).

We can then use this pretrained model to classify generated images and calculate the distribution of predicted classes, which should be uniformly distributed for high diversity, and the distribution of predictions on a single class will represent the fidelity. However, this approach does not capture how synthetic images compare to real images.

Instead of comparing images pixel-wise, we can compare their abstract features. CNNs are known to be good at extracting abstract features, so we can use a pretrained CNN to extract a feature embedding from one of the last hidden layers. After this, we can compare the Euclidean or cosine distance between various embeddings, for instance. A better way to compare the similarity between generated and real images is FID. Here is an article on the topic for more details.
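As a rough sketch of the embedding-based comparison (not FID itself), one could take a pretrained torchvision model, strip its classification head, and compare embeddings with cosine similarity; the choice of ResNet-18 and the random stand-in tensors below are just for illustration:

import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained CNN with the classification layer removed -> feature extractor
backbone = models.resnet18(pretrained=True)
backbone.fc = torch.nn.Identity()
backbone.eval()

def embed(images):
    # images: (B, 3, 224, 224), already normalized as the backbone expects
    with torch.no_grad():
        return backbone(images)

real = torch.randn(16, 3, 224, 224)       # stand-in for real images
generated = torch.randn(1, 3, 224, 224)   # the single generated image

# Cosine similarity between the generated image and each real image
sims = F.cosine_similarity(embed(generated), embed(real))
print(sims.mean())  # higher -> closer to the real images in feature space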

",12841,,,,,7/24/2021 12:04,,,,0,,,,CC BY-SA 4.0 28815,1,,,7/24/2021 15:07,,1,41,"

I know three Python libraries that are popular in the deep learning research community: Keras, PyTorch, and TensorFlow. I don't know much about Theano.

This question is not about the efficiency, flexibility or ease of the library for its users. This question is about the usage of the library by the deep learning (academic, research) community.

Which library is used by most of the contemporary researchers? Is there any comparison or stats available among the libraries, based on GitHub implementations or by some other means?

",18758,,2444,,7/25/2021 2:05,8/24/2021 3:09,Are there any stats available on the usage of libraries by deep learning researchers?,,1,1,,,,CC BY-SA 4.0 28816,2,,28815,7/25/2021 2:38,,1,,"

Something that I personally use is Google Trends. This is a very useful tool for verifying the interest of a broad public on some subject. Results can even be refined to include region and/or time span.

For instance, here you can see a comparison for the interest in Tensorflow, Keras and Pytorch over the past 12 months:

",48703,,,,,7/25/2021 2:38,,,,0,,,,CC BY-SA 4.0 28817,1,,,7/25/2021 2:49,,0,81,"

What is the advantage of RL compared with the following simple classic algorithm for the MountainCarEnv, considering that it takes a long time to train an agent just to achieve this simple task?

import gym

envName = 'MountainCar-v0'
env = gym.make(envName)

x, v = state = env.reset()
done = False
maxPotential = False
steps = 0

def computeAction(state):
    global x, v, maxPotential, steps
    xNew, vNew = state
    action = 1
    if xNew < -1.1:
        maxPotential = True
    if not maxPotential:
        if xNew < x: action = 0
        else: action = 2
    else:
        action = 2
    x, v = xNew, vNew
    steps += 1
    return action

while not done:
    state, reward, done, info = env.step(computeAction(state))
    env.render()

print('steps', steps)
  • result: around 100 steps
",48796,,2444,,7/25/2021 11:43,7/25/2021 11:50,What is the advantage of RL compared with my simple classic algorithm for the MountainCarEnv?,,1,2,,,,CC BY-SA 4.0 28818,1,28841,,7/25/2021 4:01,,10,2685,"

Having been studying computer vision for a while, I still cannot understand what the difference between a transformer and attention is.

",48736,,2444,,7/25/2021 11:49,7/26/2021 11:41,"In Computer Vision, what is the difference between a transformer and attention?",,1,0,,,,CC BY-SA 4.0 28819,1,,,7/25/2021 5:25,,1,58,"

To be clear, I'm very uninformed on the topic of alternative learning algorithms to backprop; all my knowledge comes from articles like these: lets-not-stop-at-backprop, backprop-alternatives, we-need-a-better-learning-algorithm. I also don't know exactly how you would arrange a system to find the best learning algorithm it can, or if it's even possible to make something like that with reinforcement learning.

I was thinking that you could take a system and have it generate neural nets in the space of all neural nets and generate rules for how to deal with the weights in the network, and then just let the system run trying to find the best possible arrangement of neurons and training rules such that it can learn how to do x thing very very fast, with very little training.

Is this something that has already been tried or something that isn't possible?

",44117,,2444,,7/25/2021 11:54,7/25/2021 11:54,Why doesn't anyone use reinforcement learning to find the best possible alternative to backpropagation?,,0,0,,,,CC BY-SA 4.0 28820,1,,,7/25/2021 5:48,,0,76,"

Is there a way to select the most important features using PCA? I am not looking for the principal components with the highest scores but a subset of the original features.

",32517,,2444,,7/25/2021 13:51,12/17/2022 19:03,Is there a way to select the subset of most important features using PCA?,,1,0,,,,CC BY-SA 4.0 28821,2,,28817,7/25/2021 8:55,,1,,"

If your goal is to create a controller for the mountain car problem, and you have access to the model, then RL probably offers no advantage over your code. I am saying probably, because I am taking you at your word that the code performs well over multiple tests, and it doesn't matter too much if it does not because there are many equivalent solutions based on analysis of the original problem.

This is the difference:

  • RL techniques find solutions to control problems. Model-free RL techniques can do so without access to the environment model.

  • The classic control code is a solution to a given control problem, found by the code author through analysis of the problem.

The same RL agent that could solve mountain car, could solve similar environments with same state and action space, e.g. an environment similar to mountain car but with multiple hills and valleys, or with alterations to the physics model. The same classic control code would fail and need to be re-written for the new environment.

The mountain car problem is interesting in control theory because it introduces a level of abstraction - the simplest feedback-based control algorithms will fail because moving directly towards the goal does not work. However, it is still a toy problem. The solutions are well understood, and no-one needs to solve it again. Solving it with RL is not necessary, it is a demonstration of learning something through trial and error.

As control problems become more complex, with multiple levels of goals to solve, classic control approaches become more unwieldy. For example, a walking robot has many more variables to manage, and walking systems are more likely to benefit from an automated search for the best controllers, as opposed to analysis and classic control at all levels.

",1847,,1847,,7/25/2021 11:50,7/25/2021 11:50,,,,0,,,,CC BY-SA 4.0 28822,1,28824,,7/25/2021 11:18,,3,209,"

I am reading the paper Tracking-by-Segmentation With Online Gradient Boosting Decision Tree. In Section 2.1, the paper says

Given training examples, $\left\{\left(\mathbf{x}_{i}, y_{i}\right) \mid \mathbf{x}_{i} \in \mathbb{R}^{n}\right.$ and $y_{i} \in$ $\mathbb{R}\}_{i=1: N}, f(\cdot)$ is constructed in a greedy manner by selecting parameter $\theta_{j}$ and weight $\alpha_{j}$ of a weak learner iteratively to minimize an augmented loss function given by $$ \mathcal{L}=\sum_{i=1}^{N} \ell\left(y_{i}, f\left(\mathbf{x}_{i}\right)\right) \equiv \sum_{i=1}^{N} \exp \left(-y_{i} f\left(\mathbf{x}_{i}\right)\right) $$ where an exponential loss function is adopted ${ }^{1}$. The greedy optimization procedure is summarized in Algorithm 1.

I cannot understand the exponential loss function. In my opinion, the loss function should attain its smallest value when $y_i=f(x_i)$. But the loss function above obtains a smaller value as $(-y_i f(x_i))$ becomes smaller.

",48804,,2444,,12/10/2021 9:33,12/10/2021 9:33,Why is the exponential loss used in this case?,,1,0,,,,CC BY-SA 4.0 28824,2,,28822,7/25/2021 14:11,,3,,"

The loss is

$$\mathcal{L}=\sum_{i=1}^{N} \ell\left(y_{i}, f\left(\mathbf{x}_{i}\right)\right) \equiv \sum_{i=1}^{N} \exp \left(-y_{i} f\left(\mathbf{x}_{i}\right)\right),$$

which can also be written as follows

$$\mathcal{L} = \sum_{i=1}^{N} e^{-y_{i} f\left(\mathbf{x}_{i}\right)} \tag{1}\label{1}$$

The important thing to note here is the $-$ in the exponent, which allows us to write \ref{1} as follows (see this)

$$\mathcal{L} = \sum_{i=1}^{N} \frac{1}{e^{y_{i} f\left(\mathbf{x}_{i}\right)}} \tag{2}\label{2}$$

So, the loss becomes smaller the larger the product $y_{i} f\left(\mathbf{x}_{i}\right)$ is. If, for example, $y_{i}$ is negative and $f\left(\mathbf{x}_{i}\right)$ positive (or vice-versa), then \ref{2} would be higher. If both are positive or both are negative, then the loss will be smaller, as we will be summing fractions of the form $\frac{1}{e^k} < 1$ for some positive constant $k$: the higher the $k$, the smaller the loss.
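A quick numerical check of this behaviour (a sketch in Python, with made-up values for $y_i$ and $f(\mathbf{x}_i)$):

import numpy as np

def exp_loss(y, f):
    # Exponential loss for a single example: exp(-y * f(x))
    return np.exp(-y * f)

print(exp_loss(+1.0, +2.0))  # same sign, large margin  -> ~0.135 (small loss)
print(exp_loss(-1.0, -2.0))  # same sign, large margin  -> ~0.135 (small loss)
print(exp_loss(+1.0, -2.0))  # opposite signs           -> ~7.389 (large loss)
print(exp_loss(+1.0, +0.1))  # same sign, small margin  -> ~0.905 (moderate loss)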

So, if you use this loss, it seems that you want that

  1. $y_{i}$ and $f\left(\mathbf{x}_{i}\right)$ have the same sign (either both negative or both positive)
  2. $f\left(\mathbf{x}_{i}\right)$ is as high as possible (note that you cannot change $y_{i}$), which could even be much higher (in terms of magnitude) than $y_{i}$ (not sure why they would want this)

I don't know why they chose this loss function because I didn't read the paper (yet), but it seems to me that this is how you should interpret this loss function.

",2444,,2444,,12/10/2021 9:32,12/10/2021 9:32,,,,0,,,,CC BY-SA 4.0 28825,1,28828,,7/25/2021 14:24,,0,37,"

Per the Google machine-learning glossary, if I have 100 training examples and update my model for each training example, and I train my model for 5 epochs without early stopping, there are 500 iterations in total. Is my understanding correct?

",45689,,,,,7/25/2021 15:50,Is my understanding about the number of iterations correct?,,1,0,,,,CC BY-SA 4.0 28827,2,,17084,7/25/2021 15:45,,6,,"

A relatively recent but interesting paper that discusses this topic in more detail is Reward is enough (Artificial Intelligence, 2021) by David Silver, Satinder Singh, Doina Precup, and Richard S. Sutton (so by some of the godfathers of RL, who are all at DeepMind).

Their reward-is-enough hypothesis (RIEH) (page 4) is

Hypothesis (Reward-is-Enough). Intelligence, and its associated abilities, can be understood as subserving the maximisation of reward by an agent acting in its environment.

This hypothesis is slightly different from the reward hypothesis (RH), which states that all goals can be represented by rewards and the achievement of those goals can be viewed or formulated as the maximization of rewards, because the RIEH also states that the abilities needed to achieve the main goal in the environment arise from the maximization of the reward, so the RIEH is a stronger hypothesis than the RH.

The authors give examples to explain the RIEH (emphasis mine).

Sophisticated abilities may arise from the maximisation of simple rewards in complex environments. For example, the minimisation of hunger in a squirrel’s natural environment demands a skilful ability to manipulate nuts that arises from the interplay between (among other factors) the squirrel’s musculoskeletal dynamics; objects such as leaves, branches, or soil that the squirrel or nut may be resting upon, connected to, or obstructed by; variations in the size and shape of nuts; environmental factors such as wind, rain, or snow; and changes due to ageing, disease or injury. Similarly, the pursuit of cleanliness in a kitchen robot demands a sophisticated ability to perceive utensils in an enormous range of states that includes clutter, occlusion, glare, encrustation, damage, and so on.

They also try to argue why language, perceptron, social intelligence and general intelligence could all arise from the maximization of a single reward signal (e.g. survival).

Moreover, they also say that similar sophisticated abilities associated with intelligence could arise from the maximization of different reward signals, i.e. the emergence of these abilities is robust to the choice of reward objective.

Additionally, they also talk about prior knowledge and learning, but, in my view, they should have emphasized/noted that, for example, perception, without the suitable sensors (inductive bias) cannot emerge: this is not a limitation of the RIEH, as it says nothing about how these abilities actually arise, or the nature of the agent needed for them to arise, or which specific reward signal should be maximized.

In the end, they also conjecture that RL is the main framework that could be used to find out whether these conjectures/speculations are true or not.

They do not go into philosophical arguments, such as the Chinese-Room argument or the problem of consciousness: their argument to address these issues would probably be that any ability (even consciousness, if it's an ability) required to achieve the ultimate goal would arise in the process of maximization of the reward.

",2444,,,,,7/25/2021 15:45,,,,0,,,,CC BY-SA 4.0 28828,2,,28825,7/25/2021 15:50,,1,,"

Updating the model for each training example means a batch size of 1, a.k.a. stochastic gradient descent (SGD).

One iteration is defined as one forward pass, loss calculation, backpropagation, and finally a weight update.

Since batch size is 1, running 5 epochs on 100 training examples with SGD means you will do 500 iterations, yes.

",48523,,,,,7/25/2021 15:50,,,,0,,,,CC BY-SA 4.0 28829,2,,28820,7/25/2021 15:59,,0,,"

There are better methods for selecting the most important features in a supervised setting. Assuming they are not an option, or you're simply interested in PCA:

Say you originally had 100 features, you applied PCA, and the first 10 PCs explain 95% of the variance.

After applying PCA, you can calculate the linear correlations between the top 10 PCs and the original features. I assume some of your features will be highly correlated with some subset of the top 10 PCs. You can draw an arbitrary line and choose the subset of original features that are at least 0.80 linearly correlated with at least one of the top 10 PCs.
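A minimal sketch of that procedure with scikit-learn and NumPy (the 0.80 threshold, 10 components, and the random stand-in data are the arbitrary choices from above):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(500, 100)           # stand-in for your original feature matrix

pca = PCA(n_components=10)
pcs = pca.fit_transform(X)               # shape (500, 10)

# Correlation between every original feature and every principal component
corr = np.corrcoef(X.T, pcs.T)[:100, 100:]   # shape (100, 10)

# Keep features that are strongly correlated with at least one top PC
selected = np.where(np.abs(corr).max(axis=1) >= 0.80)[0]
print(selected)   # indices of the selected original features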

",48523,,,,,7/25/2021 15:59,,,,0,,,,CC BY-SA 4.0 28830,2,,28791,7/25/2021 16:30,,0,,"

You already figured out much of the problem. You can solve it with sequence models like LSTM/GRU.

One-hot encode the word types. Assume the types are [properNoun, adjective, noun], as you said. Then "Mike" will be represented as the vector [1,0,0], "fast" as [0,1,0], and "airplane" as [0,0,1].

Putting these together, you will train a model that takes [[1,0,0], [0,1,0], [0,0,1]] as input and returns a binary classification result. Thus you can use an LSTM/GRU with return_sequences = False.

An example with Keras is here.

A few things you should be careful about:

  • I have shown 2D input here, but sequence models take 3D input in the form of (batch_size, time_steps, number_of_features), so you will need to reshape even if you train with batch size 1. More details are here.
  • Prepad the time dimension with the 0 vector, so that the number of time steps is equal for all samples. More details are here.

For multi-word type, you can create multiple data instances and train the network.
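Here is a minimal sketch of that setup in Keras; the two toy "sentences", their lengths, and the labels are made up, and only the shapes matter:

import numpy as np
from tensorflow import keras

# Two toy sentences as sequences of one-hot word types, pre-padded to length 4
X = np.array([
    [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]],   # pad, properNoun, adjective, noun
    [[0, 0, 0], [0, 0, 0], [1, 0, 0], [0, 0, 1]],   # pad, pad, properNoun, noun
], dtype="float32")                                  # shape (batch, time_steps, features)
y = np.array([1, 0])                                 # binary labels

model = keras.Sequential([
    keras.layers.Masking(mask_value=0.0, input_shape=(4, 3)),  # ignore padded steps
    keras.layers.LSTM(16, return_sequences=False),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)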

",48523,,,,,7/25/2021 16:30,,,,0,,,,CC BY-SA 4.0 28832,2,,28754,7/25/2021 16:36,,0,,"

Look for sequence-to-sequence modelling, a.k.a. seq2seq:
https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html
https://en.wikipedia.org/wiki/Seq2seq

",48523,,,,,7/25/2021 16:36,,,,0,,,,CC BY-SA 4.0 28833,1,,,7/25/2021 17:46,,1,3804,"

I have just dived into deep learning for NLP, and now I'm learning how the BERT model works. What I found odd is why the BERT model needs to have an attention mask. As clearly shown in this tutorial https://huggingface.co/transformers/glossary.html:

from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-cased")

sequence_a = "This is a short sequence."
sequence_b = "This is a rather long sequence. It is at least longer than the sequence A."

encoded_sequence_a = tokenizer(sequence_a)["input_ids"]
encoded_sequence_b = tokenizer(sequence_b)["input_ids"]

padded_sequences = tokenizer([sequence_a, sequence_b], padding=True)

Output of padded sequences input ids:

padded_sequences["input_ids"]

[[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]]

Output of padded sequence attention mask:

padded_sequences["attention_mask"]
[[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]

In the tutorial, it clearly states that an attention mask is needed to tell the model (BERT) which input ids need to be attended to and which do not (if an element in the attention mask is 1, the model will pay attention to that index; if it is 0, the model will not).

The thing I don't get is: why does BERT have an attention mask in the first place? Doesn't the model need only the input ids, given that you can clearly see the attention_mask has zeros at the same indices where the input_ids are zero? Why does the model need an additional layer of difficulty added?

I know that BERT was created in google's "super duper laboratories", so I think the creators had something in their minds and had a strong reason for creating an attention mask as a part of the input.

",40591,,2444,,7/27/2021 16:52,10/12/2021 8:50,Isn't attention mask for BERT model useless?,,1,0,,,,CC BY-SA 4.0 28837,1,,,7/26/2021 3:10,,1,22,"

Consider the following neural network with $\ell$ layers.

$$i_0 \rightarrow h_1 \rightarrow h_2 \rightarrow h_3 \cdots \rightarrow h_{\ell-1} \rightarrow o_{\ell} ,$$

where $i, h, o$ stands for input, hidden and output layer respectively.

In general, an input passes from $i_0$ to $o_{\ell}$, which is known as the forward pass. And then the weight updating happens from $o_{\ell}$ to $i_0$ which is called backward pass.

I want to know whether the following mechanism exists in literature assuming that all layers have the same input and output dimensions.

For each iteration

  1. Select a subset $L \subseteq \{0, 1, 2, 3, \cdots, \ell\}$ randomly.
  2. The input passes only through the layers whose indices are present in $L$, i.e., the forward pass happens by dropping some layers.
  3. Update the weights of the layers whose indices are in $L$, i.e., update the weights of the layers that participated in step (2).

What is the name of the technique mentioned above, if it is present in literature?

",18758,,2444,,7/26/2021 12:28,7/26/2021 12:28,Is there any existing mechanism that allows us to pass input from randomly selected layers of neural network per iteration?,,0,4,,,,CC BY-SA 4.0 28838,1,,,7/26/2021 6:57,,0,23,"

I read the tutorial of LSTM from here. However, I have certain doubts that I need to address.

  1. Since we use the true labels and do not remove anything from the original data, how is it possible for the LSTM model's predicted output to match the real labels when the LSTM throws away (forgets) data?

  2. And how do we determine the number of output neurons?

According to my understanding, in word-to-word prediction, the number of one cell's outputs is the number of words (existing in the vocabulary).

",41756,,2444,,7/26/2021 11:16,7/26/2021 11:16,How will actual labels be matched with predicted labels when LSTM discards data even from current time stamp input data?,,0,5,,,,CC BY-SA 4.0 28840,1,,,7/26/2021 9:29,,1,67,"

I am working on a task of generating synthetic data to help the training of my model. This means that the training is performed on synthetic + real data, and tested on real data.

I was told that batch normalization layers might be trying to find weights that are good for all of the data while training, which is a problem since the distribution of my synthetic data is not exactly equal to the distribution of the real data. So, the idea would be to have different 'copies' of the weights of the batch normalization layers, so that the neural network estimates different weights for synthetic and real data, and uses just the weights of the real data for evaluation.

My question is, how to perform batch normalization in the aforementioned case? Is there already an implementation of batch norm layers in PyTorch that solves the problem?

",48816,,48816,,7/29/2021 14:24,7/29/2021 14:24,Batch normalization for multiple datasets?,,0,8,,,,CC BY-SA 4.0 28841,2,,28818,7/26/2021 11:41,,8,,"

The original transformer is a feedforward neural network (FFNN)-based architecture that makes use of an attention mechanism. So, this is the difference: an attention mechanism (in particular, a self-attention operation) is used by the transformer, which is not just this attention mechanism, but it's an encoder-decoder architecture, which makes use of other techniques too: for example, positional encoding and layer normalization. In other words, the transformer is the model, while the attention is a technique used by the model.

The paper that introduced the transformer Attention Is All You Need (2017, NIPS) contains a diagram of the transformer and the attention block (i.e. the part of the transformer that does this attention operation).

Here's the diagram of the transformer.

Here's the picture of the attention mechanism (as you can see from the diagram above, the transformer used the multi-head attention on the right).
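For a concrete sense of the attention operation itself (not the whole transformer), here is a minimal sketch of scaled dot-product attention in PyTorch, following the formula $\text{Attention}(Q, K, V) = \text{softmax}(QK^T/\sqrt{d_k})V$ from the paper; the tensor sizes are arbitrary:

import math
import torch

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (batch, seq_len, d_k)
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)   # (batch, seq_len, seq_len)
    weights = torch.softmax(scores, dim=-1)              # attention weights
    return weights @ V                                    # weighted sum of the values

# Toy self-attention: queries, keys and values all come from the same sequence
x = torch.randn(2, 5, 64)   # batch of 2 sequences, length 5, dimension 64
out = scaled_dot_product_attention(x, x, x)
print(out.shape)            # torch.Size([2, 5, 64])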

One thing to keep in mind is that the idea of attention is not novel to the transformer, given that similar ideas had already been used in previous works and models, for example, here, although the specific attention mechanisms are different.

Of course, you should read the mentioned paper for more details about the transformer, the attention mechanism and the diagrams above.

",2444,,,,,7/26/2021 11:41,,,,0,,,,CC BY-SA 4.0 28842,1,28844,,7/26/2021 12:23,,2,175,"

I have been trying to fit a neural network to a simple function: the mass of a sphere. I have tried different architectures, for example, a single hidden layer and two hidden layers, always with 128 neurons each, training them for 5000 epochs. The code is the usual one. Just in case, I publish one of them:

model = keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])
                        ,keras.layers.Dense(128, activation="relu")
                        ,keras.layers.Dense(1, activation="relu")])
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
history = model.fit(x, y, validation_split=0.2, epochs=5000)

The results are shown in the graphs.

I suspect that I am making an error somewhere, because I have seen that deep learning is able to match complex functions with far fewer epochs. I would appreciate any hint to fix this problem and obtain a good fit with the deep learning model.

In order to make it clear I post the graph's code.

rs = [x for x in range(20)]

def masas_circulo(x):
    masas_circulos = []
    rs = [r for r in range(x)]
    for r in rs:
        masas_circulos.append(model.predict([r])[0][0])
    return masas_circulos

masas_circulos = masas_circulo(20)
masas_circulos
esferas = [4/3*np.pi*r**3 for r in range(20)]
import matplotlib.pyplot as plt
plt.plot(rs, masas_circulos, label="DL")
plt.plot(rs, esferas, label="Real");
plt.title("Mass of a sphere.\nDL (1hl,128 n,5000 e) vs ground_truth")
plt.xlabel("Radius")
plt.ylabel("Sphere")
plt.legend();
",33566,,33566,,7/26/2021 13:14,8/27/2021 19:53,Not able to find a good fit for a simple function with neural networks,,2,3,,,,CC BY-SA 4.0 28844,2,,28842,7/26/2021 13:32,,2,,"

You're trying to learn a cubic function that explodes in values and your issue is scaling. I have been able to learn a better approximation by scaling data and using tanh as activation function.

Code and result are as below:

The saturation around X=100 happens because of the tanh activation. ReLU will not work better because of the negative values that result from scaling. You can try playing with the Leaky ReLU activation and various alpha values.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

from tensorflow import keras
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

def mass_of_sphere(R):
    return (4/3) * np.pi * (R**3)

X = np.linspace(1, 120, 500000)
y = [mass_of_sphere(x) for x in X]

X = np.array(X).reshape(-1, 1)
y = np.array(y)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)

scaler_X = StandardScaler()
X_train = scaler_X.fit_transform(X_train)
X_test = scaler_X.transform(X_test)

scaler_y = StandardScaler()
y_train = scaler_y.fit_transform(y_train.reshape(-1, 1))
y_test = scaler_y.transform(y_test.reshape(-1, 1)).reshape(-1)

model = keras.Sequential([keras.layers.Dense(1),
                          keras.layers.Dense(128, activation = "tanh"),
                          keras.layers.Dense(1, activation = "tanh")])

early_stopping = keras.callbacks.EarlyStopping(monitor = 'val_loss', patience = 5)
model.compile(optimizer = 'rmsprop', loss = 'mse')
history = model.fit(X_train, y_train, epochs = 100, callbacks=[early_stopping], 
                    batch_size = 2048, validation_data=(X_test, y_test))

y_hat = scaler_y.inverse_transform(model.predict(X_test)).reshape(-1)
y_test = scaler_y.inverse_transform(y_test.reshape(-1, 1)).reshape(-1)

f, ax = plt.subplots(figsize = (12, 4))
ax.plot(sorted(scaler_X.inverse_transform(X_test).reshape(-1)), sorted(y_test), color = 'blue', label = 'Real')
ax.plot(sorted(scaler_X.inverse_transform(X_test).reshape(-1)), sorted(y_hat), color = 'orange', label = 'DL')
ax.legend()
",48523,,48523,,7/26/2021 13:41,7/26/2021 13:41,,,,4,,,,CC BY-SA 4.0 28845,1,,,7/26/2021 14:19,,0,63,"

I am currently designing a model which takes an input 3D grid and the model output at $t-1$. The model figure is described below.

I have two thoughts on training the model for the above situation.

  • Feed the output at $t-1$ from the ground truth with some noise. During testing, feed the output of the model as the previous output at $t-1$. Maybe we can fine-tune with the model output when the training loss is sufficiently low.

  • Feed model output to next stage as $t-1$ output. But I am not sure if this works.

The situation is similar to RNNs, but I am using 3D CNNs here. I don't know if an RNN can be used here. How can I train such a model?

",48819,,18758,,7/26/2021 14:21,7/26/2021 14:21,Feeding the output back to input in 3D CNN model,,0,2,,,,CC BY-SA 4.0 28846,1,,,7/26/2021 15:05,,1,98,"

Is there a notion of exploration-exploitation tradeoff in dynamic programming (or model-based RL)?

",46214,,2444,,7/26/2021 15:12,7/26/2021 18:54,Is there a notion of exploration-exploitation tradeoff in dynamic programming (or model-based RL)?,,1,2,,,,CC BY-SA 4.0 28848,2,,28846,7/26/2021 18:54,,1,,"

I think there is an implicit notion of it in dynamic programming; say, if you have to make some sort of search over a subset of a state space and you are deciding whether to use BFS (breadth-first search) or DFS (depth-first search), you are at least implicitly thinking about the best way to explore/exploit the state space.

As for model-based RL, yes. There are explicit algorithms that mediate exploration and exploitation. One of them is UCB, the upper confidence bound. One of the best examples of a model-based reinforcement learning algorithm is AlphaGo. The algorithm uses a variation of UCB to explore the state space.
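As a rough illustration (not the exact variant used in AlphaGo), a UCB1-style rule for a multi-armed bandit looks like the sketch below; the exploration constant and the toy counts are my own choices:

import numpy as np

def ucb1_select(counts, mean_rewards, c=2.0):
    """Pick the arm maximizing empirical mean reward plus an exploration bonus."""
    t = counts.sum() + 1
    # Arms never tried get an infinite bonus, so they are explored first
    bonus = np.where(counts > 0,
                     c * np.sqrt(np.log(t) / np.maximum(counts, 1)),
                     np.inf)
    return int(np.argmax(mean_rewards + bonus))

counts = np.array([10, 2, 0])          # how often each arm was pulled
means = np.array([0.5, 0.7, 0.0])      # empirical mean reward per arm
print(ucb1_select(counts, means))      # the untried arm (index 2) is chosen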

",37510,,,,,7/26/2021 18:54,,,,0,,,,CC BY-SA 4.0 28850,1,31550,,7/26/2021 22:06,,2,207,"

This paper uses image augmentation to improve RL algorithms. It contains the following paragraph - "Our approach, DrQ, is the union of the three separate regularization mechanisms introduced above:

  1. transformations of the input image (Section 3.1).
  2. averaging the Q target over K image transformations (Equation (1)).
  3. averaging the Q function itself over M image transformations (Equation (3))."

I do not understand how parts 2 and 3 (Equations 1 and 3) work and would highly appreciate some detailed elaboration on them.

Here are the equations -

",31755,,49455,,9/5/2021 0:50,9/6/2021 17:51,"What do equations 1 and 3 describe in the ""Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels"" paper?",,1,1,0,,,CC BY-SA 4.0 28851,1,,,7/26/2021 23:04,,0,229,"

I am training a segmentation model on 3D data. After around 170 epochs, which took around 4 days, I notice the model is no longer learning and the Dice score is at 0.51. What is the best approach at this point to keep the model learning?
Learning rate: 1e-4
Batch size: 6
optimizer: adamw
loss: generalized dice loss
PS: there is more augmentation in the training data than in the validation data; this might be the reason the validation loss curve is below the training loss curve.

",38120,,,,,7/26/2021 23:04,What to do when model stops learning after some epochs,,0,3,,,,CC BY-SA 4.0 28852,1,,,7/26/2021 23:06,,1,24,"

Padding is a technique used in some of the domains of artificial intelligence.

Data is generally available in different shapes. But in order to pass the data as input to a deep learning model, the model allows only a particular shape of data to pass through it. Hence, there is a need for padding in case the input data shape contains dimensions that are smaller than the dimensions of the model's input. For example, we pad input sentences for an RNN to match the input shape of the RNN model. Sometimes we pad the input data in order to produce an output of a desired shape. For example, padding is used in the convolution operation to keep the size of the feature maps intact.

Is handling these types of shape issues the only purpose of padding? If not, what are the other purposes of padding that are not related to the shape requirements of data?

",18758,,2444,,7/27/2021 16:46,7/27/2021 16:46,Is reconciling shape discrepancies the only purpose of padding?,,1,0,,,,CC BY-SA 4.0 28853,1,,,7/27/2021 1:18,,1,272,"

Generative Adversarial networks (aka GANs) are used for image generation. The phrase image synthesis is also used in literature.

I know that the phrase image generation stands for

An act of generating an image

The formal definition for image synthesis is given by

Image synthesis is the process of artificially generating images that contain some particular desired content.

The only difference I can notice is that image synthesis is a focused image generation. The focus is on the parts of the image generated.

But, I have an issue with the word synthesis. The word "synthesis" has the following meanings

  1. The combination of components or elements to form a connected whole. Often contrasted with analysis
  2. The production of chemical compounds by reaction from simpler materials.
  3. (in Hegelian philosophy) the final stage in the process of dialectical reasoning, in which a new idea resolves the conflict between thesis and antithesis.

It gives me a sense that I need to use the phrase "image synthesis" if I am generating an image by combining some simpler elements, which is not exactly the same as the focused sense given in the formal definition of image synthesis.

Why is the word "synthesis" used in the phrase "image synthesis"? Are we synthesizing/combining anything, which does not happen in other variants of image generation?

",18758,,2444,,7/27/2021 16:39,7/27/2021 16:39,"Is there any difference between ""image generation"" and ""image synthesis""?",,0,0,,,,CC BY-SA 4.0 28854,2,,28852,7/27/2021 1:31,,1,,"

Learning "border effects" is another reason to use padding at least in convolutional neural networks. This paper specifically looks at 2D CNNs for image processing. In my experience, I use pre-padding with 1D CNNs for NLP so my model can learn morphological affixes.

",19703,,,,,7/27/2021 1:31,,,,1,,,,CC BY-SA 4.0 28855,1,28858,,7/27/2021 4:08,,2,367,"

We need to use a loss function for training the neural networks.

In general, the loss function depends only on the desired output $y$ and actual output $\hat{y}$ and is represented as $L(y, \hat{y})$.

As per my current understanding,

Regularization is nothing but using a new loss function $L'(y,\hat{y})$, which must contain a term weighted by $\lambda$ (formally called the regularization term), for training a neural network, and it can be represented as

$$L'(y,\hat{y}) = L(y, \hat{y}) + \lambda \ell(.) $$

where $\ell(\cdot)$ is called the regularization function. Based on the definition of the function $\ell$, there can be different regularization methods.

Is my current understanding complete? Or is there any other technique in machine learning that is also considered a regularization technique? If yes, where can I read about that regularization?

",18758,,2444,,7/27/2021 16:42,7/27/2021 18:47,Does regularization just mean using an augmented loss function?,,2,0,,,,CC BY-SA 4.0 28856,1,,,7/27/2021 5:04,,2,90,"

Say I have these equations:

$$x_1 = x_2 + 2y_1 + b$$ $$x_2 = y_2 + c$$ $$y_1 = z + a$$ $$y_2 = y_3 + d$$ $$z = z_1 + e$$

$x_1$ depends on $x_2$ (depends on $y_2$ (depends on $y_3$)) and $y_1$ (depends on $z$ (depends on $z_1$)).

$x_1$ is my final equation and $y_3$ and $z_1$ are my initial variables.

How do I represent them in a neural network? My final aim is to backtrack from $x_1$ and see whether a change of amount $n$ in $x_1$ resulted from $y_3$ or from $z_1$.

All these variables are item prices in the real world.

My inputs are $z_1$ and $y_3$ and my output is $x_1$. $z_1$ and $y_3$ are prices, and the final output $x_1$ is also a price.

",48833,,18758,,7/27/2021 7:27,7/27/2021 9:09,How to create a neural network from a set of equations?,,1,2,,,,CC BY-SA 4.0 28857,1,,,7/27/2021 6:17,,1,25,"

For my application, I am considering a learning problem where I first simulate a bunch of episodes, say $n$, and then carry out the recursive least squares update, similar to $TD(1)$.

I know that RLS can be used to update the parameters being learned as the data arrive. This can be done efficiently for a single data point, and the derivations are easily available online and also easy to understand.

However, for my case, I am looking for the same equations when the data arrive as a mini-batch and not a single data point at a time. I could not find any material regarding RLS for mini-batches.

According to my understanding the same equations can be also used by appropriately considering matrix dimensions. However I do not know if this is valid.

What are the alternatives to be used?

",48837,,18758,,7/27/2021 7:39,7/27/2021 7:39,Recursive Least squares (RLS) for mini batch,,0,0,,,,CC BY-SA 4.0 28858,2,,28855,7/27/2021 8:08,,3,,"

Regularization is not limited to methods like L1/L2 regularization which are specific versions of what you showed.

Regularization is any technique that prevents the network from overfitting and helps the network generalize better to unseen data. Some other techniques are dropout, early stopping, data augmentation, and limiting the capacity of the network by reducing the number of trainable parameters.
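For example, here is a hedged sketch of a small Keras model combining several of these (an L2 weight penalty, dropout, and early stopping); the layer sizes and hyper-parameter values are arbitrary, and the training data names are placeholders:

from tensorflow import keras
from tensorflow.keras import layers, regularizers

model = keras.Sequential([
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),  # L2 penalty on the weights
    layers.Dropout(0.5),                                     # randomly drop units during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Early stopping: stop training when the validation loss stops improving
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                           restore_best_weights=True)
# model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=[early_stop])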

",48523,,,,,7/27/2021 8:08,,,,1,,,,CC BY-SA 4.0 28859,2,,28856,7/27/2021 9:09,,1,,"

All modern frameworks for deep learning (PyTorch, JAX, TensorFlow) support automatic differentiation. These operations can be easily implemented. Here I show how it would look in PyTorch:

import torch
import torch.nn as nn

class Net(nn.Module):

    def __init__(self):
        super().__init__()

        self.a = nn.Parameter(torch.randn(1))
        self.b = nn.Parameter(torch.randn(1))
        self.c = nn.Parameter(torch.randn(1))
        self.d = nn.Parameter(torch.randn(1))
        self.e = nn.Parameter(torch.randn(1))

    def forward(self, z1, y3):
        z = z1 + self.e
        y2 = y3 + self.d
        y1 = z + self.a
        x2 = y2 + self.c
        x1 = x2 + 2 * y1 + self.b
        return x1

And the use case is the following, say:

net = Net()
net(torch.ones(1), 2 * torch.ones(1))
",38846,,,,,7/27/2021 9:09,,,,0,,,,CC BY-SA 4.0 28860,2,,28740,7/27/2021 14:05,,0,,"

Very interesting paper, I did not know you could get such results using traditional image processing.

Question 1

From the paper:

Since only average feature vector values of $R_1$ and $R_2$ need to be found, we use the integral image approach as used in [14] for computational efficiency. A change in scale is affected by scaling the region $R_2$ instead of scaling the image. Scaling the filter instead of the image allows the generation of saliency maps of the same size and resolution as the input image

So the saliency maps at different scales are just saliency maps with different $R_2$ filter size. So they vary the sizes as they say in:

For an image of width $w$ pixels and height $h$ pixels, the width of region $R_2$, namely $w_{R_2}$, is varied as: $w/2 \geq w_{R_2} \geq w/8$

So basically you run the same algorithm for different values of $w_{R_2}$ that will give you different saliency maps of different scales ($R_2$ scales).

Question 2

From the paper:

At a given scale, the contrast based saliency value $c_{i,j}$ for a pixel at position $(i, j)$ in the image is determined as the distance D between the average vectors of pixel features of the inner region $R_1$ and that of the outer region $R_2$

So the coordinates $(i, j)$ are referenced to the whole image; it is almost a convention, as everybody uses those indices to refer to the whole image, probably inherited from matrix notation.

So for each pixel in the image you overlap $R_1$ on top of it and then $R_2$ on top of $R_1$; then you compute the distance $D$ between those two regions to get the saliency value of that pixel, and you slide the $R_1$ and $R_2$ regions in a sliding-window manner (which basically tells you to implement it with a convolution operation).
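Here is a rough sketch of that per-pixel center-surround computation using box filters (uniform means) instead of explicit sliding windows; the region sizes are arbitrary and this is not the authors' integral-image implementation:

import numpy as np
from scipy.ndimage import uniform_filter

def center_surround_saliency(features, r1=9, r2=31):
    """features: (H, W, C) feature image (e.g. CIELab). Returns an (H, W) saliency map."""
    channels = features.shape[-1]
    # Mean feature vector over the inner region R1 and the outer region R2, per pixel
    mean_r1 = np.stack([uniform_filter(features[..., c], size=r1) for c in range(channels)], axis=-1)
    mean_r2 = np.stack([uniform_filter(features[..., c], size=r2) for c in range(channels)], axis=-1)
    # Euclidean distance D between the two mean vectors is the saliency value
    return np.linalg.norm(mean_r1 - mean_r2, axis=-1)

img = np.random.rand(128, 128, 3)   # stand-in for a Lab-converted image
saliency = center_surround_saliency(img)
print(saliency.shape)               # (128, 128)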

Question 3

"Bin" is just one of the groups you divide an histogram into. The authors say to compute one histogram (it is used to approximate probability density functions) and then select the value of the biggest bin (the range of values with more occurrences.

So if you compute 1 histogram per saliency map (search how to do it in Google; there are plenty of implementations, and I use the OpenCV one), you could say you are computing a d-dimensional histogram (one dimension per saliency map).

",26882,,,,,7/27/2021 14:05,,,,6,,,,CC BY-SA 4.0 28861,1,28876,,7/27/2021 14:43,,0,32,"

I have a set of students (~20) that will work on annotating data for an NLP project.

The annotation task will be as in the following:

text: I like this piza place.
label: [pos, neg]
comments: 
text fluency: [1,2,3,4,5]

The students will need to correct the text first (e.g., correcting the word piza), and then fill in the fields below.

Is there an online solution to add the data in this format and then to share the link with the students?

I tried to do this in Google Forms, but I wasn't able to; I actually don't know if it's possible there.

I am looking for a solution that can allow saving the edits after annotating # instances, as there are many instances and the students won't be able to annotate everything at once. I know that a good solution would be building a website, but I am looking for something that already exists.

",40251,,,,,7/28/2021 4:04,An online editor that allows data labeling format,,1,0,,9/19/2021 0:50,,CC BY-SA 4.0 28862,2,,28855,7/27/2021 18:47,,1,,"

Also, keep in mind that not just any augmentation of the loss function is a regularization.

For example, you can add terms to a loss function that enforce constraints on the solution but do not prevent overfitting nor facilitate generalization.

",48165,,,,,7/27/2021 18:47,,,,0,,,,CC BY-SA 4.0 28863,1,,,7/27/2021 19:18,,2,306,"

I set up a transformer model that embeds positional encodings in the encoder. The data is multi-variate time series-based data.

As I am just experimenting with the positional encoding portion of the code, I set up a toy model: I generated a time series that contains the log changes of a sine function and ran a classification model that predicts whether the subsequent value is positive or negative. Simple enough. I also added a few random-walk time series to try to throw off the model.

Predictably, the model very quickly reaches a categorical accuracy of around 99%. Without positional encoding that happens already in the 3rd epoch. However, with positional encoding (I use the same implementation as proposed in the "Attention is all you need" paper), it takes over 100 epochs to reach a similar accuracy level.

So, clearly, all else being equal, learning with positional encoding takes much longer to reach an equal accuracy level than without positional encoding.

Has anyone witnessed similar observations? Apparently adding the positional encodings to the actual values seems to confuse the model. I have not tried concatenations yet. Any advice?

Edit: Or does it simply mean that learned positional encodings perform better than sin/cos encodings? I have not made any special provisions to encourage learned positional encodings, I simply either added the positional encodings to the actual values or I did not.

",12127,,,,,7/27/2021 19:18,Positional Encoding in Transformer on multi-variate time series data hurts performance,,0,2,,,,CC BY-SA 4.0 28864,1,,,7/27/2021 20:19,,1,48,"

I implemented a simple neural network with 1 hidden layer. I used ReLU as activation function for the hidden layer and the output layer just uses the linear function. To check my implementation I tested my neural network with following architecture:

Input Layer: 5 nodes
Hidden Layer: 2 nodes (ReLU)
Output Layer: 1 node (Linear Combination)

Learning Algorithm: Batch Gradient Descent
Error: Squared Error

I trained the neural network for 1000 times over the same input and target output:

Input: [[1, 2, 3, 4, 5], [1, 2, 3, 4, 6]]
Target Output: [[15], [16]]

I expected the network to learn the sum function. However, the network ended up learning a constant function, i.e., the weights of the first layer were all negative and the bias values were negative numbers, so applying the ReLU function resulted in all 0's. Thus the output was simply the bias value of the output layer, which was 15.5.

How should I interpret the above written results? I could think of a few reasons:

  1. Should I consider that the network converged to a local optimum?
  2. My test dataset (synthetic) was very poor. Had there been negative numbers, I could have ended up with better results?

I tried to verify the 2nd point but it so happened that the results became no better. I used:

Input: [[1, 2, 3, 4, 5], [-1, -2, -3, -4, -6]]
Target Output: [[15], [-16]]

It so happened that the neural network was able to evaluate both training inputs accurately, i.e., 15 and -16. However, it still outputs 15 for the case [1, 2, 3, 4, 6] instead of the expected 16, as the weights of the first layer are negative.

This made me believe that my training dataset was poor, but then I tried training on 1000 random training inputs, and the results were very poor. The weights became very large. I really can't understand what the problem is. I suspect that there might be some error in my implementation.

Another observation was: I initialized the weights and biases to optimal values i.e values that correspond to sum function:

 'W': [[ 1., -1.],
       [ 1., -1.],
       [ 1., -1.],
       [ 1., -1.],
       [ 1., -1.]]
 'b':  [0., 0.]

 'W':  [[ 1.],
        [-1.]]
 'b':  [0.]

I ran the training on that 1000-example training set, but there was no effect on the parameters, as the error was in any case 0. Why wasn't my neural network able to learn these parameters?

For reference this is my code for neural network (hard coded for 3 layer network):

class NeuralNetwork:
    def __init__(self, layers, alpha):
        self.num_layers = len(layers) # has to be 3
        self.layers = layers
        self.alpha = alpha
        self.weights = [{'W': None, 'b': None} for i in range(self.num_layers - 1)]
        for i in range(self.num_layers - 1):
            self.weights[i]['W'] = np.array([[np.random.normal(0, np.sqrt(2/layers[i])) for ii in range(layers[i+1])] for jj in range(layers[i])])
            self.weights[i]['b'] = np.array([np.random.normal(0, np.sqrt(2/layers[i])) for ii in range(layers[i+1])])
    
    def evaluate(self, input_feature):
        psi = input_feature @ self.weights[0]['W'] + self.weights[0]['b']
        x = np.maximum(psi, 0)
        y = x @ self.weights[1]['W'] + self.weights[1]['b']
        return y
    
    def update_weights(self, training_input, target_output):
        training_output = self.evaluate(training_input)
        
        dely = target_output - training_output
        
        db1  = np.sum(dely, axis = 0)
        dw1  = np.sum(a*dely, axis = 0).T
        
        da   = dely @ (self.weights[1]['W'].T)
        z    = training_input @ self.weights[0]['W'] + self.weights[0]['b']
        dz   = np.maximum(z, 0) * da
        
        db0  = np.sum(dz, axis = 0).T
        dw0  = training_input.T @ dz
        
        self.weights[0]['W'] += self.alpha * dw0
        self.weights[0]['b'] += self.alpha * db0
        self.weights[1]['W'] += self.alpha * dw1
        self.weights[1]['b'] += self.alpha * db1
",35926,,35926,,7/28/2021 8:07,7/28/2021 8:07,ReLU function converging to local optimum in one case and diverging in the other one,,0,0,,,,CC BY-SA 4.0 28874,1,28875,,7/28/2021 2:12,,0,71,"

A machine learning model was created by reading an Excel file where the data was stored. I applied RandomForestRegressor to create a model that predicts the size of the sieve particles according to pressure, but the value of R2 is a very large negative number. I found out through googling that R2 can be negative, but I don't know what it means for it to be such a large negative. When I applied the same amount of different data to the designed model, R2 showed a result that was close to 1, but I don't know why only this data gives a large negative value. The RMSE and SCORE values are good, but I don't understand why only the R2 score is bad... I would appreciate it if you could let me know what the problem is and what to consider.

My Data(Capture Image):

My Code:

import pandas as pd
import numpy as np
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
from google.colab import drive 
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score 

drive.mount('/gdrive', force_remount=True)

data = pd.read_csv(r"/gdrive/MyDrive/Coal_Inert_Case/Inert_Case_1.csv") 

x =data[['press1', 'press2', 'press3', 'press4', 'press5', 'press6', 'press7', 'press8', 'press9', 'press10', 'press11', 'press12', 'press13',
 'press14', 'press15', 'press16', 'press17', 'press18', 'press19', 'press20', 'press21',
 'press22', 'press23', 'press24', 'press25', 'press26', 'press27', 'press28', 'press29',
 'press30', 'press31', 'press32', 'press33', 'press34', 'press35', 'press36', 'press37',
 'press38', 'press39', 'press40', 'press41', 'press42', 'press43', 'press44', 'press45',
 'press46', 'press47', 'press48', 'press49', 'press50', 'press51', 'press52', 'press53']]

       
y = data[['Sieve 16000', 'Sieve 11000', 'Sieve 8000', 'Sieve 5600', 'Sieve 4000',
       'Sieve 2800', 'Sieve 2000', 'Sieve 1400', 'Sieve 1000', 'Sieve 710',
       'Sieve 500', 'Sieve 355', 'Sieve 250', 'Sieve 180', 'Sieve 125',
       'Sieve 90', 'Sieve 63', 'Sieve 44', 'Sieve 31', 'Sieve 0']] 

X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.3,random_state= 42)

forest = RandomForestRegressor(n_estimators=1000,random_state= 42) 
forest.fit(X_train, y_train) 
y_pred = forest.predict(X_test)

mse = mean_squared_error(y_test, y_pred) 
rmse = np.sqrt(mse)
r2_y_predict = r2_score(y_test, y_pred)

print("RMSE:", rmse)
print("R2 : ", r2_y_predict)

The output (RMSE, R2) is:

RMSE: 0.6913667737213217
R2 :  -7294765918428.414
",48840,,,,,7/28/2021 3:44,What is the meaning of R2 appearing as a negative in the RandomForestRegressor?,,1,0,,,,CC BY-SA 4.0 28875,2,,28874,7/28/2021 3:44,,0,,"

Take a look at either of these great posts about negative R2 values.

  1. What does negative R-squared mean?
  2. When is R squared negative?

TLDR is that your model is poorly fit to the data.

From looking at the code you attached, I would try reducing the number of x features you are using. It is possible that there is multicollinearity or that some features just are not useful. Using sklearn, you can use recursive feature elimination (RFE) or recursive feature elimination with cross-validation (RFECV).

You would just need to do something like this.

from sklearn.feature_selection import RFE

# fit the selector
selector = RFE(forest, n_features_to_select=5, step=1)
selector.fit(X_train, y_train) 

# print the top features 
selector.support_
",48878,,,,,7/28/2021 3:44,,,,0,,,,CC BY-SA 4.0 28876,2,,28861,7/28/2021 4:04,,1,,"

The good folks behind Spacy have their paid product called Prodigy which is a data labeling tool. I haven't used it but it appears you can host it somewhere and then you would just have to send the link to the students. It is a little pricey but you get a lifetime license...

A free alternative might be Label Studio but I am not sure how easy it is to host it somewhere.

Hope this helps!

",48878,,,,,7/28/2021 4:04,,,,0,,,,CC BY-SA 4.0 28877,1,,,7/28/2021 4:55,,3,389,"

Loss functions are used in training neural networks.

I am interested in knowing the mathematical properties that are necessary for a loss function to participate in gradient descent optimization.

I know some possible candidates that may decide whether a function can be a loss function or not. They include

  1. Continuous at every point in $\mathbb{R}$
  2. Differentiable at every point in $\mathbb{R}$

But, I am not sure whether these two properties are necessary for a function to become a loss function.

Are these two properties necessary? Are there any other mathematical properties that are necessary for a function to become a loss function to participate in gradient descent optimization?

Note that this question is not asking for recommended properties for a loss function. I am asking only about the properties that are mandatory in the given context.

",18758,,2444,,7/28/2021 11:46,7/28/2021 11:46,What are the necessary mathematical properties to be a loss function in gradient based optimizations?,,1,0,,,,CC BY-SA 4.0 28878,1,,,7/28/2021 7:03,,1,61,"

I encountered the phrase "fusing features" several times in the literature. I am providing an excerpt from a research paper to provide context for usage of the word fusion.

The reason is that the signals measured by multiple sensors are disordered and correlated with multiple sources. Those methods that are proposed with an attempt to use multiple data sources are called data fusion techniques. Upon the position where the fusion operation is conducted, there are three general approaches: signal-level fusion, feature-level fusion, and decision-level fusion.

I am guessing that "fusing features" refers to an act of combining several features, from different domains, and then generating new features that serve the purpose of the fusion.

If yes, the word "fusion" here refers to its common English usage

The process or result of joining two or more things together to form a single entity.

That is, we need to combine multiple features in some manner and then come up with new features that are good enough to perform our AI task.

Or does it have any formal definition and requirements based on the input or output features? Is there any formal definition for fusion operator?

",18758,,2444,,12/29/2021 12:04,12/29/2021 12:04,"Does ""fusion"" in ""feature fusion"" has any formal definition?",,1,3,,,,CC BY-SA 4.0 28881,1,,,7/28/2021 9:46,,1,72,"

In several courses and tutorials about neural networks, people often say that the learning rate (LR) should be the first hyper-parameter to be tuned before we tweak the others. For example, in this lecture (minute 59:55), the lecturer says that the learning rate is the first hyper-parameter that he tunes.

However, is it possible that the optimal learning rate is different for different architectures (for example, a different number of layers and neurons)? Or maybe the LR is architecture-independent and it depends only on the characteristics of the particular dataset we train our model on?

Moreover, should the LR be searched in the same process (e.g. grid-search) as the other hyper-parameters?

",22659,,2444,,7/28/2021 21:08,8/28/2021 3:07,Can the optimal learning rate differ for different architectures?,,1,0,,,,CC BY-SA 4.0 28882,1,,,7/28/2021 10:03,,0,84,"

I had no idea that there is a Stack Exchange community for A.I. :-/ So I repost this question here in the hope of some guidelines. I have tried to delve into the materials discussed in the AI: A Modern Approach course book, but I am struggling to wrap my head around the model I'm trying to build without some code examples to help me fill some gaps.

I have a hard time understanding how to combine a rule-based decision making approach for an agent in an agent-based model I try to develop.

The interface of the agent is a very simple one.

public interface IAgent
{
   string ID { get; }

   void Execute(IAgentMessage message,
                IActionScheduler actionScheduler);
}

For the sake of the example, let's assume that the agents represent Vehicles which traverse roads inside a large warehouse, in order to load and unload their cargo. Their route (sequence of roads, from the start point until the agent's destination) is assigned by another agent, the Supervisor. The goal of a vehicle agent is to traverse its assigned route, unload the cargo, load a new one, receive another assigned route by the Supervisor and repeat the process.

The vehicles must also be aware of potential collisions, for example at intersection points, and give priority based on some rules (for example, the one carrying the heaviest cargo has priority).

As far as I can understand, this is the internal structure of the agents I want to build:

So the Vehicle Agent can be something like:

public class Vehicle : IAgent
{
  public VehicleStateUpdater VehicleStateUpdater { get; set; }

  public RuleSet RuleSet { get; set; }

  public VehicleState State { get; set; }

  public void Execute(IAgentMessage message, IActionScheduler actionScheduler)
  {
    VehicleStateUpdater.UpdateState(VehicleState, message);
    Rule validRule = RuleSet.Match(VehicleState);
    VehicleStateUpdater.UpdateState(VehicleState, validRule);
    validRule.Fire(this, VehicleState, actionScheduler);
  }
}

For the Vehicle agent's internal state I was considering something like:

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }
}

For this example, 3 rules must be implemented for the Vehicle Agent.

  1. If another vehicle is near the agent (e.g. less than 50 meters), then the one with the heaviest cargo has priority, and the other agents must hold their position.
  2. When an agent reaches their destination, they unload the cargo, load a new one and wait for the Supervisor to assign a new route.
  3. At any given moment, the Supervisor, for whatever reason, might send a command, which the recipient vehicle must obey (Hold Position or Continue).

The VehicleStateUpdater must take into consideration the current state of the agent, the type of received percept and change the state accordingly. So, in order for the state to reflect that e.g. a command was received by the Supervisor, one can modify it as follows:

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }

  // Additional Property
  public RadioCommand ActiveCommand { get; set; }
}

Where RadioCommand can be an enumeration with values None, Hold, Continue.

But now I must also register in the agent's state if another vehicle is approaching. So I must add another property to the VehicleState.

public class VehicleState
{
  public Route Route { get; set; }

  public Cargo Cargo { get; set; }

  public Location CurrentLocation { get; set; }

  public RadioCommand ActiveCommand { get; set; }

  // Additional properties
  public bool IsAnotherVehicleApproaching { get; set; }

  public Location ApproachingVehicleLocation { get; set; }
}

This is where I have huge trouble understanding how to proceed, and I get the feeling that I am not really following the correct approach. First, I am not sure how to make the VehicleState class more modular and extensible. Second, I am not sure how to implement the rule-based part that defines the decision-making process. Should I create mutually exclusive rules (which means every possible state must correspond to no more than one rule)? Is there a design approach that will allow me to add additional rules without having to go back and forth to the VehicleState class and add/modify properties in order to make sure that every possible type of Percept can be handled by the agent's internal state?

The examples I've seen in the Artificial Intelligence: A Modern Approach course book and in other sources are too simple for me to "grasp" the concept in question when a more complex model must be designed.

I would be grateful if someone can point me in the right direction concerning the implementation of the rule-based part.

I am writing in C# but as far as I can tell it is not really relevant to the broader issue I am trying to solve.

An example of a rule I tried to incorporate:

public class HoldPositionCommandRule : IAgentRule<VehicleState>
{
    public int Priority { get; } = 0;

    public bool ConcludesTurn { get; } = false;


    public void Fire(IAgent agent, VehicleState state, IActionScheduler actionScheduler)
    {
        state.Navigator.IsMoving = false;
        //Use action scheduler to schedule subsequent actions...
    }

    public bool IsValid(VehicleState state)
    {
        bool isValid = state.RadioCommandHandler.HasBeenOrderedToHoldPosition;
        return isValid;
    }
}

A sample of the agent decision maker that I also tried to implement.

public void Execute(IAgentMessage message,
                    IActionScheduler actionScheduler)
{
    _agentStateUpdater.Update(_state, message);
    Option<IAgentRule<TState>> validRule = _ruleMatcher.Match(_state);
    validRule.MatchSome(rule => rule.Fire(this, _state, actionScheduler));
}
",48888,,,,,12/23/2022 16:01,How to implement a rule-based decision maker for an agent-based model?,,1,0,,,,CC BY-SA 4.0 28883,1,,,7/28/2021 10:08,,2,69,"

If the validation set is used to tune the hyperparameters and the training set is used to adjust the weights, why can't they be one and the same, given that they have a similar role, namely improving the model?

",48889,,18758,,7/28/2021 10:23,7/29/2021 1:57,Why not make the training set and validation set one if their roles are similar?,,2,0,,,,CC BY-SA 4.0 28884,2,,28877,7/28/2021 10:46,,2,,"

Summary: the loss needs to be differentiable, with some caveats.


I will introduce some notation, which I hope is clear: if not I am happy to clarify.

Consider a neural network with parameters $\theta \in \mathbb{R}^d$, which is usually a vector of weights and biases. The gradient descent algorithm seeks to find parameters $\theta_\mathrm{min}$ which minimise the loss function $$\mathcal{L} \colon \mathbb{R}^d \to \mathbb{R}.$$


If this seems abstract, suppose $f(x; \theta)$ is the neural network and $S = \{(x_i, y_i)\}_{i = 1}^n$ is the training set. In binary classification we could have the loss function

$$\mathcal{L}(\theta) = \sum_{i = 1}^n \mathbb{1} \{f(x_i; \theta) \ne y_i\} $$ where $\mathbb{1}$ is the indicator function which is $1$ if the condition is satisfied and zero otherwise. I consider the loss function to be a function of the parameters and not the data, which is fixed.


Gradient descent is performed by the update rule $$ \theta_n \leftarrow \theta_{n - 1} - \gamma \nabla \mathcal{L}(\theta_{n - 1}),$$ yielding new parameters $\theta_n$ which should give a smaller loss $\mathcal{L}(\theta_n)$. The quantity $\gamma$ is the familiar learning rate.
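To make the update rule concrete, here is a minimal sketch (assuming NumPy and a toy quadratic loss $\mathcal{L}(\theta) = \|\theta\|^2$, not a real network) of a few gradient descent steps:

import numpy as np

def loss(theta):
    return np.sum(theta ** 2)        # toy differentiable loss

def grad(theta):
    return 2 * theta                 # its gradient, known in closed form here

theta = np.array([3.0, -2.0])        # initial parameters
gamma = 0.1                          # learning rate

for _ in range(100):
    theta = theta - gamma * grad(theta)   # the update rule above

print(theta, loss(theta))            # theta approaches the minimiser at 0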

The gradient descent rule requires the gradient $\nabla \mathcal{L}(\theta_{n - 1})$ to be defined, so the loss function must be differentiable. In most texts on calculus or mathematical analysis you'll find the result that if a function is differentiable at a point $x$, it is also continuous at $x$. Obviously there is no hope that we could perform this procedure without knowing the gradient!

In principle, differentiability is sufficient to run gradient descent. That said, unless $\mathcal{L}$ is convex, gradient descent offers no guarantees of convergence to a global minimiser. In practice, neural network loss functions are rarely convex anyway.

I have omitted discussion on stochastic gradient descent, but it does not change the requirements for the loss function. There are alternative techniques such as the proximal gradient method for non-differentiable functions.

An unfortunate technicality I have to mention is that, strictly speaking, if you use the $\mathrm{ReLU}$ activation function, the neural network function $f$ becomes non-differentiable. I discuss this further in this answer. In practice we can assign a value and "pretend" $\mathrm{ReLU}$ is differentiable everywhere.

",44413,,,,,7/28/2021 10:46,,,,0,,,,CC BY-SA 4.0 28886,2,,28883,7/28/2021 11:40,,1,,"

The idea is to optimize with regard to unseen data at each step, in order to avoid overfitting and data leakage, so that the final network generalizes as well as possible to novel data.

First, you initialize your network weights randomly. For those weights, the training data is unseen, so the network is optimized with regard to a loss function that is calculated using the training data. This is the first step.

Second, you would like to optimize the hyperparameters, the parameters you use to train your network in the first step. The network has already worked hard to do its best in the first step while learning the weights. If you use the same dataset in this step, it will have even more flexibility to fit the training data even better. But this will result in high variance, and the network will perform poorly on unseen data.

For this reason, you split your data into train, dev and test sets. Train the network with the training data, optimize the hyperparameters with the dev data and, finally, evaluate with the test data, never touching it until the very last step.
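As a minimal sketch of such a three-way split (assuming scikit-learn; the arrays X and y here are placeholders), you can call train_test_split twice:

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(100, 5), np.random.rand(100)   # placeholder data

# first split off the test set, then split the remainder into train and dev
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_train, X_dev, y_train, y_dev = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)
# result: 60% train, 20% dev, 20% test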

",48523,,,,,7/28/2021 11:40,,,,3,,,,CC BY-SA 4.0 28887,2,,28878,7/28/2021 12:13,,2,,"

With this link I could read the paper. Thanks.

So there is this discipline called sensor fusion. It is very prominent in the field of Autonomous Vehicles, where, in order to take one decision (whether to brake or not), you have to take into account information from multiple sources: car-mounted cameras, LIDAR, ultrasound, radar...

So the term "fusion" refers to the operation of aggregating the information from multiple sources (which has its problems, as the paper says: the signals measured by multiple sensors are disordered and correlated with multiple sources). In order to perform this fusion or aggregation you can aggregate the information at different levels of the processing pipeline: close to the raw data (signal fusion), close to high-level information (decision fusion) or somewhere in the middle (feature fusion).

Normally, when you have a signal (image, radar, electromagnetic...) you process it somehow (normally using filters). The output of those filters is a feature map (in the case of 2D images) or a feature vector (in the case of 1D signals). Usually, once you have those signals processed, you use a decision module to extract high-level information (a classification head, an SVM, a regressor...).

Basically, you can aggregate information at those 3 levels. The authors use "feature fusion" to mean aggregating information at the middle step: you no longer have raw data, but you do not have fully refined data either. They do so expecting that the noisy parts of the signals (filters) or the non-relevant parts (PCA) have been removed, but without losing the nuances that are lost when using the decision modules.

The name "feature fusion" comes from the deep learning terminology in which, when you have a signal and you process it somehow, the output is a feature. When reading a paper on image detection / classification you would see "feature maps", but when reading a paper on sensor or audio data you would read "feature vector", hence the name.
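Just as an illustration of feature-level fusion (a sketch of my own, assuming NumPy; the "feature extractors" are placeholders), the simplest form is concatenating the feature vectors coming from each sensor before the decision module:

import numpy as np

camera_signal = np.random.rand(128)   # placeholder raw data from sensor 1
radar_signal = np.random.rand(64)     # placeholder raw data from sensor 2

# placeholder feature extractors (filters, PCA, a CNN backbone, ...)
camera_features = np.abs(np.fft.rfft(camera_signal))[:16]
radar_features = np.abs(np.fft.rfft(radar_signal))[:16]

# feature-level fusion: aggregate the feature vectors into a single one
fused = np.concatenate([camera_features, radar_features])

# a decision module (classifier, SVM, regressor, ...) would then consume `fused`
print(fused.shape)   # (32,)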

",26882,,,,,7/28/2021 12:13,,,,0,,,,CC BY-SA 4.0 28888,1,30015,,7/28/2021 14:34,,2,367,"

I have a certain scheduling problem and I would like to know in general whether I can use Reinforcement learning (and if so what kind of RL) to solve it. Basically my problem is a mixed-integer linear optimization problem. I have a building with an electric heating device that converts electricity into heat. So the action vector (decision variable) is $x(t)$ which quantifies the electrical power of the heating device. The device has to take one decision for every minute of the day (so in total there are $24$ hours $\times 60$ minutes $= 1440$ variables). Each of those variables is a continuous variable and can have any value between $0$ and $2000 W$.

The state space contains several continuous variables:

  • External varying electricity price per minute: Between $0$ Cents and $100$ Cents per kWh (amount of energy)
  • Internal temperature of the building: Basically between every possible value but there is a constraint to have the temperature between $20 °C$ and $22 °C$
  • Heat demand of the building: Any value between $0$ W and $10,000$ W
  • Varying "efficiency" of the electrical heating device between $1$ and $4$ (depending on the external outside temperature)

The goal is to minimize the electricity costs (under a flexible electricity tariff) and to not violate the temperature constraint of the building. As stated before, this problem can be solved by mathematical optimization (mixed-integer linear program). But I would like to know if you can solve this also with reinforcement learning? As I am new to reinforcement learning I would not know how to do this. And I have some concerns about this.

Here I have a very large state space with continuous values. So I can't build a comprehensive $Q$-table, as there are too many values. Further, I am not sure whether the problem is a dynamic programming problem (as most/all? of the reinforcement learning problems are). From an optimization point of view it is a mixed-integer linear problem.

Can anyone tell me if and how I could solve this by using RL? If it is possible I would like to know which type of RL method is suitable for this. Maybe Deep-Q-Learning but also some Monte-Carlo policy iteration or SARSA? Shall I use model-free or model-based RL for this?

Reminder: Does nobody know whether and how I can use reinforcement learning for this problem? I'd highly appreciate every comment and would be quite thankful for more insights and your help.

",48758,,48758,,8/4/2021 7:17,8/5/2021 9:43,Reinforcement learning applicable to a scheduling problem?,,1,4,,,,CC BY-SA 4.0 28890,1,29932,,7/28/2021 23:02,,0,745,"

A tensor is an ordered collection of elements. The elements are generally real numbers. Tensors are used in deep learning for storing data.

The word "axis" is widely used in relation to tensors. Axes are not the same as indices, which are used to access the elements of a tensor. An axis is also not the same as an element of a tensor.

What exactly is an axis in a tensor? Is it also a (sub-)tensor obtained from the actual tensor? Or is it any other indexing mechanism? If yes, why it is used?

Suppose $a =[[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12],[13, 14, 15, 16]]$ is a tensor. Then what does an axis refer to for $a$?

",18758,,18758,,1/18/2022 8:10,6/16/2022 20:29,What is meant by an axis of a tensor?,,2,0,,,,CC BY-SA 4.0 29890,2,,28883,7/29/2021 1:57,,1,,"

I think this is best explained using an analogy. Also, you seem to have the misconception that you don't tune hyper-parameters for the training data. You want to increase the accuracy on the training set AND the validation set at the same time, but the validation set is more important, so you want to maximise that accuracy more.

Imagine you had a toddler, and you were trying to teach them what an apple looks like. You have 10 pictures of apples, and for 45 years you sit them in a room and show them these apples. After all this time they will get to know the apples so well, that even the minutest differences in the photos would be noticed. When you try to see how the toddler generalised (how well it can use what it saw in training to evaluate real examples), it's absolutely terrible, because after all that time how could anything be an apple but the 10 it had seen?

So to combat this, you might think to reduce the number of years you spent showing the toddler the training set (the 10 apples), maybe that will allow them to generalise better (prevent over-fitting)? But you need a way to validate that this is actually helping, on unseen data (novel apples). That's where the 5 unseen apples come in to play (the validation set). You measure the accuracy of this unseen data to get an actual idea of how the toddler has learnt, because in a real example, the toddler isn't going to have seen the apple before, so it's important to know how it will handle unseen data.

That brings us finally to the testing set. The issue with what I've described above is maybe there's some kind of bias in the validation set, maybe those apples are slightly more round than usual. Of course you want the maximum accuracy on the validation set, so you tune everything to increase this accuracy, but this incurs a bias to the validation set. Maybe the parameters you chose are good only for this validation set. To make sure this isn't the case, you try and increase your accuracy as much as possible on the validation set, then after that you test the true accuracy on the testing set. This ensures you can't just tune your hyper-parameters to the validation set and call it a day. You need real generalisation to increase both validation AND testing set accuracy.

",26726,,,,,7/29/2021 1:57,,,,0,,,,CC BY-SA 4.0 29891,2,,28881,7/29/2021 2:11,,1,,"

Yes, the optimal learning rate will differ for every change you make in the network. In fact finding the optimal learning rate is very computationally expensive, so you will normally only get a rough guess anyway.

The learning rate is used to traverse an N dimensional loss landscape that changes drastically with even the smallest differences. If you add one more training data point, the optimal learning rate will change. If you add one more neuron, it will change.

What you tune first is up to you, but once you have settled on an architecture, you should tune the learning rate first because it will have the largest effect on other hyper-parameters (normally, not always). For example tuning the number of epochs to train for first will be worthless if you start changing the learning rate because the network will learn at a different speed. That's why the learning rate is usually tuned first (after the architecture).

",26726,,,,,7/29/2021 2:11,,,,2,,,,CC BY-SA 4.0 29893,2,,28890,7/29/2021 4:27,,1,,"

Imagine the tensor as some generalized $n$-dimensional hyperrectangle sliced into $n$-dimensional hypercubes. Each element of the tensor is labeled by its position along the given axes, say $(x_1, x_2, \ldots)$.

An axis is not a property of the tensor itself; rather, the tensor is embedded in an $n$-dimensional space, where the axes are chosen along the sides of the hyperrectangle corresponding to the tensor.

There are many operations that can be applied axis-wise to a tensor. Several examples:

  • Mean along the axis (choose $0$ without loss of generality). Given the $n$-dimensional tensor $x_{i_1, i_2 \ldots i_n}$ , the result will be $n-1$-dimensional tensor $\frac{1}{N_0} \sum_{i_1} x_{i_1 i_2 \ldots i_n}$, where $N_0$ is the number of elements of the tensor along the $0$-th axis (height).
  • Standard deviation along the axis. For each index $i_2 \ldots i_n$, calculate $\sqrt{\frac{1}{N_0} \sum_{i_1} (\bar{x}_{i_2 \ldots i_n} - x_{i_1 i_2 \ldots i_n})^2}$, where $\bar{x}$ is the mean from previous point, and the result will be again $n-1$-dimensional tensor.

For your example $a$ is a $2$-dimensional tensor with $2$ axes. $0$-th axis corresponds to rows, $1$-st axis corresponds to columns.
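A small NumPy sketch of these axis-wise operations on your tensor $a$ (my own illustration):

import numpy as np

a = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12],
              [13, 14, 15, 16]])

print(a.mean(axis=0))   # mean along axis 0 (over the rows):    [ 7.  8.  9. 10.]
print(a.mean(axis=1))   # mean along axis 1 (over the columns): [ 2.5  6.5 10.5 14.5]
print(a.std(axis=0))    # standard deviation along axis 0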

",38846,,46001,,6/16/2022 20:29,6/16/2022 20:29,,,,0,,,,CC BY-SA 4.0 29894,1,29895,,7/29/2021 6:05,,0,602,"

I have just run an MAE calculation for my machine learning models and the results show:

SVM MAE = 28.850 deg.

Random Forest MAE = 33.832 deg.

How do I know what a good MAE value is? What is the range of the MAE?

",48902,,2444,,7/29/2021 12:41,7/29/2021 12:41,How do I know what a good mean absolute error value is?,,1,1,,7/30/2021 17:07,,CC BY-SA 4.0 29895,2,,29894,7/29/2021 6:21,,1,,"

Mean Absolute Error is nothing but the mean of absolute errors.

If your model gave $n$ predictions $\{\hat{y}_i\}_{i = 1}^{n}$ against $n$ ground truths $\{y_i\}_{i = 1}^{n}$, then MAE is defined as follows

$$MAE_{model} = \dfrac{\sum\limits_{i = 1}^{n} |y_i - \hat{y}_i|}{n} $$.

Thus, MAE gives the average amount of error. So, the machine learning model with the minimum MAE should be considered the better model, since it makes the smallest error on average.
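As a small sketch of the computation (assuming NumPy and made-up values):

import numpy as np

y_true = np.array([10.0, 20.0, 30.0])   # placeholder ground truths
y_pred = np.array([12.0, 18.0, 33.0])   # placeholder predictions

mae = np.mean(np.abs(y_true - y_pred))  # (2 + 2 + 3) / 3
print(mae)                              # 2.333...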

In your task, the SVM should be considered the better model.

The range of $MAE$ is $[0,\infty)$. The lower bound zero is achieved if all your model's predictions are exactly correct; otherwise it will give a positive value depending on how bad your model's predictions are.

",18758,,,,,7/29/2021 6:21,,,,1,,,,CC BY-SA 4.0 29896,1,29920,,7/29/2021 6:44,,0,249,"

Consider the following details regarding Softplus activation function

$$\text{Softplus}(x) = \dfrac{\log(1+e^{\beta x})}{\beta}$$

SoftPlus is a smooth approximation to the ReLU function and can be used to constrain the output of a machine to always be positive.

It says that Softplus is a smooth approximation to the ReLU function. Let us consider the analytical form and plot of the RELU function.

$$\text{ReLU}(x)=(x)^+=\max(0,x)$$

The plot of Softplus function is

If we observe both plots, we can see that Softplus is very similar to ReLU. However, there is a property Softplus has that ReLU does not: ReLU is not differentiable at zero, and the derivative of ReLU is also not continuous.

If we observe the behavior of Softplus, it is $n-$times continuously differentiable and hence a smooth function.

Since Softplus is both a smooth function and approximates ReLU, it is considered as a smooth approximation of ReLU.

Is my interpretation correct? If not, then what is meant by "smooth approximation" here?

",18758,,18758,,11/7/2021 0:04,11/7/2021 0:04,"Is my understanding on ""smooth approximation"" correct?",,1,0,,,,CC BY-SA 4.0 29899,1,29907,,7/29/2021 8:35,,2,384,"

Activation functions, in neural networks, are used to introduce non-linearity. Many activation functions that are used in neural networks have the term "Linear Unit" in their full form. "Linear unit" can be abbreviated as LU.

For example, consider some activation functions

ELU - Exponential Linear Unit

ReLU - Rectified Linear Unit

................................................

Why does the function name contain the term "Linear Unit"? What is meant by Linear Unit here? Is it saying anything about the nature of function under consideration?

",18758,,2444,,9/17/2021 16:24,9/17/2021 16:24,"What does ""linear unit"" mean in the names of activation functions?",,1,0,,,,CC BY-SA 4.0 29901,1,,,7/29/2021 9:37,,1,845,"

In a project for college I created a simple turn based game, with up to 4 players that can either move or attack the opponents. The players are playing over the network, meaning the clients are supposed to be programmed AIs. The client itself is fully functional, meaning it has all the game logic and can simulate complete games.

Now my task is to create an RL agent with a Deep-Q-Network (DQN) that learns to play the game. However, I can't really find any source on how that should be done. I was able to create an agent with a DQN for the CartPole environment of OpenAI gym with PyTorch. Now my guess would be to create my own environment with the gym framework, but, since the game itself is already implemented, I was wondering if it is possible to feed data into the DQN without having to create the gym environment. As a state it would get the game state (which is a 2d grid, with information about the players and their remaining hitpoints) and all the possible moves in the current state as the action space. And since the game is network based, it would save the network after each game and reload it when the next one starts during training. For the training I would start the games over and over with a script and let it train for a while.

As I'm quite new to Machine Learning, it still seems really blurry to me how to tackle this problem, and I was hoping to be pointed in some direction on how to start.

",48893,,,,,12/26/2021 13:01,Creating DQN Learning Agent without Gym environment for a custom project,,1,0,,,,CC BY-SA 4.0 29902,2,,29901,7/29/2021 10:11,,1,,"

Most OpenAI gym environments are thin wrappers around existing games and libraries. You could do the same with your game. See e.g. https://towardsdatascience.com/beginners-guide-to-custom-environments-in-openai-s-gym-989371673952 for a tutorial. There are many others; you can search "open ai gym custom environment" for more.
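As a rough sketch of what such a wrapper could look like (assuming the classic gym API; all game-specific calls such as restart, apply, encode_state, num_actions and state_size are placeholders for your own game logic):

import gym
import numpy as np
from gym import spaces

class TurnBasedGameEnv(gym.Env):
    """Thin gym wrapper around an already implemented game."""

    def __init__(self, game):
        super().__init__()
        self.game = game                                    # your existing game object
        self.action_space = spaces.Discrete(game.num_actions())             # placeholder
        self.observation_space = spaces.Box(low=0.0, high=1.0,
                                            shape=(game.state_size(),), dtype=np.float32)

    def reset(self):
        self.game.restart()                                 # placeholder
        return self.game.encode_state()                     # placeholder state encoding

    def step(self, action):
        reward, done = self.game.apply(action)              # placeholder game step
        return self.game.encode_state(), reward, done, {}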

",1847,,,,,7/29/2021 10:11,,,,2,,,,CC BY-SA 4.0 29903,1,,,7/29/2021 10:18,,0,395,"

When compared to an RNN seq-to-seq model, people always say the Transformer is parallelizable. In the original Attention Is All You Need paper, it also said that

Recurrent models typically factor computation along the symbol positions of the input and output sequences. Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t−1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples

I use The Illustrated Transformer to help explain my question here. It says (you can search for those sentences):

Here we begin to see one key property of the Transformer, which is that the word in each position flows through its own path in the encoder. There are dependencies between these paths in the self-attention layer. The feed-forward layer does not have those dependencies, however, and thus the various paths can be executed in parallel while flowing through the feed-forward layer.

However, in the self-attention layer, in order to calculate the selected $V$, it needs the key values of all time steps! So each "time step" is not fully independent. There exists an operation in the layer that depends on the outputs, here the keys, from all "time steps".

In the original paper, the same block is repeated 6 times. That means there are at least 6 points where the otherwise independent flow of each "time step", or each token, has to wait for the others. Yes, it is better, but why do they call it parallelizable?

",48907,,2444,,7/30/2021 12:16,8/30/2021 17:01,Why people always say the Transformer is parallelizable while the self-attention layer still depends on outputs of all time steps to calculate?,,1,0,,,,CC BY-SA 4.0 29904,1,,,7/29/2021 10:23,,4,2665,"

The following is the abstract for the research paper titled Improved Training of Wasserstein GANs

Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only poor samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models with continuous generators. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.

Here, the critic stands for the discriminator of the GAN. I understood that the discriminator must obey a Lipschitz constraint, and hence weight clipping was generally done before this paper. The paper provides an alternative way, penalizing the norm of the gradient of the critic with respect to its input, to enforce the desired Lipschitz constraint.

What actually is a Lipschitz constraint, and why is it mandatory for a discriminator to obey it?

",18758,,2444,,7/31/2021 13:08,8/31/2021 8:06,What is Lipschitz constraint and why it is enforced on discriminator?,,2,3,,,,CC BY-SA 4.0 29906,1,31771,,7/29/2021 10:44,,0,303,"

I have a general question about supervised ANNs that map inputs to outputs. It is possible to vary the length of the input and output vectors by inserting some dummy variables that will not be considered in the mapping (or will be mapped to other dummy variables). So basically the mapping should look like this (v: value, d: dummy)

Input vector 1 $[v,v,v,v,v] \rightarrow$ Output vector 1 $[v,v,v,v,v]$

Input vector 2 $[v,v,v,v,v]\rightarrow$ Output vector 2 $[v,v,v,v,v]$

Input vector 3 $[v,v,v,d,d] \rightarrow$ Output vector 3 $[v,v,v,d,d]$

Input vector 4 $[v,v,d,d,d] \rightarrow$ Output vector 4 $[v,v,d,d,d]$

Input vector 5 $[v,d,d,d,d] \rightarrow$ Output vector 5 $[v,d,d,d,d]$

The input and output vectors have a length of 5 with 5 values. However, sometimes only a vector of size e.g. 3 (which is basically a vector of length 5 with 2 dummy variables) should be mapped to an output vector of length 3. So after training the ANN should know that if it for example gets an input vector of length 3 it should produce an output vector of length 3.

Is something like this generally possible with ANNs or other machine learning approaches? If so, what type of ANN or machine learning approach can be used for this? I'll appreciate every comment.

Reminder: Can anybody give me more insights into this?

",48758,,18758,,12/17/2021 0:19,12/17/2021 0:19,Mapping input vectors of variable length to output vectors of variable lengths with dummy variables,,1,15,,,,CC BY-SA 4.0 29907,2,,29899,7/29/2021 13:27,,6,,"

Have a look at these graphics showing popular linear units (image taken from Clevert et al. 2016):

You can see that these functions are linear functions for $x > 0$, that's why they are called Linear Units.

For example, the ELU is defined as

$$ ELU(x) = \begin{cases} x &\text{if } x > 0\\ \alpha (\exp(x)-1) & \text{if } x \leq 0. \end{cases} $$ These functions introduce the nonlinearity around zero, each in its own way, which can be used for different problems.
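A small NumPy sketch (my own illustration) that shows the linear part for $x > 0$ in both cases:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1))

x = np.linspace(-3, 3, 7)
print(relu(x))   # equal to x for x > 0, zero below
print(elu(x))    # equal to x for x > 0, smooth exponential curve below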

",41804,,,,,7/29/2021 13:27,,,,0,,,,CC BY-SA 4.0 29910,1,29931,,7/29/2021 17:28,,1,80,"

I have a heuristic solution to a problem which works quite well when certain environmental parameters are known and unchanging. However, in a real world setting these parameters will not be known and are likely to fluctuate over the course of an episode. I'm hoping to use deep RL to develop a policy that will be similar to the heuristic, but robust to these unknowns.

My question is: does the RL agent need to be trained "from scratch" as one would typically do or is there a way to leverage the existing policy to jump start the training progress?

In the latter case, what would this look like? I've had a couple of thoughts, but I'm not sure how well any of them would work.

  1. Reward actions that the heuristic would take in an environment with static parameter values, then gradually make the environment more complex and set a new reward function based on what I'm actually interested in.

  2. Instead of taking random actions in the exploration stage, take actions dictated by the heuristic.

",48856,,,,,7/30/2021 15:27,How to use a heuristic policy to increase sample efficiency of a deep reinforcement learning agent?,,1,0,,,,CC BY-SA 4.0 29911,1,,,7/29/2021 20:07,,2,76,"

I'd like to ask if it is, generally, better to model a problem that naturally appears as a Contextual Multi-Armed Bandit like Recommender Systems as a Markov Decision Process with a non-zero discount factor (otherwise it's just an MDP with one step episodes) or is it better to treat it as it is; a Contextual Multi-Armed Bandit (MDP with a zero discount factor)

I'm thinking about some problems like Recommender Systems where we can't define well the dynamics of the environment and so using a non-zero discount factor wouldn't make much sense since we'll take into account the recommendations for users that are independent of each other.

",44965,,2444,,12/21/2021 10:07,12/21/2021 10:07,Is it better to model a Contextual Multi-Armed Bandit problem as an MDP with a non-zero discount factor than treating it as it is?,,0,0,,,,CC BY-SA 4.0 29912,1,,,7/30/2021 0:18,,0,448,"

The research paper titled Improved Training of Wasserstein GANs proposed a gradient penalty in order to avoid undesired behavior due to weight clipping of the discriminator.

We now propose an alternative way to enforce the Lipschitz constraint. A differentiable function is 1-Lipschtiz if and only if it has gradients with norm at most 1 everywhere, so we consider directly constraining the gradient norm of the critic’s output with respect to its input. To circumvent tractability issues, we enforce a soft version of the constraint with a penalty on the gradient norm for random samples $\hat{x} \sim P_\hat{x}$. Our new objective is

$$L = \mathop{\mathbb{E}}\limits_{\tilde x \sim \mathbb{P}_g} [D(\tilde x)] - \mathop{\mathbb{E}}\limits_{ x \sim \mathbb{P}_r} [D(x)] + \mathop{\mathbb{E}}_{\hat{x} \sim P_{\hat{x}}} [ (\| \triangledown_{\hat{x}} D(\hat{x})\|_2 - 1 )^2 ]$$

The last term in the discriminator's loss function is related to the gradient penalty. It is easy to calculate the first two terms. Since the discriminator, in general, gives a value in the range $[0, 1]$, the first two terms are just the averages of the probability values given by the discriminator on generated and real images, respectively.

But, how to calculate $\triangledown_{\hat{x}} D(\hat{x})$ for a given image $\hat{x}$?

",18758,,2444,,7/31/2021 17:16,7/31/2021 17:16,"How to calculate the gradient penalty proposed in ""Improved Training of Wasserstein GANs""?",,1,0,,,,CC BY-SA 4.0 29913,1,,,7/30/2021 2:07,,0,48,"

I am pretty confused about the concept of "image channels".

I want material that explains the concept of channels from scratch to whatever is required to understand their role in machine learning. I think that it is a small concept and possibly present as a chapter in good textbooks.

Where can I read about channels of an image in detail?

",18758,,18758,,7/30/2021 2:13,7/30/2021 15:19,"Material(s) for understanding ""image channels""",,1,0,,,,CC BY-SA 4.0 29914,2,,29912,7/30/2021 2:19,,2,,"

First of all, the discriminator in WGAN does not give a value in the range $[0,1]$. Compared to the traditional discriminator, it has a linear activation in the output layer. Therefore, the authors call it critic instead.

To calculate the penalty, we sample an image that lies on the line between the real and the generated image. This is done by sampling a real image $x$, generating an image $\tilde{x}$, and mixing these images $\hat{x} = \alpha \widetilde{x}+(1-\alpha)x$ with $\alpha \sim U(0,1)$. That is, $\hat{x}$ is uniformly sampled from the lines between real and fake images, which can be illustrated as follows:

We then feed $\hat{x}$ into the critic and calculate the gradient norm of the discriminator's output with respect to its input, $(\| \triangledown_{\hat{x}} D(\hat{x})\|_2 - 1 )^2$. Here is a snippet code for PyTorch:

import torch
from torch import autograd

def gradient_penalty(D, real_data, generated_data, device):
    batch_size = real_data.shape[0]

    # Calculate interpolation
    alpha = torch.rand(batch_size, 1, 1, 1)
    alpha = alpha.expand_as(real_data).to(device)
    # getting x hat
    interpolated = alpha * real_data + (1 - alpha) * generated_data

    dis_interpolated = D(interpolated)
    grad_outputs = torch.ones(dis_interpolated.shape).to(device)

    # Calculate gradients of probabilities with respect to examples
    gradients = autograd.grad(outputs=dis_interpolated, inputs=interpolated,
                           grad_outputs=grad_outputs, create_graph=True, retain_graph=True)[0]

    # Gradients have shape (batch_size, num_channels, img_width, img_height),
    # so flatten to easily take norm per example in batch
    gradients = gradients.view(batch_size, -1)

    # Derivatives of the gradient close to 0 can cause problems because of
    # the square root, so manually calculate norm and add epsilon
    gradients_norm = ((torch.sqrt(torch.sum(gradients ** 2, dim=1) + 1e-12) - 1) ** 2).mean()
    return gradients_norm
",12841,,,,,7/30/2021 2:19,,,,0,,,,CC BY-SA 4.0 29915,1,,,7/30/2021 3:15,,0,55,"

Let us assume your dataset has $n$ training samples each of size $s$ and you divided them into $k$ batches for training. Then each batch has $n_k = \dfrac{n}{k}$ training samples.

Batch normalization can be applied to any input or hidden layer in a neural network. So, assume that I am applying batch normalization at every possible place I can.

Now, consider a particular batch normalization layer (say $b$) of a hidden layer $\ell$. Now, I am confused about the working frequency of $b$.

Will it be activated only after every $n_k - 1$ forward passes, i.e., once per batch at the end of the batch? If not, then how does $b$ calculate the mean and standard deviation for every forward pass during training if the $n_k$ output vectors of $\ell$ are not available at that instant?

Will $b$ calculate the mean and standard deviation, for every forward pass, based on the outputs of $\ell$ that have been calculated so far? If yes, then why is it called batch normalization?

To put it concisely, are batch normalization layers active for every iteration? If yes, then how are they normalizing a "batch" of vectors?


You can check here which says

The mean and standard-deviation are calculated per-dimension over the mini-batches

",18758,,18758,,7/31/2021 23:52,7/31/2021 23:52,When does a batch normalization layer becomes active?,,0,7,,,,CC BY-SA 4.0 29916,1,,,7/30/2021 7:08,,0,77,"

I'm trying to train a Neural Network in a particular situation -- similar to a genetic algorithm domain, as far as I know. I have to run a simulation with a length of $K$ steps. I have a neural network $N$ that at each time step is used to produce an output, so that: $$ o_{t+1} = N(i_{t}) $$ $i_t$ is a feature vector built upon $o_{t-1}$, and $i_0$ is given. My ground-truth value is $o_K$, namely the right value at the end of the simulation. So, I can evaluate the loss (e.g. MSE) only at the end of the simulation. Suppose we fix $K$ to 3; the evaluation is $N(N(N(i_0)))$ because: $$ o_1 = N(i_0) \\ o_2 = N(i_1) \\ o_3 = N(i_2) $$ So my questions are:

  • does it have any sense to apply backpropagation in these settings?
  • if yes, what happens to the gradients?

Practically, in some simple situations, the backpropagation seems to work, but in others, the gradients explode or vanish.

",46855,,18758,,7/30/2021 7:15,6/8/2022 1:04,Backpropagation after N sequential input-output pass,,1,2,,,,CC BY-SA 4.0 29917,1,,,7/30/2021 8:14,,0,33,"

Many researchers in deep learning research come up with new CNN architectures.

The architectures are (just) combinations of a few existing layers.

Along with their mathematical intuition, in general, do they visualize intermediate steps by executing them and then do trial and error (brute force) to achieve the state-of-the-art architectures?

Visualizing intermediate steps refers to printing outputs in the proper format for analyzing them. Intermediate steps may refer to feature maps in CNN, hidden states in RNN, outputs of hidden layers in MLP, etc.

",18758,,18758,,12/23/2021 10:33,12/23/2021 10:33,Do deep learning researchers generally visualize intermediate steps?,,0,3,,,,CC BY-SA 4.0 29918,2,,28767,7/30/2021 10:02,,1,,"

Channels can be thought of as alternate numbers in the same space.

As an example, the three colour channels of a typical image are often values for amount of red, green or blue light received from each position within the picture.

Your 1D convolution example has one input channel and one output channel. Depending on what the input represents, you might have additional input channels representing other values measured in the same input space. For all but the most simple problems, you will have multiple output channels. The number of channels in each layer may vary, similarly to how the size of hidden layers in a fully connected neural network can vary.

The term "feature map" means the same as channel, and is typically used to describe the outputs of hidden layers.

To map from N input channels to M output channels requires $N \times M$ filters. Each of the M outputs is connected by a filter to each of the N inputs, and the results of running those N convolutions are summed and passed through a nonlinear activation function to generate an output channel.
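For instance, here is a minimal PyTorch sketch (my own example, with N = 3 input channels and M = 16 output channels chosen arbitrarily):

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 32, 32)   # one RGB image: 3 input channels
y = conv(x)

print(conv.weight.shape)        # torch.Size([16, 3, 3, 3]): 16 x 3 kernels of size 3x3
print(y.shape)                  # torch.Size([1, 16, 32, 32]): 16 output channels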

Although, in the abstract, a channel is a type of dimension, channels are considered entirely separate from the space which is being processed. So adding channels to your 1D example does not make it a 2D convolutional neural network.

",1847,,,,,7/30/2021 10:02,,,,1,,,,CC BY-SA 4.0 29919,1,,,7/30/2021 13:00,,1,36,"

What is the difference between these two situations? Are they the same?

#1 : train a model 20 epochs on the whole dataset

#2 : divide dataset into n-parts then train the model 20 epochs on each part

20 is an arbitrary number, just for clarification. Do we get the same result (accuracy) in these two situations? And why?

Side note: this question came to my mind when I faced a problem: the dataset is bigger than the storage space. So I want to divide it into 4 parts and train the model on each part. But does this affect the accuracy? Is this method of training correct?

",48938,,18758,,7/31/2021 4:36,7/31/2021 4:36,Training on the dataset in parts vs training on the whole dataset,,0,0,,,,CC BY-SA 4.0 29920,2,,29896,7/30/2021 13:08,,1,,"

Your interpretation is definitely correct. As you correctly pointed out, the derivative of softplus is continuous and softplus is $n$-times differentiable; that makes the function smooth, which is not the case for ReLU.

What is quite interesting here is why softplus can be called an approximation to ReLU.

If we break down the definition of softplus, we note that the parameter $\beta$ controls the smoothness of the function. In fact, we can define a temperature $T = \frac{1}{\beta}$ (much like the softmax function). Let's denote with $f_T$ a softplus function with temperature $T$. We can easily see that as $T \rightarrow 0$ (or equivalently $\beta \rightarrow \infty$), $f_T(x) \rightarrow \max(0, x)$. In fact, let's compute the limit of the softplus for all $x > 0$:

$\lim_{\beta \rightarrow \infty} \frac{1}{\beta} ln(1 + e^{\beta x})$

Rewriting $ln(1+e^{\beta x})$ as $ln(e^{-\beta x} + 1) +ln(e^{\beta x})$, we get:

$\lim_{\beta \rightarrow \infty} \frac{1}{\beta}[ln(e^{-\beta x} + 1) +ln(e^{\beta x})] = \\ = \lim_{\beta \rightarrow \infty} \frac{1}{\beta}[ln(0 + 1) + ln(e^{\beta x})] = \\ = \lim_{\beta \rightarrow \infty} \frac{1}{\beta}[0 + ln(e^{\beta x})] = \\ = \lim_{\beta \rightarrow \infty} \frac{1}{\beta}\beta x = \\ = x$

Computing the limit for $x \leq 0 $ is straightforward and the result is $0$. So we see that for high value of $\beta$ (or low value of $T$), the softplus is actually the $\max(0, x)$ function!

Therefore, softplus is a family of functions $f_T$ which approximate ReLU with high precision as $T$ gets to 0 (less smoothness), and low precision with high $T$ (more smoothness).

Among all of these, we usually use softplus with $\beta = 1$ (i.e., $f_1$), and this is why we call it a smooth approximation of the ReLU: $f_1(x)$ is smooth ($T=1$) and approximates $f_\infty(x) = \max(0, x) = \text{ReLU}(x)$.
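A quick numeric check of this limit (a sketch, assuming NumPy):

import numpy as np

def softplus(x, beta=1.0):
    return np.log(1.0 + np.exp(beta * x)) / beta

def relu(x):
    return np.maximum(0.0, x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for beta in (1.0, 5.0, 50.0):
    print(beta, np.max(np.abs(softplus(x, beta) - relu(x))))
# the maximum gap shrinks as beta grows, i.e. as T -> 0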

Hope this was useful.

",37576,,18758,,7/30/2021 13:19,7/30/2021 13:19,,,,0,,,,CC BY-SA 4.0 29922,1,,,7/30/2021 13:41,,1,119,"

I am asking this question for a better understanding of the concept of channels in images.

I am aware that a convolutional layer generates feature maps from a given image. We can adjust the size of the output feature map by proper padding and regulating strides.

But I am not sure whether there exist kernels for a single convolution layer that are capable of changing an {RGBA, RGB, Grayscale, binary} image into (any) another {RGBA, RGB, Grayscale, binary} image?

For example, if I have a binary image of a cat, is it capable of converting it into an RGBA image of a cat? If not, can it at least convert a binary cat image into some RGBA image?

I am asking only from a theoretical perspective.

",18758,,11539,,8/1/2021 11:15,12/24/2022 19:02,"Is a convolutional layer capable of converting, for example, a binary image into an RGBA image?",,1,8,,,,CC BY-SA 4.0 29923,1,,,7/30/2021 14:11,,1,31,"

I'm trying to make a program that takes a surface designed by the user, and different 3D geometries from a dataset as inputs and gives a good approximation of the surface using only the objects found in the dataset. This program shouldn't do any warping, and should avoid geometries to collide, even though cutting them could be acceptable, but again with as little loss as possible.

I thought about hardcoding this, but I can't find any good way to optimize the surface coverage without brute-forcing it. I'm wondering what ML techniques would be best for this, and how to find a good balance between precision and speed.

",48940,,,,,12/27/2021 23:07,Cover a surface with smaller predefined objects,,1,0,,,,CC BY-SA 4.0 29924,1,34278,,7/30/2021 14:15,,2,205,"

Regarding the AlphaZero paper, it is not clear to me when the Monte Carlo Tree Search (MCTS) results will be cleaned up.

I assume this has to happen at some point, since mixing results could lead to lower quality results? Imagine in the self-play the Neural Network (NN) is updated to a new version and evaluates certain patterns differently by detecting a new trick. Many iterations must follow to outperform the old best choice (visit-count). I imagine discarding old MCTS results should be done about between an episode and the next NN weight updates.

I feel that a wrong decision here could have a strong negative impact on the overall learning process.

",48917,,11539,,8/1/2021 11:39,10/11/2022 20:07,At what point are MCTS results discarded in AlphaZero Training?,,1,0,,,,CC BY-SA 4.0 29926,1,,,7/30/2021 14:45,,0,37,"

Suppose I have a simple linear layer $y = xA^T + b$ that is part of a neural network trained on some dataset. The weight matrix $A$ for this layer has the shape [num_outputs, num_inputs].

For each layer input, I would like to find a value between 0 and 1, based on the weight matrix, representing the significance of that input to the layer output.

Intuitively, if the values in the i-th column of the weight matrix are close to 0, then the significance of i-th inputh should also be close to 0. Conversely, if the values are close to maximum or minimum of the entire weight matrix, the significance should approach 1.

This statistic should also adequately recognize cases where the vast majority of values in a column are close to 0, but at least one is not. Then the significance of such an input should not be close to 0, because it is important for a single neuron that detects, for example, an edge case.

Can anyone point me in the right direction?

",31988,,,,,7/30/2021 14:45,How to measure the significance of an input feature for the output of a linear layer in a neural network,,0,2,,,,CC BY-SA 4.0 29927,2,,29904,7/30/2021 15:05,,4,,"

The Lipschitz constraint is essentially that a function must have a maximum gradient. The specific maximum gradient is a hyperparameter.

It's not mandatory for a discriminator to obey a Lipschitz constraint. However, in the WGAN paper they find that if the discriminator does obey a Lipschitz constraint, the GAN works much better.

A perfect discriminator would perfectly accept all the real samples (output=1 with low gradient) and perfectly reject all the fake samples (output=0 with low gradient), but this provides hardly any gradient information to help train the generator. By limiting the gradient of the discriminator, they force it to be a worse discriminator, but provide more gradient information which helps train the generator.

You can see this in figure 2 (page 9) of the WGAN paper. The red line is a good discriminator but its gradient is nearly 0 at most points. The cyan line (which for some reason is presented upside-down) is clearly much worse as a discriminator, but is much better for training the generator because its gradient is not zero.

The way they limit the discriminator's overall gradient in the WGAN paper is by separately limiting (clipping) each of the weights in the discriminator. They are aware this is a "terrible" idea, and leave better ideas for further research. The paper you are asking about is one of those better ideas.
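For concreteness, the clipping itself looks roughly like this (a sketch, assuming a PyTorch critic; the critic architecture is a placeholder and the clip value is a hyperparameter, 0.01 here):

import torch.nn as nn

critic = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))   # placeholder critic
clip_value = 0.01   # hyperparameter

# after each critic update, force every weight into [-clip_value, clip_value]
for p in critic.parameters():
    p.data.clamp_(-clip_value, clip_value)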

Note that since back-propagation works backwards, enforcing a maximum gradient (Lipschitz continuity) on the discriminator in the forward direction causes it to have a minimum gradient in the backward direction, which is what we want when training the generator.

",28406,,28406,,8/31/2021 8:06,8/31/2021 8:06,,,,3,,,,CC BY-SA 4.0 29928,1,30293,,7/30/2021 15:07,,3,359,"

According to a blog post by DeepMind, AlphaZero doesn't have a real rollout.

AlphaGo Zero does not use "rollouts" - fast, random games used by other Go programs to predict which player will win from the current board position. Instead, it relies on its high quality neural networks to evaluate positions.

Instead, I assume it just interprets the winner at a given state by the NN value head result. This replaces the rollout. So the computation time saved could be used for many expansions instead. Evaluating a state from a root node would then be the best action derived from the visit count in MCTS, which is only based on the predictions of the NN value head. (no current score, no policy?)

With policy, I mean the NN's policy head (softmax).

This would mean that the NN policy is only used in the loss calculation and nowhere else?

",48917,,2444,,7/31/2021 17:14,8/21/2021 6:25,What does it mean there is no rollout in AlphaZero's training?,,1,1,,,,CC BY-SA 4.0 29929,2,,29922,7/30/2021 15:12,,0,,"

No, because each output from a convolution layer only looks at a local region of the image. A convolution layer cannot do any global transformation, only local ones. Convolution layers must have translation invariance which means if it converts an eyeball to a tail at one position, it'll also convert the same eyeball to the same tail if it's found at a different position. If it's not overfitted, it will also convert similar eyeballs to similar tails. If you want only some eyeballs to become tails, you can't do that without introducing overfitting, or expanding the convolution size until the layer can see enough context to distinguish which eyeballs should become tails and which ones shouldn't.

If you want to change one image into a specific other image, and don't care what happens to all other images, it might be possible to create a convolution layer that does this transformation. The input image has to be different wherever the output image is different, or else the convolution layer won't be able to produce that difference in the output image. You would be teaching it to recognize the specific pixel patterns in the input image and generate the specific pixels in the output image. This would be an extreme case of overfitting and wouldn't work for any other input images.

The number of channels in the input and output image is irrelevant, except that more channels means the network has more data to learn from, obviously.

",28406,,,,,7/30/2021 15:12,,,,10,,,,CC BY-SA 4.0 29930,2,,29913,7/30/2021 15:19,,1,,"

Image channels have nothing to do with machine learning, they are just part of computer image processing.

A channel is a number per pixel. So most colour images are stored with red, green and blue channels, as you probably know. Some images are stored in greyscale with just one white channel.

A RGB image is stored like this: pixel 0 red amount, pixel 0 green amount, pixel 0 blue amount, pixel 1 red amount, pixel 1 green amount, pixel 1 blue amount, pixel 2 red amount, ...

They could also be rearranged like this: pixel 0 red amount, pixel 1 red amount, pixel 2 red amount, ....., pixel 99999 red amount, pixel 0 green amount, pixel 1 green amount, ....., pixel 99999 green amount, pixel 0 blue amount, ....., pixel 99999 blue amount.
But that is not common.

A greyscale image only has one channel and it's stored like this: pixel 0 white amount, pixel 1 white amount, pixel 2 white amount, pixel 3 white amount, ...

A black-and-white image also has only one channel (a white channel) but that channel can only be 0 brightness or maximum brightness. They can be stored with just 1 bit per pixel.

Alpha is an extra transparency channel that some pictures have. Alpha 0 means fully transparent. Maximum alpha means fully opaque. Half-maximum alpha means the image is partially see-through at that pixel. Things like photos don't have alpha, but computer graphics that are designed to be displayed on top of other pictures often do.

There are also more exotic systems like YCbCr, where you have a white channel, a blue-versus-green channel, and a red-versus-green channel. Mostly we just convert those to RGB before processing.
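A small NumPy sketch of these layouts (my own illustration, assuming an 8-bit image stored as height x width x channels):

import numpy as np

rgb = np.zeros((100, 200, 3), dtype=np.uint8)    # 100x200 image with R, G, B channels
rgb[..., 0] = 255                                # fill the red channel

red_channel = rgb[..., 0]                        # one number per pixel
grey = rgb.mean(axis=2).astype(np.uint8)         # collapse to a single white channel

alpha = np.full((100, 200, 1), 255, dtype=np.uint8)   # fully opaque alpha channel
rgba = np.concatenate([rgb, alpha], axis=2)

print(rgb.shape, grey.shape, rgba.shape)         # (100, 200, 3) (100, 200) (100, 200, 4)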

",28406,,,,,7/30/2021 15:19,,,,3,,,,CC BY-SA 4.0 29931,2,,29910,7/30/2021 15:27,,2,,"

What you're looking for is called Imitation Learning (IL), in which we are interested in learning an expert policy $\pi_*$ which we assume to be optimal.

However, there are many different ways we can approach such a learning setting. Just to give some examples, we might be interested in Behavioural Cloning, where our parametrised policy $\pi_\theta$ (the RL agent) is trained in a supervised fashion to mimic the expert behaviour on a set of demonstrations $\mathcal{D} = \{(s_i, \pi_*(s_i))\}^N_{i=1}$:

$\theta^* = \arg\min_\theta \sum_{s \in \mathcal{D}}\mathcal{L}(\pi_*(s), \pi_\theta(s))$

where $s_i$ represents a state and $\pi(s_i)$ the action taken by the policy.
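To make this concrete, behavioural cloning reduces to ordinary supervised learning on the expert's state-action pairs. Here is a minimal PyTorch-style sketch (all sizes and names are hypothetical, not taken from any particular paper):

import torch
import torch.nn as nn

# Hypothetical expert demonstrations: states and the expert's (discrete) actions.
states = torch.randn(1000, 8)                   # 1000 states, 8-dimensional
expert_actions = torch.randint(0, 4, (1000,))   # expert chose one of 4 actions

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))  # pi_theta
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()                 # plays the role of L in the argmin above

for epoch in range(10):
    logits = policy(states)                     # pi_theta(s) for every demonstration
    loss = loss_fn(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()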

Other approaches involve learning a reward function $\mathcal{R}_\phi$ from $\mathcal{D}$ and using it to reward our agent (Inverse Reinforcement Learning), or GAIL (Generative Adversarial Imitation Learning). I suggest you take a look at this brief overview.

So, to answer your question: yes, you can leverage expert behaviour in your RL algorithm. Should you do so? It depends. IL is not always advisable as the agent might get stuck in a suboptimal policy, especially if the expert you're learning from is poor.

You can also combine IL and RL by starting with the former early in training and then switching to the latter in later epochs.

Unfortunately, there is no way of knowing which works best a priori.

Hope this was useful.

",37576,,,,,7/30/2021 15:27,,,,0,,,,CC BY-SA 4.0 29932,2,,28890,7/30/2021 16:27,,3,,"

In machine learning, a tensor is a multi-dimensional array (i.e. a generalization of a matrix to more than 2 dimensions), which has some properties, such as the number of dimensions or the shape, and to which you can apply operations (for example, you can take the mean of all elements across all dimensions). So, a scalar is a 0-d tensor (no dimensions), a vector is a 1d tensor (1 dimension), a matrix a 2d tensor (2 dimensions), a cube a 3d tensor, and so on.

The name TensorFlow really comes from the fact that TensorFlow manipulates tensors all the time. The documentation of TensorFlow describes tensors as follows

Tensors are multi-dimensional arrays with a uniform type (called a dtype)

If you're familiar with NumPy, tensors are (kind of) like np.arrays.

In this context, you can think of the axis as an abstraction for dealing with or manipulating the dimensions (aka shape) of a tensor or to apply certain operations "across dimensions". It's not exactly a synonym for dimension because, in some software libraries (e.g. Keras and NumPy), you can also pass negative arguments such as -1 to the axis parameter, which is interpreted as the last dimension (there is no literal $-1$ dimension of a tensor). It's a way to apply certain operations to the tensor only across e.g. the first dimension (axis 0) or the second dimension (axis 1), and so on.

For example, let's say you have a matrix (2d tensor) $A$. You want to compute the average (aka mean) of the rows, i.e. sum all elements of row $i$, then divide by the number of elements (which corresponds to the number of columns), and do this for all rows $i=1, \dots, K$. You can do this in NumPy by specifying that you want to apply the mean operation to axis=1. So, after this operation, you will get a 1d tensor (a vector) with $K$ elements (i.e. the number of rows of the original matrix): this is what I mean by "across dimensions".

Here's a NumPy example that illustrates the concept.

import numpy as np

A = np.array([[1, 2, 3],
              [1, 1, 1],
              [0, 1, -1]])

# (1 + 2 + 3 + 1 + 1 + 1 + 0 + 1 + (-1)) / 9
m = np.mean(A)
print("Mean of all elements of A =", m)

# [(1 + 2 + 3)/3, (1 + 1 + 1)/3, (0 + 1 + (-1))/3]
m1 = np.mean(A, axis=1)
print("Mean of each row =", m1)

# [(1 + 1 + 0)/3, (2 + 1 + 1)/3, (3 + 1 + (-1))/3]
m2 = np.mean(A, axis=0)
print("Mean of each column =", m2)

Why would you want to compute (in this case) the mean of each row? Because, for example, each row $i$ may correspond to some data associated with a user $i$, so you may want to compute e.g. the average salary for each user for all months (in this case, there are only 3 columns, so 3 months).
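Relatedly, as a quick check of the negative-axis remark above (reusing A and m1 from the snippet), axis=-1 simply selects the last dimension, so for a 2d array it coincides with axis=1:

m3 = np.mean(A, axis=-1)        # -1 refers to the last dimension, here the columns
print(np.array_equal(m1, m3))   # True: same result as axis=1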

Tensors did not originate in machine learning. In fact, in mathematics, tensors are well-known objects with certain properties, which may be different from the properties associated with the tensors implemented in libraries like TensorFlow, which are really just multi-dimensional arrays with the necessary properties and methods for machine/deep learning. Tensors are also everywhere in quantum computing.

This answer that I wrote a while ago could also be useful.

",2444,,2444,,7/30/2021 16:35,7/30/2021 16:35,,,,0,,,,CC BY-SA 4.0 29933,2,,29923,7/30/2021 20:38,,1,,"

A subset of your problem is pallet loading (you can find a survey about it here). There is no known polynomial-time algorithm even for this simpler special case of your problem. However, planners from automated planning, such as Fast Forward, can be helpful and give you some heuristics to solve the problem.

",4446,,,,,7/30/2021 20:38,,,,0,,,,CC BY-SA 4.0 29934,1,,,7/30/2021 23:38,,1,165,"

While training a neural network, we can follow three methods: batch gradient descent, mini-batch gradient descent and stochastic gradient descent.

For this question, assume that your dataset has $n$ training samples and we divided it into $k$ batches with $\dfrac{n}{k}$ samples in each batch. So, it can be easily understood that the word "batch" is generally used to refer to a portion of the dataset rather than the whole dataset.

In batch gradient descent, we pass all the $n$ available training samples to the network and then calculate the gradient (only once). We can repeat this process several times.

In mini-batch gradient descent, we pass $\dfrac{n}{k}$ training samples to the network and calculate the gradient. That is, we calculate the gradient once per batch. We repeat this process with all $k$ batches of samples to complete an epoch. And we can repeat this process several times.

In stochastic gradient descent, we pass one training sample to the network and calculate the gradient. That is, we calculate the gradient once per iteration. We repeat this process $n$ times to complete an epoch. And we can repeat this process several times.

Batch gradient descent can be viewed as a mini-batch gradient descent with $k = 1$ and stochastic gradient descent can be viewed as a mini-batch gradient descent with $k = n$.

Am I correct regarding the usage of terms in the context provided above? If wrong then where did I go wrong?

If correct, I am confused about the usage of the word "batch" in "batch gradient descent". In fact, we do not need the concept of a batch in batch gradient descent, since we pass all the training samples before calculating the gradient; there is no need for batch gradient descent to partition the training dataset into batches. Then why do we use the word "batch" in batch gradient descent? Similarly, we use the word "mini-batch" in "mini-batch gradient descent", where we do pass a batch of samples before calculating the gradient. Then why is it called "mini-batch" gradient descent instead of "batch" gradient descent?

",18758,,18758,,8/2/2021 2:03,8/2/2021 2:03,"Why is it called ""batch"" gradient descent if it consumes the full dataset before calculating the gradient?",,1,0,,,,CC BY-SA 4.0 29935,1,,,7/31/2021 1:55,,1,138,"

I was recently considering training an agent that perform a task by reinforcement learning. Both the state and actions are continuous, but could be discretized if needed. The problem is that in my case the state observations and reward will be quite noisy, so given the same state and action, the next state and received reward will be different on each run, and the noise cannot be described by canonical probability distributions.

Up to now I have tried deep Q-network, stochastic policy gradient and deep deterministic policy gradient. While I could successfully implement these algorithms in the CartPole game, they all failed to learn my particular task.

I would like to know whether there are any reinforcement learning methods that can deal with noisy state observations.

",37482,,37482,,7/31/2021 2:05,7/31/2021 2:05,Reinforcement learning algorithms that deal with noisy state observations,,0,2,,,,CC BY-SA 4.0 29936,1,,,7/31/2021 3:17,,2,157,"

Consider the following explanations regarding batch normalization layers in PyTorch

#1: One-dimensional batch normalization

class torch.nn.BatchNorm1d(.........)

Applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D inputs with optional additional channel dimension)............

#2: Two-dimensional batch normalization

class torch.nn.BatchNorm2d(..........)

Applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension)

#3: Three-dimensional batch normalization

class torch.nn.BatchNorm3d(..............)

Applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension)

All of these say that there can be an extra dimension, related to channels, in each input undergoing batch normalization.

Is the channel referred to here the same as the channels of an image? If yes, then what does it contain? Does it correspond to the number of channels in that particular layer?

Otherwise, what does this additional channel dimension contain?

",18758,,18758,,1/17/2022 8:27,1/17/2022 8:27,"What is an ""additional channel dimension"" contain in batch normalization?",,0,0,,,,CC BY-SA 4.0 29937,2,,28598,7/31/2021 7:15,,-2,,"

Decision trees can be used for both classification (categorical) and regression (continuous) problems. The decision criterion of a decision tree is different for continuous values compared to categorical ones.

The criterion used in the continuous case is reduction of variance: the decision tree calculates the total weighted variance of each candidate split, and the split with the minimum weighted variance is chosen.
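For intuition, here is a small sketch (the numbers are made up, not from the linked article) of how the weighted variance of a candidate split could be computed:

import numpy as np

# Continuous values of the samples that reach a node.
y = np.array([3.0, 3.2, 8.9, 9.1, 9.3])

def weighted_variance(left, right):
    n = len(left) + len(right)
    return (len(left) / n) * np.var(left) + (len(right) / n) * np.var(right)

# Candidate split: the first two samples go left, the rest go right.
print(np.var(y))                        # variance before the split
print(weighted_variance(y[:2], y[2:]))  # much smaller, so this is a good split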

Have a look at section 3 of this decision tree basics article.

",48391,,,,,7/31/2021 7:15,,,,1,,,,CC BY-SA 4.0 29938,2,,28882,7/31/2021 10:57,,0,,"

For this question:

Is there a design approach that will allow me to add additional rules without having to go back-and-forth the VehicleState class and add/modify properties in order to make sure that every possible type of Percept can be handled by the agent's internal state?,

I think that you can implement a class called Policy or Strategy, with different subclasses, so that each object can have a different strategy.
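As a rough sketch of what I mean (all class and attribute names here are only illustrative, not from your code), each strategy encapsulates how one kind of percept updates the agent's internal state, so adding a new rule means adding a new subclass rather than modifying the state class:

from abc import ABC, abstractmethod

class PerceptStrategy(ABC):
    @abstractmethod
    def handle(self, percept, state):
        # Update the agent's internal state from one percept.
        ...

class SpeedLimitStrategy(PerceptStrategy):
    def handle(self, percept, state):
        state["speed_limit"] = percept.value

class ObstacleStrategy(PerceptStrategy):
    def handle(self, percept, state):
        state.setdefault("obstacles", []).append(percept.value)

class Agent:
    def __init__(self, strategies):
        self.strategies = strategies   # e.g. {"speed_limit": SpeedLimitStrategy(), ...}
        self.state = {}

    def perceive(self, percept):
        # Dispatch to whichever strategy was registered for this kind of percept.
        self.strategies[percept.kind].handle(percept, self.state)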

To get into the mindset of multi-agent simulations, I think you can find some inspiration in http://cormas.cirad.fr/.

I have worked on some related problems, and it may be useful to look at https://agritrop.cirad.fr/541188/

",33566,,33566,,7/31/2021 11:14,7/31/2021 11:14,,,,0,,,,CC BY-SA 4.0 29940,2,,29904,7/31/2021 14:36,,3,,"

Optimizing a traditional GAN is (for an optimal discriminator) equivalent to minimizing the Jensen-Shannon divergence, which is known to have limitations. The Wasserstein metric promises to remedy these limitations, and is defined as follows:

$$ W(P_r, P_g) = \inf_{\gamma \in \prod(P_r ,P_g)} \mathbb{E}_{(x, y) \sim \gamma}\big[\:\|x - y\|\:\big] $$

However, taking the infimum over $\prod(P_r, P_g)$, the set of all possible joint probability distributions with marginals $P_r$ and $P_g$, is intractable. Thus the authors proposed a smart transformation of the formula based on the Kantorovich-Rubinstein duality, defined as:

$$ W(P_r, P_\theta) = \sup_{\|f\|_L \leq K} \mathbb{E}_{x \sim P_r}[f(x)] - \mathbb{E}_{x \sim P_\theta}[f(x)] $$

For this to work, $\| f \|_L \leq K$ must hold, which means that $f$ must be K-Lipschitz continuous. $f$ is called K-Lipschitz continuous if there exists a real constant $K \geq 0$ such that, for all $x_1, x_2 \in \mathbb{R}$,

$$\lvert f(x_1) - f(x_2) \rvert \leq K \lvert x_1 - x_2 \rvert$$ Here $K$ is known as a Lipschitz constant for the function $f$. Lipschitz continuity ensures that the absolute value of the derivative of $f$ (where it exists) is at most $K$ everywhere (or at most 1 for 1-Lipschitz functions). This can be illustrated as follows:

For a Lipschitz continuous function, there exists a double cone (white) whose origin can be moved along the graph so that the whole graph always stays outside the double cone

Thus, we have a parametrized family of functions $\{f_w\}_{w \in W}$ that are all K-Lipschitz for some $K$. The function $f$ has the role of a non-linear feature map that maximally enhances the differences between the samples coming from the two distributions. The role of the Lipschitz constraint is to prevent $f$ from arbitrarily enhancing small differences. The constraint assures that if two input images are similar the output of $f$ will be similar as well. Without this constraint, the result would be zero when $P_r$ is equal to $P_g$ and $\infty$ otherwise, since the effect of any minor difference can be arbitrarily enhanced by an appropriate feature map.

The supremum over $K$-Lipschitz functions is still intractable, but now $\{f_w\}$ can be approximated using a neural network. For optimization purposes, it is not necessary to know what $K$ is. It is enough to know that it exists and that it is fixed throughout the training process (read this article for more details).
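For concreteness, here is a minimal PyTorch-style sketch (network sizes and data are placeholders) of estimating this objective with a parametrised critic $f_w$, using the weight clipping from the original WGAN paper as a crude way of keeping $f_w$ within a family of K-Lipschitz functions:

import torch
import torch.nn as nn

critic = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 1))  # f_w
opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

real = torch.randn(32, 64)   # stand-in for samples from P_r
fake = torch.randn(32, 64)   # stand-in for generator samples from P_theta

# Maximise E_{P_r}[f(x)] - E_{P_theta}[f(x)], i.e. minimise its negative.
loss = -(critic(real).mean() - critic(fake).mean())
opt.zero_grad()
loss.backward()
opt.step()

# Weight clipping keeps the critic's parameters in a small box, so f_w stays
# K-Lipschitz for some (unknown but fixed) K.
with torch.no_grad():
    for p in critic.parameters():
        p.clamp_(-0.01, 0.01)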

",12841,,,,,7/31/2021 14:36,,,,0,,,,CC BY-SA 4.0 29941,2,,29903,7/31/2021 16:13,,1,,"

An RNN processes words one by one. For example on the sentence "man eats dog", it will:

  1. Fully process "man", producing an output $y_1$ and hidden units $h_1$.
  2. Fully process "eats", now using also the previous output and/or hidden units.
  3. Finally process "dog", again using the previous output and/or hidden units $y_2$ and $h_2$.

Since the outcome of the first word is used as an input to the second, we must wait until it's done before starting computation on the second, and so on.

In self-attention, the output of say word 3 still depends on the previous (and subsequent) words as you say. However the dependence is much simpler: to obtain the key $k_i$ of the $i$th word, we multiply its embedding vector $e_i$ by a fixed matrix $M$: $$k_i = M e_i$$ In fact we can first concatenate the embedding vectors into a matrix $E$ with components $E_{ki}$ being the $k$th component of the embedding vector $e_i$ of the $i$th word. This way all key vectors can be obtained at once as $$K = M E$$ with $K$ decomposing into components in the same way as $E$.
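A tiny numeric sketch of this (with made-up dimensions): all keys come out of a single matrix product, so no word has to wait for another word's result:

import numpy as np

d_embed, d_key, n_words = 6, 4, 3
E = np.random.randn(d_embed, n_words)   # embeddings of "man", "eats", "dog" as columns
M = np.random.randn(d_key, d_embed)     # the fixed (learned) projection matrix

K = M @ E                               # all key vectors at once, shape (d_key, n_words)
k_2 = M @ E[:, 1]                       # the key of the 2nd word computed on its own
print(np.allclose(K[:, 1], k_2))        # True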

Note that matrix multiplication is highly parallelizable: a given component in the matrix $K$ depends on only one row of $M$ and one column of $E$.

So in short, the reason transformers are parallelizable while RNNs are not is not that they do not depend on earlier (or later) words, but rather that the dependence is linear, while for an RNN it is highly nonlinear (i.e. several layers of an affine transformation followed by a nonlinearity).

",48953,,,,,7/31/2021 16:13,,,,0,,,,CC BY-SA 4.0 29942,1,,,7/31/2021 17:11,,1,298,"

I have some plans to work with Reinforcement Learning in order to predict stock price movement. For a stock like TSLA,

some training features might be the pivot price values and the set of differences between two consecutive pivot points.

I would like my model to capture the general essence of the stock market. In other words, if I want my model to predict the stock price movement for TSLA, then my dataset will be built only on the TSLA stock. If I try to predict the price movement of the FB stock using that model, then it won't work, for many reasons. So, if I want my model to predict the price movement of any stock, then I have to build a dataset using all types of stock prices.

For the purpose of this question, instead of taking an example of the dataset using all the stocks, I will use only three stocks, i.e. TSLA, FB, and AMZN. So, I will generate the dataset for two years for TSLA, two years of FB, and two years of AMZN, and then pass it back to back to my model. So, in this example, I pass 6 years of data to my model for training purposes. If I start with FB, then the model will learn and memorize some patterns from the FB features. The problem is when the model is made to train on the AMZN features, it already starts to forget the information of the training on the FB dataset.

Is there a way to parallelise the training on multiple stocks to avoid the memory issue?

Instead of my action being a real value, it will be an action vector whose size depends on the number of parallel stocks.

",42628,,2444,,12/11/2021 20:53,12/11/2021 20:53,Is there a way to parallelise the RL training on multiple stocks to avoid the memory issue?,,1,0,,,,CC BY-SA 4.0 29943,2,,29934,7/31/2021 19:10,,3,,"

You are correct, but a few final words are required:

In Batch GD, we average the gradient over all the training data to update the parameters, hence one step per epoch. That works well if you have a convex problem (i.e. a smooth error surface).

On the other hand, in Stochastic GD, we take one training sample to go one step towards the optimum, then repeat this for every training sample, hence updating the parameters once per sample, sequentially, in every epoch (no average here). As you can expect, the training will be noisy and the error will fluctuate.

Lastly, mini-batch GD is somewhere in between the first two methods: we average over a different portion of the data each time. This method takes the benefits of the previous two: it is not so noisy, yet it can deal with a less smooth error manifold.
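Here is a minimal sketch of the three update schemes (the gradient function is just a stand-in for backpropagation on the given samples):

import numpy as np

def grad(theta, X, y):
    # Stand-in for the gradient of a squared-error loss on the given samples.
    return X.T @ (X @ theta - y) / len(y)

X, y = np.random.randn(120, 5), np.random.randn(120)
theta, lr, batch_size = np.zeros(5), 0.1, 20

# Batch GD: one step per epoch, using all samples at once.
theta -= lr * grad(theta, X, y)

# Stochastic GD: one step per sample.
for i in range(len(y)):
    theta -= lr * grad(theta, X[i:i + 1], y[i:i + 1])

# Mini-batch GD: one step per batch of `batch_size` samples.
for i in range(0, len(y), batch_size):
    theta -= lr * grad(theta, X[i:i + batch_size], y[i:i + batch_size])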


Personally, I memorize them in my mind by creating the following map:

  1. Batch GD ≡ Average of All per Step ≡ More suitable for Convex Problems at the Risk of Converging directly to Minima = Heavyweight.
  2. Stochastic GD ≡ Fluctuating & Noisy ≡ Converges on the Long Run especially for Large Dataset ≡ Lightweight but Slower ≡ No Vectorization Possible (because one sample per time).
  3. Mini-Batch GD ≡ Portion of Data per Step ≡ Mixture of Stochastic and Batch GD ≡ Less Fluctuation + Would work for Less Smooth Error Manifolds ≡ Faster Computation.

Regarding the naming convention, I would understand the word "Batch" as a "Set" or "Collection", hence the whole "Dataset". Consequently, "Mini" would go with the flow, to mean a "Part of the Set".

",38224,,38224,,7/31/2021 21:16,7/31/2021 21:16,,,,1,,,,CC BY-SA 4.0 29944,2,,29942,7/31/2021 19:47,,2,,"

Take a look at:

Deep Reinforcement Learning for Automated Stock Trading where the 30 Dow Jones stocks are trained using OpenAI Gym.

The code is here and the published paper is here.

An excerpt from the paper:

We train a deep reinforcement learning agent and obtain an ensemble trading strategy using three actor-critic based algorithms: Proximal Policy Optimization (PPO), Advantage Actor Critic (A2C), and Deep Deterministic Policy Gradient (DDPG). The ensemble strategy inherits and integrates the best features of the three algorithms, thereby robustly adjusting to different market situations.

",5763,,5763,,7/31/2021 19:52,7/31/2021 19:52,,,,0,,,,CC BY-SA 4.0 29946,1,,,8/1/2021 2:02,,1,183,"

Consider the following statement from the abstract of the paper titled Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model $G$ that captures the data distribution, and a discriminative model $D$ that estimates the probability that a sample came from the training data rather than $G$.

Let $D'$ be a dataset of $n$ digital and discrete* images, each of size $C \times H \times W$. Suppose the generative adversarial network is trained on the dataset $D'$.

Our sample space (or data set) is $D' = \{I_1, I_2, I_3, \cdots, I_n\}$, where $I_j$ is the $j^{th}$ image for $1 \le j \le n$

The random variables are $X_1, X_2, X_3, \cdots, X_{CHW}$ where

$$X_i \in \{a, a+1, a+2, \cdots, b\} =\text{ intensity of }i^{th} \text{ pixel;} \text{ for } 1 \le i \le CHW$$

Since we have the dataset $D'$, we can estimate a joint distribution $p_{data\_set}$, which has $(b-a+1)^{CHW}$ parameters, from the dataset. But the parameters of the original (ground-truth) image distribution $p_{ground}$ are, in general, not equal to those of $p_{data\_set}$.


Simple example:

Suppose I flipped an unbiased coin 100 times and I got 45 heads, 55 tails, then $P_{data\_set}(H) = \dfrac{45}{100}$ and $P_{data\_set}(T) = \dfrac{55}{100}$. So, $P_{data\_set} = \{\dfrac{45}{100}, \dfrac{55}{100}\}$

but the ground truth probability distribution is $P_{ground}(H) = \dfrac{50}{100}$ and $P_{ground}(T) = \dfrac{50}{100}$. So, $P_{ground} = \{\dfrac{50}{100}, \dfrac{50}{100}\}$


Which distribution is our generator capturing by the end? Is it the probability distribution calculated based on our dataset $p_{data\_set}$ of images or the actual probability distribution $p_{ground}$?

How to understand the act of capturing here? Does it only mean to be behaving (generating instances) in the same way as data distribution?


Suppose I design a machine that is capable of generating an equal number of $0$'s and $1$'s over a sufficiently large number of trials. Can I then say that my machine has captured the coin-toss probability distribution?


  • Discrete images refer to images whose pixel values take a finite number of integer values.
",18758,,18758,,8/5/2021 3:33,8/5/2021 3:33,Which probability distribution a generator in Generative Adversarial Network (GAN) is capturing: dataset or ground truth?,,0,0,,,,CC BY-SA 4.0 29947,1,,,8/1/2021 2:54,,0,35,"

Although mainstream research is on Generative Adversarial Networks (GANs) using Multi-Layer Perceptrons (MLPs), the original paper titled Generative Adversarial Nets clearly says, in the abstract, that GANs are possible without MLPs as well:

In the space of arbitrary functions $G$ and $D$, a unique solution exists, with $G$ recovering the training data distribution and $D$ equal to $\dfrac{1}{2}$ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation.

Are there any research papers that use models other than MLPs and are comparatively successful?

",18758,,18758,,8/1/2021 3:00,8/1/2021 3:00,Are there any Generative Adversarial Networks without Multi Layer Perceptrons?,,0,9,,,,CC BY-SA 4.0 29948,1,,,8/1/2021 4:45,,1,83,"

Consider the following statement (from the paper Generative Adversarial Nets) about the success of discriminative models

So far, the most striking successes in deep learning have involved discriminative models, usually those that map a high-dimensional, rich sensory input to a class label. These striking successes have primarily been based on the backpropagation and dropout algorithms, using piecewise linear units which have a particularly well-behaved gradient.

The piece-wise linear units they are referring to are, I guess, the activation functions. The primary purpose of activation functions is to introduce non-linearity, and there are no other mathematical requirements, such as continuity, differentiability, etc. But not all the activation functions may work well, and some are preferred over others, based on their nature, as well as the task under consideration.

After reading the quoted paragraph, one can conclude that the activation functions that have well-behaved gradient are showing better results than the others, at least in discriminative tasks.

What does "well-behaved" in this context stand for? Can we have some mathematical properties of the gradient in order to recognize it as a well-behaved gradient? Or the usage of the phrase "well-behaved" is highly dependent on the discriminative task under consideration?

",18758,,2444,,11/16/2021 21:10,12/12/2022 0:03,"What is meant by ""well-behaved gradient"" in this context?",,1,0,,,,CC BY-SA 4.0 29949,1,,,8/1/2021 5:51,,-1,148,"

Most of the notation in Artificial Intelligence is borrowed from mathematics.

$x$ stands for input (vector), $y$ stands for output (vector) etc., and the list is long.

But, I am not sure whether $z$ has any (widely used) role in mathematics.

Is there any reason behind the usage of letter $z$ to represent a noise vector? Or is it just selected randomly without any reason?

",18758,,18758,,8/31/2021 4:53,8/31/2021 4:53,Why is noise vector represented by letter $z$?,,1,2,,8/3/2021 10:35,,CC BY-SA 4.0 29950,1,29954,,8/1/2021 6:14,,1,209,"

The following is a way to use tilde (∼) in context of random variables or random vectors.

In statistics, the tilde is frequently used to mean "has the distribution (of)," for instance, $X∼N(0,1)$ means "the stochastic (random) variable $X$ has the distribution $N(0,1)$ (the standard normal distribution). If X and Y are stochastic variables then $X∼Y$ means "$X$ has the same distribution as $Y$.

Consider the following usage of tilde in the paper titled Generative Adversarial Nets

$$x ∼ p_{data}(x)$$ $$z ∼ p_z(z)$$

I am thinking that the following is the standard (and possibly correct) notation

$$x ∼ p_{data}$$ $$z ∼ p_z$$

$p_{data}$ is a probability distribution, while $p_{data}(x)$ is not a probability distribution; it is a value in $[0, 1]$. The same holds for the noise probability distribution.

Is it an abuse of notation to use the tilde in such a way, or is this also a standard and accepted notation?

",18758,,18758,,8/1/2021 11:30,8/2/2021 12:10,Is it abuse of notation to use tilde operator in this context?,,1,3,,,,CC BY-SA 4.0 29953,1,,,8/1/2021 11:21,,1,102,"

I am trying to understand why mean is used for expectation in training Generative Adversarial Networks.

The answer tells that it is due to the law of large numbers which is based on the assumption that random variables are independent and identically distributed.


If I have a dataset of all possible $32 \times 32$ grayscale images, then my sample space consists of $256^{32 \times 32}$ elements. Suppose I define 1024 random variables as

$$X_i = \text{ intensity of } i^{th} \text{ pixel for } 1 \le i \le 1024$$

Then it is clear that all the random variables are iid since

  1. $X_i \perp X_j$ for all $i, j$ such that $i \ne j$ and
  2. $p(X_i = k) = \dfrac{1}{256}$ for all $i$

But these properties do not hold if I take a dataset of (say flower) images since pixel intensities are not independent of each other and the intensity values are not uniformly distributed as well.

Then how can the law of large numbers be applicable to GANs, given that the dataset (sample space) does not cover all possible elements? If I am wrong, then what sample space are they considering, and what random variables are they implicitly using, such that the iid condition is satisfied and the law of large numbers applies?

",18758,,18758,,8/1/2021 13:58,1/15/2022 0:16,What are the iid random variables for a dataset in the GAN framework?,,1,6,,,,CC BY-SA 4.0 29954,2,,29950,8/1/2021 12:01,,2,,"

The notation $p(x)$ is widely used in machine learning (e.g. here) and even statistics (e.g. here). People often use $p(x)$ to refer to a probability distribution (either pmf, pdf, or cdf) rather than just $p$. There is also the notation $p_x$ (or things like $p_{x \mid y}$ for conditional p.d.s), which you will find in some statistics books.

Of course, if you interpret the notation $p(x)$ as the evaluation of e.g. the pdf at $x$, it's a density value, so not a p.d. However, I think, in your case, $p(x)$ is used as $p_x$, i.e. to emphasize that the pd $p$ is associated with the r.v. $x$ or that it's a function of one variable (in this case denoted by $x$ to remind the reader that this pd is associated with the r.v. $x$). That's how I would interpret that notation in this case.

This can be useful if you use the letter $p$ to denote multiple p.d.s associated with different r.v.s (which is often the case); so, by using $p_x$ or $p(z)$, you make it clear which r.v. $p$ is associated with.

I looked at the GAN paper, and sometimes they use $z \sim p_z(z)$ and other times $z \sim p_z$, so there are some inconsistencies in the paper, which is not so uncommon in machine learning papers (sometimes people do that just to avoid verbosity).

One that knows the mathematical definition of an r.v. will also know that an r.v. is a function. So, if $p$ is e.g. a pdf and $x$ in $p(x)$ is an r.v., then $p(x)$ would be a composition of functions. I don't think that people use the notation $p(x)$ for that case, as this can become more complicated: we can easily start to talk about measure theory or pushforward measures (which I only barely heard of). I think that many people in the machine learning community do not have a formal statistics or mathematical background, so they probably just use $p(x)$ because everyone else does.

In this answer that I wrote a while ago, I also say that $p(x)$ could be a shorthand for $p(x = x_i)$, i.e. the probability that $x = x_i$ (some event). This is also common, but that's not how I would interpret $p(x)$ in your case.

",2444,,2444,,8/2/2021 12:10,8/2/2021 12:10,,,,0,,,,CC BY-SA 4.0 29956,1,,,8/1/2021 13:57,,2,551,"

I have enrolled in a course whose example model uses only one hidden layer, and that is the only layer that has an activation function. The model can be visualized as follows:

and here is a PyTorch implementation:

class MnistModel(nn.Module):
    """Feedfoward neural network with 1 hidden layer"""
    def __init__(self, in_size, hidden_size, out_size):
        super().__init__()
        # hidden layer
        self.linear1 = nn.Linear(in_size, hidden_size)
        # output layer
        self.linear2 = nn.Linear(hidden_size, out_size)
        
    def forward(self, xb):
        # Flatten the image tensors
        xb = xb.view(xb.size(0), -1)
        # Get intermediate outputs using hidden layer
        out = self.linear1(xb)
        # Apply activation function
        out = F.relu(out)
        # Get predictions using output layer
        out = self.linear2(out)
        return out

Shouldn't the output layer also have an activation function?

",48889,,12841,,8/2/2021 12:09,8/2/2021 17:05,Does the output layer in a deep neural network need an activation function?,,2,5,,,,CC BY-SA 4.0 29958,2,,29953,8/1/2021 20:38,,1,,"

Independent and identically distributed random variables share the same probability distribution and each item doesn’t influence or provide insight about the value of the next item you measure. The most common example is a coin toss: as you flip the coin, one outcome does not influence or predict the next one.

As for a dataset of flowers, we assume that the photos were made independently and all kinds of flowers are evenly represented, which in practice is not true. Each flower won't be represented in the same lighting condition and camera view.

Each pixel in an image is highly dependent on the neighboring pixels (and sometimes distant ones, e.g., when generating faces, as they are usually symmetric), and the main goal of a GAN is to discover this dependence by exploring a latent feature manifold that represents abstract attributes of the real data, and to find a mapping between these features and output pixels. Of course, if some of the features are poorly represented, the network will not be able to generate them correctly. For instance, to increase fidelity, there is the so-called truncation trick, where we sample the latent feature vector $z$ from a truncated normal distribution, which ignores rare features. But on the other hand, it reduces diversity.

As for the loss function,

$$\nabla_{\theta_{d}} \frac{1}{m} \sum_{i=1}^{m}\left[\log D\left(\boldsymbol{x}^{(i)}\right)+\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]$$

the mean is associated with the loss rather than the iid assumption on the random variables, and is used to obtain the average gradient of a batch. Here are related questions: one, two.

",12841,,18758,,1/15/2022 0:16,1/15/2022 0:16,,,,0,,,,CC BY-SA 4.0 29961,1,,,8/2/2021 0:06,,2,417,"

I came across the term "Parzen" while reading the research paper titled Generative Adversarial Nets. It has been used in the research paper in two contexts.

#1: In phrase "Parzen window"

We estimate probability of the test set data under $p_g$ by fitting a Gaussian Parzen window to the samples generated with $G$ and reporting the log-likelihood under this distribution.

#2: In phrase "Parzen density estimation"

Evaluating $p(x)$ in Generative autoencoders and Adversarial models: Not explicitly represented, may be approximated with Parzen density estimation

Is there any definition for the word Parzen and how is it related to the probability distributions?

",18758,,18758,,8/2/2021 1:44,8/2/2021 8:12,What exactly is a Parzen?,,1,0,,,,CC BY-SA 4.0 29963,1,,,8/2/2021 2:21,,5,2554,"

Loss functions are used to calculate the loss, based on which we can then update the weights of a neural network. The loss function is thus central to training neural networks.

Consider the following excerpt from this answer

In principle, differentiability is sufficient to run gradient descent. That said, unless $L$ is convex, gradient descent offers no guarantees of convergence to a global minimiser. In practice, neural network loss functions are rarely convex anyway.

It implies that the convexity property of loss functions is useful in ensuring convergence if we are using the gradient descent algorithm. There is a narrower version of this question dealing with cross-entropy loss, but this question is general and not restricted to a particular loss function.

How to know whether a loss function is convex or not? Is there any algorithm to check it?

",18758,,2444,,8/3/2021 13:04,8/3/2021 13:38,How to check whether my loss function is convex or not?,,2,1,,,,CC BY-SA 4.0 29965,1,,,8/2/2021 5:18,,0,166,"

The value function for which convergence was proved in the original GAN paper is

$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$$

and the loss functions used in training are

$$\max L(D) = \frac{1}{m} \sum_{i=1}^{m}\left[\log D\left(\boldsymbol{x}^{(i)}\right)+\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]$$

$$\min L(G) = \frac{1}{m} \sum_{i=1}^{m}\left[\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]$$

where $\{z^{(1)}, z^{(2)}, z^{(3)}, \cdots, z^{(m)}\}$ and $\{x^{(1)}, x^{(2)}, x^{(3)}, \cdots, x^{(m)}\}$ are the noise samples and data samples of a mini-batch, respectively.

After analyzing some questions (1, 2) on our main site, I found that the loss functions used for training are just approximations of the value function and are not the same in a formal sense.

Is this true? If yes, what is the reason behind the disparity? Does the loss function used in the implementation also ensure convergence?

",18758,,18758,,8/4/2021 23:36,12/28/2022 2:02,Does average loss function in GAN training is just an approximation of value function and does not ensure convergence of generator and discriminator?,,1,10,,,,CC BY-SA 4.0 29966,1,,,8/2/2021 8:04,,2,29,"

I have been working on a computer vision problem with the use of CNNs, but, quite frustratingly, I'm often in the situation of not knowing what to do to improve my results. It seems that most of the time I am making random changes and experimenting in the hope that a change will bring some improvement. I notice how this is different from non-AI software development, where debugging can be performed by trying to pinpoint where exactly in the code an unexpected behaviour lies.

I wonder if there is a technique that could better orient the research effort.

",48816,,,,,8/2/2021 8:04,Is there a systematic way of conducting deep learning experiments?,,0,3,,,,CC BY-SA 4.0 29967,2,,29961,8/2/2021 8:12,,2,,"

Parzen (Emanuel Parzen) was a statistician who worked in spectral analysis and stochastic processes. I don't know if he invented them, but those windows and probability density estimation methods are named after him.

See also his Wikipedia entry.

",2193,,,,,8/2/2021 8:12,,,,0,,,,CC BY-SA 4.0 29968,1,,,8/2/2021 9:46,,0,42,"

I want to give weights to some data points. Specifically, these are points related to anomalies (I'm implementing a one-class SVM for anomaly detection).

More precisely, I want to treat data points that are likely to be anomalies as more important.

Is this possible in a one-class SVM?

",38808,,,,,8/2/2021 9:46,How can I weight each point in one-class SVM?,,0,4,,,,CC BY-SA 4.0 29969,2,,29949,8/2/2021 10:06,,3,,"

I don't think there's any rationale behind the usage of the letter $z$ to denote the noise (which sometimes is also denoted by $\epsilon$ in other contexts), apart from the fact that $x$ and $y$ are already being used and that the letters $x$, $y$, $z$ and $w$ are often used to denote variables in mathematics. In particular, in machine learning, $x$ and $y$ are often used to denote the inputs and outputs (or labels) respectively, while $w$ often denotes the parameters (although $\theta$ is also used for that).

In other words, it's just a convention (e.g. $z$ is also used to denote the hidden variable in the VAE paper). Even if it wasn't and someone used $z$ as a mnemonic letter or for some particular reason, I wouldn't lose too much time on this issue, as another author may very well use any other letter to refer to the same concept. It's important to be a bit flexible when it comes to notation in mathematics, otherwise, you may easily get lost.

",2444,,2444,,8/2/2021 10:11,8/2/2021 10:11,,,,0,,,,CC BY-SA 4.0 29970,1,,,8/2/2021 10:14,,0,515,"

Let us imagine $x$ as a tensor containing 1000 RGB images, each of size $64 \times 32$.

>>> x = torch.randn(1000, 3, 64, 32)
>>> print(x.shape)
torch.Size([1000, 3, 64, 32])

I am using a 2d convolutional layer that converts RGB images to single channel (say grayscale) images

>>> in_ch = 3
>>> out_ch = 1
>>> m = nn.Conv2d(in_ch, out_ch, 3, 1, 1)
>>> print(m)
Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))

I passed the tensor $x$ in the convolutional layer and obtained another tensor of 1000 grayscale images, each of size $64 \times 32$.

>>> output = m(x)
>>> print(output.shape)
torch.Size([1000, 1, 64, 32])

Now, I can say that my convolutional layer converted an RGB image into a grayscale image using a 2d kernel.

How is it doing this?

An RGB image has 3 planes, each of size $64 \times 32$. If a 2-dimensional kernel is used, then we should get 3 planes in the output, corresponding to R, G, and B. How is it possible to convert an image with 3 channels into an image with one channel using a 2d kernel?

I can visualize this easily if I use a 3d kernel, since the kernel considers the three channels simultaneously and produces a single feature map for an RGB image.

",18758,,18758,,10/1/2021 7:37,10/1/2021 11:03,Confusion about conversion of RGB image to grayscale image using a convolutional layer with 2-dimensional filters,,1,1,,,,CC BY-SA 4.0 29971,2,,29948,8/2/2021 10:35,,0,,"

"Piecewise linear" appears to describe ReLu and LRelu activation functions, whose gradients are just simple step functions.

",16378,,,,,8/2/2021 10:35,,,,0,,,,CC BY-SA 4.0 29972,2,,29956,8/2/2021 12:17,,2,,"

A neural network layer with no activation function is the same as "linear activation" i.e. $f(x) = x$

This is often used for the output layer in regression problems, where a constrained output like that of sigmoid, hyperbolic tangent or ReLU may not be appropriate.

For an output layer, this is fine, and does not conflict with any theory behind neural networks. It will often be combined with a mean squared error (MSE) loss function that makes the gradient calculation simple at the output layer.

For hidden layers, skipping the activation function can be a problem, since a purely linear layer in the middle of a multi-layer network is redundant - it could be replaced or even removed with no impact. That is because two directly connected linear layers are functionally equivalent to a single linear layer with different parameters, and every hidden layer consists of a linear component plus an activation function. So even one missing activation function on a hidden layer directly connects two linear sub-components, making one of them redundant.
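A quick numeric check of that equivalence (with made-up sizes): two stacked linear layers with no activation in between behave exactly like a single linear layer with composed parameters.

import torch
import torch.nn as nn

x = torch.randn(4, 10)
lin1, lin2 = nn.Linear(10, 20), nn.Linear(20, 5)

# Build the single equivalent layer: W = W2 @ W1 and b = W2 @ b1 + b2.
combined = nn.Linear(10, 5)
with torch.no_grad():
    combined.weight.copy_(lin2.weight @ lin1.weight)
    combined.bias.copy_(lin2.weight @ lin1.bias + lin2.bias)

print(torch.allclose(lin2(lin1(x)), combined(x), atol=1e-6))  # True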

In the case of a classifier, some libraries (notably including PyTorch which you are using) require you to have two variants of your neural network - a trainable version without a final sigmoid or softmax layer, and the "full" version which adds sigmoid or softmax on top (usually just a function that calls the training network and add this last activation). This is done to allow you to use more stable gradient calculations that are based on the logits, the values before applying an activation function.

",1847,,1847,,8/2/2021 13:54,8/2/2021 13:54,,,,5,,,,CC BY-SA 4.0 29973,2,,29965,8/2/2021 12:24,,0,,"

Expected value can be thought of as a weighted average of outcomes. Thus, expectation and mean are the same thing, if each outcome has the same probability (which is $\frac{1}{m}$), so we can replace it with a sum divided by $m$. We can rewrite the equation: $$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$$

First, we sample minibatch of size $m$ for $\boldsymbol{x} \sim P_{data}$ and $\boldsymbol{z} \sim \mathcal{N(0, 1)}$. Now we can replace the expectation with the sum:

$$ \begin{align*} \min_G \max_DV(D, G) &= \sum_{i=1}^{m}\left[p(\boldsymbol{x}^{(i)})\log D(\boldsymbol{x}^{(i)})\right] + \sum_{i=1}^{m}\left[p(\boldsymbol{z}^{(i)})log (1 - D(G(\boldsymbol{z}^{(i)})))\right] \\ &= \sum_{i=1}^{m}\left[\frac{1}{m}\log D(\boldsymbol{x}^{(i)})\right] + \sum_{i=1}^{m}[\frac{1}{m}log (1 - D(G(\boldsymbol{z}^{(i)})))]\\ &=\frac{1}{m}\sum_{i=1}^{m}\left[\log D(\boldsymbol{x}^{(i)}) + log (1 - D(G(\boldsymbol{z}^{(i)})))\right] \end{align*} $$


Binary cross entropy defined as follows:

$$H(p, q) = \operatorname{E}_p[-\log q] = H(p) + D_{\mathrm{KL}}(p \| q)=-\sum_x p(x)\log q(x)$$

Since we have a binary classification problem (fake/real), we can define $p \in \{y,1-y\}$ and $q \in \{\hat{y}, 1-\hat{y}\}$ and rewriting coros entropy as follows:

$$H(p, q)=-\sum_x p_x \log q_x =-y\log \hat{y}-(1-y)\log (1-\hat{y})$$

which is nothing but logistic loss. Since we know the source of our data (either real or fake), we can replace labels $y$ for real and fake with 1. We then get: $$\min_G\max_D L = \frac{1}{m} \sum_{i=1}^{m}\left[1\cdot\log D\left(\boldsymbol{x}^{(i)}\right)+1\cdot\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right] $$

This is the original loss. The first term in the equation always gets real images, while the second gets only generated ones. Hence, both terms have the corresponding true labels. Read this article for more details.

Since the first term does not depend on $G$, we can rewrite it as follows:

$$\max L(D) = \frac{1}{m} \sum_{i=1}^{m}\left[\log D\left(\boldsymbol{x}^{(i)}\right)+\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]$$

$$\min L(G) = \frac{1}{m} \sum_{i=1}^{m}\left[\log \left(1-D\left(G\left(\boldsymbol{z}^{(i)}\right)\right)\right)\right]$$

",12841,,12841,,8/2/2021 12:39,8/2/2021 12:39,,,,4,,,,CC BY-SA 4.0 29974,2,,29956,8/2/2021 14:49,,2,,"

I'd like to add more information to Neal's answer.

A network implemented in the example does not include activation for the output, because it will be applied during the training:

# ...
def training_step(self, batch):
    images, labels = batch 
    out = self(images)
    # apply activation and calculate loss
    loss = F.cross_entropy(out, labels)
    return loss
# ...

The problem considered in the example is multiclass classification, which is usually solved with softmax activation and cross-entropy loss. F.cross_entropy combines log_softmax and nll_loss in a single function, which is numerically more stable as softmax and NLL loss. See this discussion for more details, and here is an article about different implementations.

",12841,,48889,,8/2/2021 17:05,8/2/2021 17:05,,,,6,,,,CC BY-SA 4.0 29977,2,,29970,8/2/2021 16:39,,1,,"

Yes, the kernel is 3D in this case - or 4D as in 3x3x3x1. In the general case you can have multiple output channels, making it 3x3x3x8 for example. The number of channels isn't a convolution dimension because the filter does not "slide"/"translate"/"move" over this dimension. It's still a 2D convolution, and then the channel part of this operation is thought of separately from the convolution part. The 4D kernel is a bunch of 2D kernels. If you have 1 input channel and 1 output channel then it's just one 2D kernel. Or you can think of it as a bunch of 3D kernels if you like. Or a single 4D kernel. These are just arrays of numbers... you can slice an array up however you like, if you can find a way to think about it.


Note the groups parameter of Conv2d, which affects how the channels are convolved. The default is 1, which means:

At groups=1, all inputs are convolved to all outputs.

If you set it to 3 (and 3 output channels) then the Conv2d layer would maintain the channel separation.
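You can see the effect directly in the weight shapes (a quick check in PyTorch):

import torch.nn as nn

# groups=1: every output channel mixes all 3 input channels (kernel depth 3).
print(nn.Conv2d(3, 3, 3, groups=1).weight.shape)  # torch.Size([3, 3, 3, 3])

# groups=3: each output channel sees only its own input channel (kernel depth 1).
print(nn.Conv2d(3, 3, 3, groups=3).weight.shape)  # torch.Size([3, 1, 3, 3])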

",28406,,28406,,10/1/2021 11:03,10/1/2021 11:03,,,,6,,,,CC BY-SA 4.0 29978,2,,29963,8/2/2021 18:40,,1,,"

It is the same as other functions. You can use Theorem 2 in this lecture (from Princeton University):

(ii) condition is about the first-order condition for convexity and (iii) is the second-order. You can also find more detail in chapter 3 of this book ("Convex Optimization" by Stephen Boyd and Lieven Vandenberghe).

",4446,,,,,8/2/2021 18:40,,,,8,,,,CC BY-SA 4.0 29982,1,30018,,8/2/2021 21:49,,2,356,"

In the formula to calculate output shape of tensor after convolution operation $$ W_2 = (W_1-F+2P)/S + 1, $$ where:

  • $W_2$ is the output shape of the tensor
  • $W_1$ is the input shape
  • $F$ is the filter size
  • $P$ is the padding
  • $S$ is the stride.

Why do we add $1$? It gets us to the correct answer, but how is this formula derived?
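(As a sanity check of the formula, not its derivation: with hypothetical values $W_1 = 5$, $F = 3$, $P = 0$, $S = 1$ we get $(5 - 3 + 0)/1 + 1 = 3$, which matches what PyTorch produces.)

import torch
import torch.nn as nn

x = torch.randn(1, 1, 5, 5)              # W1 = 5
out = nn.Conv2d(1, 1, kernel_size=3)(x)  # F = 3, P = 0, S = 1
print(out.shape)                         # torch.Size([1, 1, 3, 3]), i.e. W2 = 3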

Source: https://cs231n.github.io/convolutional-networks/#pool

",46686,,2444,,8/4/2021 13:05,10/8/2021 12:15,Why do we add 1 in the formula to calculate the shape of the output of the convolution?,,1,0,,,,CC BY-SA 4.0 29983,1,29991,,8/2/2021 23:06,,0,157,"

Recently I asked a question on how a convolution 2d layer changes an RGB image into a grayscale image. Assume that our task is to convert an RGB image into a grayscale image. I use to believe that filter and kernel are one and the same.

Consider Conv2d in PyTorch.

class torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)

The parameters in_channel, out_channel and kernel_size are key to our discussion here.

I have no doubt about in_channel. It simply says the number of channels in the input image. It is 3 for our task since we have RGB images as input.

The doubt is regarding the parameters out_channel and kernel_size. out_channel refers to the number of channels in the output image. It is 1 for our task since we want grayscale images as output. It is also equal to the number of filters we are needing. So, we just use one filter to convert an RGB image into a grayscale image. kernel_size is the size of the kernel which is showing $3 \times 3$ in our case. Now, my convolution layer is

>>> in_ch = 3
>>> out_ch = 1
>>> m = nn.Conv2d(in_ch, out_ch, 3, 1, 1)
>>> print(m)
Conv2d(3, 1, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))

Since I have doubt about the conversion of RGB image to grayscale image using a single filter and whose size is showing $3 \times 3$, I checked the shape of weights in the filter and realized that the single filter is a 3-dimensional filter of size $ 3 \times 3 \times 3$

>>> print(m.weight.shape)
torch.Size([1, 3, 3, 3])

Now, the filter size is $3 \times 3 \times 3$ and kernel_size is $3 \times 3$.

So, can I safely conclude that the filter is different from the kernel? Can I conclude that kernel is just a part of filter and filter may comprise several kernels? Or is it true that the usage in PyTorch is a bit misleading since I found that our site is also using the same tag for both filter and kernel?

",18758,,18758,,10/1/2021 7:09,10/1/2021 7:09,"Is ""kernel"" different from ""filter"" in convolutional neural networks?",,1,0,,,,CC BY-SA 4.0 29984,2,,24097,8/3/2021 1:48,,1,,"

I doubt whether your derivation is correct. You are trying to apply binary cross-entropy to the data of each label separately, which is not the correct way to do it.

The procedure for calculating binary cross entropy is as follows

  1. Pass the input $x$ whose label is $y \in \{0, 1\}$ to your model $M$.
  2. Obtain $\hat{y} \in [0, 1]$ as output of your model $M$ instead of actual label $y$.
  3. Calculate binary cross-entropy loss using the equation

$$L_{CE} = y \log \hat{y} + (1-y) \log (1 - \hat{y})$$

It is true that there are two types of inputs to a discriminator: genuine and fake. Genuine data is labelled by 1 and fake data is labelled by 0. Use the variable $x'$ to represent the input to the discriminator module $D$. If the input $x'$ is genuine then its label is 1 and if your input $x'$ is fake then its label is 0. Note that it is better to avoid the unnecessary details regarding the generator or noise vector while formulating the binary cross-entropy loss of discriminator. Just see discriminator as a module taking two classes of inputs: genuine and fake. Suppose the discriminator outputs $\hat{y} \in [0, 1]$ for the input $x'$ instead of actual label $y \in \{0, 1\}$ then the binary cross-entropy loss is given by

$$L_{CE} = y \log \hat{y} + (1-y) \log(1-\hat{y})$$ $$\implies L_{CE} = y \log D(x') + (1-y) \log(1 - D(x'))$$

Suppose the input $x'$ is a genuine one $x$ then $y = 1$ and

$$\implies L_{CE} = \log D(x)$$

Suppose the input $x'$ is a fake one $G(z)$ then $y = 0$ and

$$\implies L_{CE} = \log (1-D(G(z)))$$

Since the labels are clear from the input of the discriminator $D$, we can write the binary cross-entropy loss for $2m$ samples $\{x_1, x_2, x_3, \cdots, x_m, z_1, z_2, z_3, \cdots, z_m\}$ as

$$\implies L_{CE}^{2m} = \dfrac{1}{2m} \left( \sum\limits_{i = 1}^{m} \log D(x_i) + \sum\limits_{i = 1}^{m} \log (1-D(G(z_i))) \right)$$

Later, we perform a step (which I am not sure is justified by the law of large numbers or by something else; see 1, 2) in which we equate the sample mean to the actual expectation under the probability distribution, and hence

$$ L_{CE}^{2m} = \dfrac{1}{2} \left( \sum\limits_{i = 1}^{m} \dfrac{1}{m} \log D(x_i) + \sum\limits_{i = 1}^{m} \dfrac{1}{m} \log (1-D(G(z_i))) \right)$$

$$ = \dfrac{1}{2} {\LARGE(} \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))] {\LARGE)}$$

Since removing $\dfrac{1}{2}$ does not matter while optimizing the loss function, the final loss function is given by

$$ L_{D} = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] +\mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$$
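As a minimal sketch of how this is estimated in practice (the discriminator outputs here are random placeholders, not a real network), the per-batch value of $L_D$ is just the mean over the $m$ real and $m$ fake samples:

import torch

m = 64
D_real = torch.rand(m).clamp(1e-6, 1 - 1e-6)   # placeholder for D(x_i), in (0, 1)
D_fake = torch.rand(m).clamp(1e-6, 1 - 1e-6)   # placeholder for D(G(z_i)), in (0, 1)

# Quantity the discriminator maximises; the generator minimises the second term.
L_D = torch.log(D_real).mean() + torch.log(1 - D_fake).mean()
print(L_D)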

",18758,,18758,,9/6/2021 21:59,9/6/2021 21:59,,,,5,,,,CC BY-SA 4.0 29985,1,,,8/3/2021 2:15,,2,124,"

It has been mentioned in the research paper titled Generative Adversarial Nets that the generator needs to maximize the function $\log D(G(z))$ instead of minimizing $\log(1 − D(G(z)))$, since the former provides a stronger gradient than the latter.

$$\min_G \max_D V(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[\log (1 - D(G(z)))]$$ In practice, the above equation may not provide sufficient gradient for $G$ to learn well. Early in learning, when $G$ is poor, $D$ can reject samples with high confidence because they are clearly different from the training data. In this case, $\log(1 −D(G(z)))$ saturates. Rather than training $G$ to minimize $\log(1 −D(G(z)))$ we can train G to maximize $\log D(G(z))$. This objective function results in the same fixed point of the dynamics of $G$ and $D$ but provides much stronger gradients early in learning.

A gradient is a vector containing the partial derivatives of outputs w.r.t inputs. At a particular point, the gradient is a vector of real numbers. These gradients are useful in the training phase by providing direction-related information and the magnitude of step in the opposite direction. This is my understanding regarding gradients.

What is meant by sufficient or strong gradient? Is it the norm of the gradient or some other measure on the gradient vector?

If possible, please show an example of strong and weak gradients with numbers so that I can quickly understand.

",18758,,18758,,8/7/2021 23:29,8/9/2021 14:07,What does it mean by strong or sufficient gradient for training in this context?,,1,2,,,,CC BY-SA 4.0 29989,1,,,8/3/2021 8:15,,1,148,"

I have two questions about the structure of attention modules:

Since I work with imagery I will be talking about using convolutions on feature maps in order to obtain attention maps.

  1. If we have a set of feature maps with dimensions [B, C, H, W] (batch, channel, height, width), why do we transform our feature maps before we calculate their affinity/correlation in attention mechanisms? What makes this better than simply taking the cosine distance between the feature vectors (e.g. resizing the maps to [B, C, HW] and [B, HW, C] and multiplying them together). Aren't the feature maps already in an appropriate feature/embedding space that we can just use them directly instead of transforming them first?

  2. Most of the time, attention mechanisms will take as input some stack of feature maps (F), and will apply 3 transformations on them to essentially produce a "query", "key" and "value". The query and key will be multiplied together to get the affinity/correlation between a given feature vector and all other feature vectors. In computer vision these transformation will typically be performed by the different 1x1 convolutions. My question is, how come we use 3 different 1x1 convolutions? Wouldn't it make more sense to apply the same 1x1 convolution to the input F? My intuition tells me that since we want to transform/project the feature maps F into some embedding/feature space that it would make the most sense if the "query", "key" and "value" were all obtained by using the same transformation. To illustrate what I mean lets pretend we had a 1x1 feature map and we wanted to see how well the pixel correlates with itself. Obviously it should correlate 100% because it is the same pixel. But wouldn't applying two sets of 1x1 convs to the pixel lead to the chance that the pixel would undergo a different transformation and in the end would have a lower correlation than it should?

",46069,,28406,,8/13/2021 23:52,9/8/2022 2:01,"Attention mechanism: Why apply multiple different transformations to obtain query, key, value",,1,3,,,,CC BY-SA 4.0 29991,2,,29983,8/3/2021 10:14,,2,,"

The term "filter" is (usually) a synonym for "kernel" in the context of convolutional neural networks and image processing.

The reason why the kernel_size is specified as $3 \times 3$ and then you see that the actual size of the kernel (aka filter) is 3d is that the depth of the kernel can be automatically inferred from in_channels, the depth of the input to the convolutional layer: the depth of the kernel is exactly equal to the depth of the input to that 2d convolutional layer. This would not necessarily be the case in the 3d convolutional layer.

As a side note, in other contexts, like support vector machines, the word "kernel" refers to something different, but I will not dwell on this topic here.

",2444,,2444,,8/3/2021 10:39,8/3/2021 10:39,,,,0,,,,CC BY-SA 4.0 29996,2,,29963,8/3/2021 13:22,,1,,"

For the case of at least twice differentiable functions, the answer is given by @OmG - you need to look at the eigenvalues of Hessian.

For the 1-dimensional case the picture is rather intuitive:

If the slope of the function is non-decreasing as you move away from the minimum (the function curves upwards), then the function is convex.

For the multidimensional case, any planar section cutting the loss hypersurface needs to give the same picture as the 1-dimensional case. This is satisfied provided that the Hessian matrix is positive semi-definite (positive definite for strict convexity).

If the function is not twice differentiable, the situation becomes more complicated, and I am not aware of any general algorithm.

However, for the case of piecewise-linear function, you can imagine something like $|x|$, or a more general function like: $$ f(x) = \begin{cases} \alpha x & \alpha > 0, x > 0 \\ \beta x & \beta < 0, x < 0 \end{cases} $$

If any section of the loss hypersurface in the vicinity of such a singularity has this form - the function will be convex as well.

",38846,,18758,,8/3/2021 13:38,8/3/2021 13:38,,,,0,,,,CC BY-SA 4.0 29997,1,30006,,8/3/2021 13:23,,0,34,"

I'm new to machine learning and I've been working through a dataset of ~3000 records with ~100 features. I've been hand rolling Python and R scripts to analyse the data. For example, plotting the distribution of each feature to see how normal it is, identify outliers, etc. Another example is plotting heatmaps of the features against themselves to identify strong correlations.

Whilst this has been a useful learning exercise, going forward I suspect there are tools that automate a lot of this data analysis for you, and produce the useful plots and possibly give recommendations on transforms, etc? I've had a search around but don't appear to be finding anything, I guess I'm not using the right terminology to find what I'm looking for.

If any useful open source tools for this kind of thing spring to mind that would be very helpful.

",49015,,,,,8/3/2021 22:58,Data analysis before feeding to ML pipeline,,1,1,,,,CC BY-SA 4.0 29998,1,31937,,8/3/2021 13:24,,1,159,"

I have a segmentation model which outputs a single-channel image (2-class segmentation). I have used the Dice score most of the time, but now higher powers in my team want me to expand the evaluation metrics for the segmentation model (if it's even possible). I have done some research and, so far, I have found mainly that everybody uses the Dice score, and sometimes pixel-to-pixel binary accuracy, but the latter does not seem like the best idea.

If anybody knows something exciting or useful, I'd be glad to hear from them.

",41591,,,,,10/31/2022 1:14,"Aside from dice score, what other good metrics are used to evaluate segmentation models?",,2,0,,,,CC BY-SA 4.0 29999,1,30000,,8/3/2021 13:40,,3,667,"

I came across the phrase "numerical stability" several times, but almost always in the same context.

I encountered this term mostly in the context of the analytical formula for batch normalization.

$$y = \dfrac{x - \mathbb{E}[x]}{\sqrt{Var[x]+\epsilon}}* \gamma + \beta$$

eps – a value added to the denominator for numerical stability. Default: 1e-5

Does the phenomenon of "numerical instability" happen during the training of neural networks? Or is it a general one that occurs in other models too? What is the reason for its occurrence?

",18758,,2444,,8/3/2021 14:09,8/3/2021 14:11,What is numerical stability?,,1,1,,,,CC BY-SA 4.0 30000,2,,29999,8/3/2021 14:11,,4,,"

You can find a definition of "numerical stability" on Wolfram MathWorld:

Numerical stability refers to how a malformed input affects the execution of an algorithm. In a numerically stable algorithm, errors in the input lessen in significance as the algorithm executes, having little effect on the final output. On the other hand, in a numerically unstable algorithm, errors in the input cause a considerably larger error in the final output.

In your example, suppose $Var(x)$ is tiny and gets truncated to zero on the corresponding computing machine. In that case, you will get INF as a result, and it can be problematic to continue the computation.

Therefore, they add a small value $\epsilon$ to the denominator to prevent such a case. As $Var(x) \geqslant 0$, $Var(x) + \epsilon$ will always be greater than zero on any machine with higher precision than $\epsilon$. So, here, this is one way to interpret numerical stability.
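
For a concrete toy illustration (not from the question, just an assumed scenario where the variance has underflowed to zero):

import numpy as np

numerator = np.array([0.1, -0.1])      # x - E[x] for a tiny toy batch
var = 0.0                              # suppose Var(x) got truncated to zero
eps = 1e-5

print(numerator / np.sqrt(var))        # [ inf -inf] plus a divide-by-zero warning
print(numerator / np.sqrt(var + eps))  # [ 31.62... -31.62...] -> finite and usable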

",4446,,,,,8/3/2021 14:11,,,,0,,,,CC BY-SA 4.0 30001,2,,29989,8/3/2021 15:03,,1,,"

I assume you're talking about this design: (image source)

But wouldn't applying two sets of 1x1 convs to the pixel lead to the chance that the pixel would undergo a different transformation and in the end would have a lower correlation than it should?

Yes, that's the point. We are not trying to measure a pixel's correlation with itself. Rather we are trying to allow it to query different related data. We are giving it freedom to change both the data and the queries.

It is true that the space for the queries and the keys is the same - but we shouldn't use the same transformation for both, or else each instance of the attention layer is just trying to fetch its own value! Generally the purpose of an attention layer is to query different parts of the input.

The first half of your question was essentially "why do we have a convolution at all?" and I think this has the same answer: you'd just be able to detect similar pixels, you wouldn't be able to pay attention to noses whenever eyes are detected.

It is also true that you could probably skip the convolution on the h(x) input. It looks like this one is somewhat redundant because the convolutions on h(x) and v(x) apply in series - which makes it a two-layer convolution, not quite the same as a one-layer convolution, but perhaps only one layer is needed.

It is possible that if you removed the conv layer on either the keys or the queries (but not both) the model would learn to generate the keys directly as the features, but this would hinder it because it would be unable to output any data in the values and queries that wasn't part of the keys (or vice versa). Seems silly. Don't do that.
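
To make the three separate transformations concrete, here is a minimal PyTorch sketch of a SAGAN-style self-attention block (the class name, channel sizes and the c/8 reduction are illustrative assumptions, not taken from any particular paper):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Three *different* 1x1 convolutions for query, key and value.
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).reshape(b, -1, h * w)   # (b, c/8, hw)
        k = self.key(x).reshape(b, -1, h * w)     # (b, c/8, hw)
        v = self.value(x).reshape(b, -1, h * w)   # (b, c,   hw)
        attention = F.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (b, hw, hw)
        out = torch.bmm(v, attention.transpose(1, 2))                   # (b, c, hw)
        return out.reshape(b, c, h, w)

# x = torch.rand(1, 64, 16, 16); SelfAttention2d(64)(x).shape -> (1, 64, 16, 16)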

",28406,,,,,8/3/2021 15:03,,,,0,,,,CC BY-SA 4.0 30002,1,30005,,8/3/2021 16:04,,1,97,"

In the official PyTorch documentation there is the following calculation (here):

$$ J^{T} \cdot \vec{v}=\left(\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} \end{array}\right)\left(\begin{array}{c} \frac{\partial l}{\partial y_{1}} \\ \vdots \\ \frac{\partial l}{\partial y_{m}} \end{array}\right)=\left(\begin{array}{c} \frac{\partial l}{\partial x_{1}} \\ \vdots \\ \frac{\partial l}{\partial x_{n}} \end{array}\right) $$

However, I am wondering why the result isn't as follows:

$$ J^{T} \cdot \vec{v}=\left(\begin{array}{ccc} \frac{\partial y_{1}}{\partial x_{1}} & \cdots & \frac{\partial y_{m}}{\partial x_{1}} \\ \vdots & \ddots & \vdots \\ \frac{\partial y_{1}}{\partial x_{n}} & \cdots & \frac{\partial y_{m}}{\partial x_{n}} \end{array}\right)\left(\begin{array}{c} \frac{\partial l}{\partial y_{1}} \\ \vdots \\ \frac{\partial l}{\partial y_{m}} \end{array}\right)=\left(\begin{array}{c} m\frac{\partial l}{\partial x_{1}} \\ \vdots \\ m\frac{\partial l}{\partial x_{n}} \end{array}\right) $$

As the matrix is multiplied by a vector and the $\partial y_{x}$ terms cancel out, it should be $m$ times $\frac{\partial l}{\partial x_{x}}$.

I know that the official PyTorch docs are probably right but I just can't get my head around why.

",21299,,,,,8/3/2021 21:40,Is this calculation of the vector-Jacobian product in the PyTorch documention wrong?,,1,0,,,,CC BY-SA 4.0 30003,2,,22385,8/3/2021 16:20,,0,,"

This image contains a mathematical proof of why recall cannot be greater than precision for all the classes in multi-class classification, and vice versa. Let me know if you have any confusion regarding it. (https://i.stack.imgur.com/vgeHB.jpg)

",49019,,,,,8/3/2021 16:20,,,,2,,,,CC BY-SA 4.0 30005,2,,30002,8/3/2021 21:40,,1,,"

I think your error is assuming that $$\frac{\partial y_1}{\partial x_1} \frac{\partial l}{\partial y_1} = \frac{\partial l}{\partial x_1}.$$ While 'cancelling' derivatives like this works sometimes, it is not true in general, and treating these quantities as fractions is not really correct in the usual formulation of calculus in terms of limits.

If you run through the calculation using the chain rule with Jacobian matrices, you should recover the correct product. If you're not clear on the compositions, write the functions $y_i(x_1, \dots, x_n)$ and $l(y_1, \dots, y_m)$. If it helps define $y(x_1, \dots, x_n) = (y_1, \dots, y_m)$ then use the chain rule to compute $l(y(x_1, \dots, x_n))$. If it would help to see the calculation in more detail, do let me know. Otherwise, it's a good exercise.
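
For reference, written out with the quantities already defined in the question, the chain rule gives, for each $j$,

$$\frac{\partial l}{\partial x_j} = \sum_{i=1}^{m} \frac{\partial l}{\partial y_i}\frac{\partial y_i}{\partial x_j},$$

which is exactly the $j$-th entry of $J^{T} \cdot \vec{v}$: each term of the sum involves a different partial derivative $\frac{\partial y_i}{\partial x_j}$, so no factor of $m$ appears.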

",44413,,,,,8/3/2021 21:40,,,,0,,,,CC BY-SA 4.0 30006,2,,29997,8/3/2021 22:58,,1,,"

This is a great question, because indeed there are many tools out there to make this part of the process faster. I usually stick to the following two:

You can also search for alternatives. Hopefully, the community can help complement my answer.

",46594,,,,,8/3/2021 22:58,,,,0,,,,CC BY-SA 4.0 30007,1,,,8/3/2021 22:59,,1,29,"

Suppose the following is the neural network I want to train, and assume that there is a batch normalization layer for each layer of the neural network.

My focus is on the activity of the batch normalization layer of the hidden layer. Assume that the mini-batch size is $3$. The following are the first four outputs of the hidden layer without using the batch normalization layer.

$$\left(\begin{array}{c} 4 \\ 5 \\ 2 \\ 1 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 4 \\ 6 \\ 0 \\ \end{array}\right), \left(\begin{array}{c} 1 \\ 4 \\ 7 \\ 9 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 5 \\ 7 \\ 9 \\ \end{array}\right)$$

So the first mini-batch is $\left(\begin{array}{c} 4 \\ 5 \\ 2 \\ 1 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 4 \\ 6 \\ 0 \\ \end{array}\right), \left(\begin{array}{c} 1 \\ 4 \\ 7 \\ 9 \\ \end{array}\right)$ and has the following statistics:

mean = $\mathbb{E}[X] = \left(\begin{array}{c} \dfrac{8}{3} \\ \dfrac{13}{3} \\ 5 \\ \dfrac{10}{3} \\ \end{array}\right), Var(X) = \left(\begin{array}{c} \dfrac{14}{9} \\ \dfrac{2}{9} \\ \dfrac{14}{3} \\ \dfrac{146}{9} \\ \end{array}\right)$

So, I am thinking that batch normalization can be applied from the fourth iteration onwards, since a mini-batch of outputs of the hidden layer is available, and so the fourth vector $\left(\begin{array}{c} 3 \\ 5 \\ 7 \\ 9 \\ \end{array}\right)$ will be mapped to $\left(\begin{array}{c} 0.26 \\ 1.41 \\ 0.92 \\ 1.4 \\ \end{array}\right)$

Is it true that if we apply batch normalization the outputs of neural network are $$\left(\begin{array}{c} 4 \\ 5 \\ 2 \\ 1 \\ \end{array}\right), \left(\begin{array}{c} 3 \\ 4 \\ 6 \\ 0 \\ \end{array}\right), \left(\begin{array}{c} 1 \\ 4 \\ 7 \\ 9 \\ \end{array}\right),\left(\begin{array}{c} 0.26 \\ 1.41 \\ 0.92 \\ 1.4 \\ \end{array}\right)$$

I am thinking that I may be totally wrong, because the batch normalization layer is not active until the completion of the first mini-batch, and also because I am using the stats of the first mini-batch on the outputs of the next mini-batch. If I am wrong, I want to know what is wrong, or to see a simple numerical example that runs the batch normalization layer of a neural network for at least mini-batch size + 1 iterations.

",18758,,18758,,8/4/2021 8:32,8/4/2021 8:32,Example for batch normalization in an aritifical neural network,,0,0,,,,CC BY-SA 4.0 30008,1,,,8/3/2021 23:40,,0,343,"

Batch normalization in neural networks uses $\beta$ and $\gamma$ for scaling. The analytical formula is given by

$$\dfrac{x - \mathbb{E}[x]}{\sqrt{Var(X)}}* \gamma + \beta$$

Conditional batch normalization uses multi-layer perceptrons to calculate the values of $\gamma$ and $\beta$ instead of giving fixed values to them.

Is that the only difference between them, or is there any other fundamental difference between them in terms of functionality?

",18758,,,,,8/3/2021 23:40,Is there any difference between conditional batch normalization and batch normalization except the usage of MLPs for predicting $\beta$ and $\gamma$?,,0,2,,,,CC BY-SA 4.0 30011,1,30021,,8/4/2021 9:37,,2,62,"

I am trying to understand the concept of model-free and model-based approaches. As far as I understand, having a model of the environment does not mean that an RL agent has to be model-based. It is about the policy. However, if we can model the environment, why should we want to employ a model-free algorithm? Isn't it better to have a model and expectation about the next reward and state? If you have a better understanding of all these, can you explain them to me as well?

",49031,,2444,,8/4/2021 9:53,8/4/2021 17:53,"If we can model the environment, wouldn't be meaningless to use a model-free algorithm?",,1,2,,,,CC BY-SA 4.0 30012,1,,,8/4/2021 11:26,,2,110,"

Most of the practical research in AI that includes neural networks deals with higher dimensional tensors. It is easy to imagine tensors up to three dimensions.

When I ask the question How do researchers imagine vector space? on Mathematics Stack exchange, you can read the responses

Response #1:

I personally view vector spaces as just another kind of algebraic object that we sometimes do analysis with, along the lines of groups, rings, and fields.

Response #2

In research mathematics, linear algebra is used mostly as a fundamental tool, often in settings where there is no geometric visualization available. In those settings, it is used in the same way that basic algebra is, to do straightforward calculations.

Response #3:

Thinking of vectors as tuples or arrows or points and arrows... is rather limiting. I generally do not bother imagining anything visual or specific about them beyond what is required by the definition... they are objects that I can add to one another and that I can "stretch" and "reverse" by multiplying by a scalar from the scalar field.

In short, mathematicians generally treat vectors as objects in a vector space rather than via popular academic/beginner imaginations, such as points or arrows in space.

A similar question on our site also recommends not to imagine higher dimensions and to treat dimensions as degrees of freedom.

I know only two kinds of treatments regarding tensors:

  1. Imagining at most up to three-dimensional tensors spatially.

  2. Treating tensors as objects having a shape attribute, which looks like $n_1 \times n_2 \times n_3 \times \cdots \times n_d$

Most of the time I prefer the first approach, but I find it difficult when I try to understand code (programs) that uses higher-dimensional tensors. I am not used to the second approach, although I think it is capable enough for understanding all the required tasks on tensors.

I want to know:

  • How do researchers generally treat tensors?
  • If it is the second approach I mentioned: Is it possible to understand all the high dimensional tensor-related tasks?
",18758,,18758,,10/21/2021 1:39,1/17/2022 23:11,Do researchers generally treat tensors just as mathematical objects with certain shape?,,1,3,,,,CC BY-SA 4.0 30015,2,,28888,8/4/2021 12:28,,2,,"

Details matter, and it is possible that your problem is best solved using classic control (solving the state equations) or operations research style optimisation. However, RL is also a good choice because it can be made to learn a controller that is not brittle when things go wrong.

One thing you will have to accept with RL is that the constraints will be soft constraints, even if you penalise them heavily. That is, you can expect that the internal temperature could drift outside of bounds. It definitely will during learning. A major design concern when framing the problem for reinforcement learning is how to weight the different rewards that represent your goals. You can weight your strict constraints higher, but at least initially they need to be low enough that the cost saving is not completely swamped.

I would suggest that your worst constraint failure penalty is slightly larger than the highest possible electricity cost for a single time step. That would mean the agent is always incentivised to spend money if it has to, as opposed to break the constraints, whilst still being able to explore what happens when it does break the constraints without having to cope with predicting large numbers.

There are lots of types of RL. Some are better at different kinds of problems. I would characterise your problem as you have described it as:

  • Episodic - but only for convenience of describing the problem. In fact, your agent with a 24-hour episode will be incentivised to allow the internal temperature to drop at the end of the 24 hours to save money, when it does not care what might happen immediately afterwards. Depending on the price of electricity at that point, it could well be more optimal to spend more. This may only be a small difference from truly optimal behaviour, but you might play to a strong point of RL by re-framing the problem as a continuing one (where mixed-integer linear optimisation may be harder to frame).

  • Continuous state space, with low dimensionality.

    • If prices are known in advance, you may want to augment the state space so that the agent knows how long it has at current price and whether the next price will be higher or lower. Alternatively, if they always follow the same time schedule, you could add the current time as a state variable. Either way, that allows the agent to take advantage of the temperature bounds. For instance, it could load up on cheap heating before a price hike, or allow the temperature to drop to minimum acceptable if cheaper electricity is about to become available.
  • Large, possibly continuous action space. You might want to consider approximating this to e.g. 21 actions (0 W, 100 W . . . 2000 W) as optimising this simpler variant will be easier to code (a DQN could do it), whilst it may not significantly affect optimality of any solution.

I don't think you could simplify your state space in order to use Q tables. So the DQN agent is probably the simplest that you could use here, provided you are willing to discretise the action space.
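
As a minimal sketch of what that discretisation could look like (the 21 levels are just the example values mentioned above in this answer, nothing from your actual spec):

import numpy as np

# 21 discrete power levels: 0 W, 100 W, ..., 2000 W.
POWER_LEVELS = np.arange(0, 2001, 100)

def action_to_power(action_index):
    """Map a discrete DQN action index (0..20) to a power setting in watts."""
    return POWER_LEVELS[action_index]

# The DQN then outputs one Q-value per discrete action; the greedy choice is:
# best_action = int(np.argmax(q_values)); power = action_to_power(best_action)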

If you don't want to discretise the action space, then you will want to use some form of policy gradient approach. This will include a policy network that takes current state as input and then output a distribution of power level choices - e.g. a mean and standard deviation for the power choice. In production use you may be able to set the standard deviation to zero and use the mean as the action choice. A method like A3C can be used to do train such an agent.

I suggest that you discretise the actions and use a DQN-based agent to learn an approximate optimal policy for your environment. If that returns a promising result, you could either stop there or try to refine it further using continuous action space and A3C.

Also, you will want to practice using DQN on a simpler problem before diving in to your main project. Two reasonable learning problems here might be Cartpole-v1 and LunarLander-v2 which also has a continuous actions variant. Learning enough about setting up relevant RL methods to solve these toy problems should put you on a good footing to handle your more complex problem.

Keras documentation includes an example DQN for Atari Breakout, that you may be able to use as the basis for building your own code.

",1847,,1847,,8/5/2021 9:43,8/5/2021 9:43,,,,16,,,,CC BY-SA 4.0 30016,2,,30012,8/4/2021 12:48,,3,,"

I would say they are treated as multidimensional arrays of numbers. They are not visualized in their actual dimension. Sometimes small ones will be visualized when someone is trying to explain a concept that requires it.

You may have, for example, a variable uint8 training_batch[100][200][400][3];. This is a batch of 100 RGB images with 200x400 pixels in each image. A pixel is an array of [3] numbers; an image is an array of [200][400] pixels; a batch is an array of [100] images. There's no more structure than that. You don't have to try to imagine a 4D array of numbers. (In this particular case you could easily imagine an array of images though)

What is useful to imagine is what each dimension means. The first dimension is the image within the batch. The 2nd and 3rd dimensions are the pixel position in the image. The 4th dimension is the R/G/B channel.

If I reduce a tensor along a dimension, I wouldn't think of it as flattening, but rather as using up a dimension. If I want to compute the average colour of each image, I reduce the 2nd and 3rd dimensions and get another tensor of shape [100][3]. Now there's no width or height dimension anymore, just image and channel.

If you reshape the vector to [100][240000] so you can compute a matrix multiplication for a dense layer, now the 1st dimension is still the batch number, and the 2nd dimension is essentially meaningless but you have 240000 arbitrarily-indexed numbers per image. You could also reshape it to [100][80000][3] and have 80000 arbitrarily-indexed pixels, but still, be able to use the channel number.
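
For instance, the reductions and reshapes described above might look like this in NumPy (a toy sketch with random data):

import numpy as np

# A batch of 100 RGB images of 200x400 pixels: shape (batch, height, width, channel).
batch = np.random.randint(0, 256, size=(100, 200, 400, 3), dtype=np.uint8)

# Average colour per image: reduce ("use up") the height and width dimensions.
mean_colour = batch.mean(axis=(1, 2))      # shape (100, 3)

# Flatten each image for a dense layer: the 2nd dimension is now just an index.
flat = batch.reshape(100, 200 * 400 * 3)   # shape (100, 240000)

# Keep the channel dimension but merge the pixel positions.
pixels = batch.reshape(100, 200 * 400, 3)  # shape (100, 80000, 3)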

Disclaimer: I'm not actually a researcher.

",28406,,18758,,1/17/2022 23:11,1/17/2022 23:11,,,,0,,,,CC BY-SA 4.0 30018,2,,29982,8/4/2021 13:03,,1,,"

In a few words, we add $1$ to account for the initial position of the kernel.

You can easily see this if you let $s = 1$ (unit stride) and $p = 0$ (i.e. no padding), so your formula simplifies to

\begin{align} W_2 = (W_1 - F) + 1, \label{1}\tag{1} \end{align}

So, in this simplified case and, for simplicity, assuming squared inputs and kernels, the width (or height) of the output of the convolutional layer is the number of steps that we slide the kernel horizontally (or vertically, respectively), for example, starting from the top left of the input, plus the initial position of the kernel. In this case, $(W_1 - F)$ is the number of times we slide the kernel horizontally (or vertically) and $+1$ is to account for the initial position of the kernel.

Here's an animation of this simplified case for $W_1 = 4$ and $F = 3$ (the green matrix is the output, while the blue one is the input).

So, if we apply the formula \ref{1}, we should get $W_2 = 2$, as we can see from the animation above, which produces a $2 \times 2$ matrix. In fact, $(4 - 3) + 1 = 2 = W_2$. You can see from the animation that we slide the kernel only once in each of the axes, but the shape of the output matrix is $2 \times 2$ because we compute the dot product between the kernel and the submatrix of the input that corresponds to the initial position of the kernel.
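
As a quick sanity check, here is a tiny Python sketch of the general formula with stride $s$ and padding $p$ put back in (the function name is just for illustration):

def conv_output_size(w1, f, p=0, s=1):
    """Output width/height of a convolution: floor((W1 - F + 2P) / S) + 1."""
    return (w1 - f + 2 * p) // s + 1

print(conv_output_size(4, 3))  # 2, matching the 2x2 output of the example above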

A detailed explanation of this (section 2.4, relationship 6) and other simpler formulas to compute the size of the output of a convolutional layer can be found in the report A guide to convolution arithmetic for deep learning, which has also many images that illustrate the concepts (you can find animations here too).

",2444,,2444,,10/8/2021 12:15,10/8/2021 12:15,,,,0,,,,CC BY-SA 4.0 30019,1,30086,,8/4/2021 14:21,,1,337,"

Suppose we have a two player game like Tic Tac Toe where the two players take turns to play their moves. It is my understanding that in the game tree that MCTS builds, consecutive levels in the tree correspond to different player's turns.

So, for instance, in the root node it is Player1's turn to play, in the children of the root node it is Player2's turn to play, in the children of those children it is Player1's turn again, etc.

Is that correct?

If so, is it really prudent to treat nodes where it's the enemy's turn to play the same as those where we choose the next action (i.e. by averaging rollout results in backpropagation)? Since it's not us choosing the next action but the enemy, shouldn't we "pick" the minimum "return" (like in minimax) in those cases, instead of the average like we do for nodes where we get to pick the next action?

By picking I mean to only count the win ratio of that child node (i.e. the minimum win ratio).

I suspect I am missing something (e.g. that might mess up exploration vs exploitation with UCT) but I can't put my finger on it.

What do you guys think about this?

Edit: Maybe a solution to this is only considering good moves for the opponent? But then again, how do we define "good"? Heuristics?

",46020,,46020,,8/4/2021 16:39,8/19/2021 15:06,How does one handle different player turns in MCTS?,,2,0,,,,CC BY-SA 4.0 30021,2,,30011,8/4/2021 17:00,,2,,"

However, if we can model the environment, why should we want to employ a model-free algorithm?

Depends what you mean by "model the environment". There are two kinds of model:

  • Distribution model, which provides full access to a function like $p(r,s'|s,a)$, the probability of observing reward $r$ and next state $s'$ given starting state $s$ and taking action $a$.

  • Sample model, which can process a single step within the environment on demand. That should provide access to a function like $\text{step}(s,a)$ that returns a single $(r, s')$ pair based on running the environment forward from $(s,a)$.

Which kind of model is available makes a difference to which kinds of model-based reinforcement-learning approaches will work. For instance a distribution model is required for dynamic programming, but MCTS may only need a sample model.

A distribution model can be converted easily to a sample model, but the other way around is only possible approximately (by taking lots of samples).
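
To make the distinction concrete, here is a minimal sketch of the two interfaces (the function names and the toy numbers are purely illustrative):

import random

def distribution_model(state, action):
    """Distribution model: return p(r, s' | s, a) as a dict of (reward, next_state) -> probability."""
    return {(1.0, "s1"): 0.7, (0.0, "s2"): 0.3}   # toy numbers

def sample_model(state, action):
    """Sample model: return a single (reward, next_state) pair sampled from the dynamics."""
    outcomes = distribution_model(state, action)  # converting distribution -> sample is the easy direction
    pairs, probs = zip(*outcomes.items())
    return random.choices(pairs, weights=probs, k=1)[0]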

Isn't it better to have a model and expectation about the next reward and state?

All things being equal, then yes having access to and using a model allows for better estimates, and opens up possibilities for various types of planning algorithm.

However, even if you can build a model of an environment, there are costs:

  • Writing a distribution model can be hard theoretically - it is often far easier to simulate an environment for a single step forward than calculate probability distributions for all outcomes of the same step.

  • The need for random access to arbitrary states can be expensive. A simulator that only needs to run forward from a single reset to a start state may run much more efficiently than one that needs to be set to multiple places in a tree of possibilities arbitrarily.

These costs are different for different environments. Setting a game like chess or go to arbitrary states is relatively cheap compared to a modern computer game or something that simulates a real-world physical system with lots of detail. Partially-observable states can be a major problem in both cases, since any random access model would need to account for all possible "true" states that lie behind whatever the agent is processing.

So basically, at different levels of environment complexity, either the development cost of making a full model, or the evaluation cost of setting arbitrary states can make it more effective to use a model-free approach.

Agents that learn to build their own approximate models of the environment are of great interest, as these potentially address the issue by getting the best of both worlds. Or possibly they could be tunable to use model-free vs model-based approaches in the most efficient manner depending on the relative costs and accuracy of each approach.

",1847,,1847,,8/4/2021 17:53,8/4/2021 17:53,,,,9,,,,CC BY-SA 4.0 30024,1,,,8/4/2021 23:42,,0,77,"

In probability, we have two types of probability functions: unconditional probability $p(x)$ and conditional probability $p(x | y)$. Both are fundamentally different and the latter can be obtained by the following equation

$$p(x|y) = \dfrac{p(x, y)}{p(y)} \text{ provided } p(y) \ne 0$$

I never heard of formal definition for conditioning except for conditional probability function.

But in case of neural networks, I came across the notion of conditioning.

$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x|y)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z|y)))]$$

Since the neural network $D$ is intended to implement a probability function, we can at least think about conditioning on an input. But the neural network $G$ is not intended to implement a probability function. $G$ is intended to provide data samples by learning an underlying probability distribution, and its output is not in the range $[0, 1]$.

Does $G$ obey the laws of probability? If yes, how, since its output is not restricted to $[0, 1]$? If no, then why the authors use the notation of conditional probability for $G$ also?

",18758,,18758,,9/15/2021 2:31,9/15/2021 2:31,Does generator in conditonal GAN obey probability laws?,,1,4,,,,CC BY-SA 4.0 30025,1,,,8/4/2021 23:56,,0,49,"

In the research paper titled Conditional Generative Adversarial Nets by Mehdi Mirza and Simon Osindero, there is a notion of conditioning a neural network on a class label.

It is mentioned in the abstract that we need to simply feed extra input $y$ to the generator and discriminator of an unconditional GAN.

Generative Adversarial Nets were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, $y$, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.

So, I cannot see whether there is any special treatment for input $y$.

If there is no special treatment for the data $y$, then why do they call $y$ a condition and follow the notation of conditional probability such as $G(z|y), D(x|y)$ instead of $G(z,y), D(x,y)$?

If there is a special treatment of the input $y$, then what is special about it? Don't they pass $y$ to the neural networks in the same way as $x$?

",18758,,2444,,5/26/2022 10:08,5/26/2022 10:08,Is there any difference between 'input' and 'conditional input' in the case of neural networks?,,0,4,,,,CC BY-SA 4.0 30030,2,,30024,8/5/2021 12:07,,1,,"

I don't see where it's implied that G is a probability distribution. G is a function, whose output conditioned on one variable has a probability distribution, but it isn't one.

z is random noise which is G's source of randomness. y is something that isn't random. We call G over and over with the same y and different random z's and look at the distribution of the output.

For example, here are some outputs of one possible G(z,y), for different z, when y=3. z is not displayed, only G(z,y). This G outputs one digit.

2262662626626262626262626626262262622226626622626662626666262626262626262

These digits have a probability distribution. About half of them are 2 and about half of them are 6. P(G(z)=2|y=3) = 0.479 and P(G(z)=6|y=3) = 0.521 (approximately). Even though neither 2 nor 6 is a valid probability.

",28406,,,,,8/5/2021 12:07,,,,2,,,,CC BY-SA 4.0 30032,2,,30019,8/5/2021 12:51,,1,,"

The original (vanilla) MCTS uses random rollouts. In some games this is enough to produce a strong agent. However, in most games, using a heuristic that finds the opponent's likely moves makes stronger agents. There is another line of practice that uses Opponent Modeling to predict the opponent's moves. That is important in games where you have several opponent "types" or when an opponent can go for different goals.

From my experience, a good heuristic can greatly improve the agent. I have implemented UCT agents for Spades (the card game). I made a vanilla UCT and one that uses a different (simpler) agent as heuristic. The second UCT is stronger.

Picture from wiki:MCTS

The four phases of MCTS:

Selection: Start from root R and select successive child nodes until a leaf node L is reached. The root is the current game state and a leaf is any node that has a potential child from which no simulation (playout) has yet been initiated. The section below says more about a way of biasing choice of child nodes that lets the game tree expand towards the most promising moves, which is the essence of Monte Carlo tree search.

Expansion: Unless L ends the game decisively (e.g. win/loss/draw) for either player, create one (or more) child nodes and choose node C from one of them. Child nodes are any valid moves from the game position defined by L.

Simulation: Complete one random playout from node C. This step is sometimes also called playout or rollout. A playout may be as simple as choosing uniform random moves until the game is decided (for example in chess, the game is won, lost, or drawn).

Backpropagation: Use the result of the playout to update information in the nodes on the path from C to R.

",43351,,43351,,8/7/2021 20:58,8/7/2021 20:58,,,,8,,,,CC BY-SA 4.0 30033,1,,,8/5/2021 14:44,,0,60,"

I'm working on a project where the dataset contains time series of three classes, depending on the shape of the series. I want to learn the representations of these series as vectors, so naturally I use AutoEncoder for the task (precisely, I use LSTM-AutoEncoder to better handle the sequential data).

My question is: should I train one model for all classes or one model for each class? If possible, could you also point out what are the pros and cons of each approach? One thing that worries me about the latter approach is that the AE will simply memorize the data without any learning (again, would that be a concern?)

Thank you very much in advance!

Sincerely,

",31022,,,,,8/5/2021 14:44,Train separate AutoEncoder's on each class or one AE for all classes to learn features?,,0,2,,,,CC BY-SA 4.0 30035,1,,,8/6/2021 0:48,,1,876,"

I found both the terms "objective function" and "value function" used in the same context.

Context #1: In the paper titled Generative Adversarial Nets by Ian J. Goodfellow et al.

We simultaneously train G to minimize $\log(1 −D(G(z)))$. In other words, $D$ and $G$ play the following two-player minimax game with value function $V (G,D)$:

$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$$

Context #2: In the paper titled Conditional Generative Adversarial Nets by Mehdi Mirza et al.

The objective function of a two-player minimax game would be as

$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x|y)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z|y)))]$$

In fact, the second paper also repeated context #1, i.e., it used the term "value function" in another place.

We can observe that objective function is a function which we want to optimize

The objective function is the most general term that can be used to refer to a cost (or loss) function, to a utility function, or to a fitness function, so, depending on the problem, you either want to minimize or maximize the objective function. The term objective is a synonym for goal.

Since the generator or discriminator has to perform optimization, it is reasonable to use the term "objective function" in this context.

But what is the definition for the value function and how is it different from the objective function in this context?

",18758,,18758,,11/5/2021 22:50,11/5/2021 22:50,Is there any difference between an objective function and a value function?,,1,3,,,,CC BY-SA 4.0 30036,1,30054,,8/6/2021 1:03,,1,166,"

There are plenty of research papers, especially in deep learning, that are present only on arXiv and have a large number of citations. I cannot find them in journals as peer-reviewed publications.

For example, if I search for Conditional Generative Adversarial Nets, then I can find only an arXiv pre-print, which has been cited by 5722 works.

This is not the only such paper; I personally found a lot of papers available as pre-prints only, with no journal/conference affiliation. Many of these research papers are at least 3 years old.

Is it solely due to the will of the authors, or is there any other reason for this phenomenon of not getting published, even though the papers are widely accepted, especially in the domain of deep learning?

",18758,,,,,8/6/2021 21:45,Why many deep learning research papers continue to be in arXiv?,,1,1,,,,CC BY-SA 4.0 30037,1,30040,,8/6/2021 5:13,,5,1084,"

I've been studying geometry and linear algebra for months with the goal of building neural networks. But now I'm reading that perceptrons require fitting curves, and curves are not expressed as linear functions. So, I might need to study differential geometry and calculus to build good fitting curves in perceptrons.

I already know how to code and was hoping to get my hands dirty by coding a few neural networks. But should I study calculus and differential geometry before coding?

From this video, I understand that the least squares approximation can be used to fit a curve through a set of points, so maybe linear algebra is enough for building good neural networks?

",48881,,19524,,8/6/2021 23:14,9/4/2021 18:31,Are calculus and differential geometry required for building neural networks?,,2,4,,,,CC BY-SA 4.0 30038,1,,,8/6/2021 6:10,,0,92,"

I was reading this research paper titled 'Image Style Transfer using Convolutional Neural Networks' which as the title suggests was based on Neural Style Transfer. I came across this line which didn't make immediate sense to me.

Here's how it went -

We reconstruct the input image from layers ‘conv1_2’ (a), ‘conv2_2’ (b), ‘conv3_2’ (c), ‘conv4_2’ (d) and ‘conv5_2’ (e) of the original VGG-Network. We find that reconstruction from lower layers is almost perfect (a–c). In higher layers of the network, detailed pixel information is lost while the high-level content of the image is preserved (d,e).

The line that is italicised: why does that happen?

",49078,,49078,,8/6/2021 9:49,9/6/2021 8:09,Why do we lose detail of an image as we go deeper into a ConvNet?,,2,6,,,,CC BY-SA 4.0 30040,2,,30037,8/6/2021 9:21,,13,,"

Neural networks are essentially just repeated matrix multiplications and applications of an activation function, so you really don't need a great deal of linear algebra to construct a simple neural network — if you understand how to multiply matrices, that's probably sufficient.
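
For example, the forward pass of a tiny two-layer network is just this (a NumPy sketch with made-up sizes, biases omitted):

import numpy as np

def relu(x):
    return np.maximum(0, x)

x = np.random.rand(4)       # input with 4 features
W1 = np.random.rand(8, 4)   # weights of a hidden layer with 8 units
W2 = np.random.rand(3, 8)   # weights of an output layer with 3 units

hidden = relu(W1 @ x)       # matrix multiplication + activation
output = W2 @ hidden        # another matrix multiplication
print(output.shape)         # (3,)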

The harder bit is the training process which is typically done through backpropagation. You need a bit of calculus, but differential geometry is overkill. There are some interesting topics in differential geometry for machine learning, but it's far beyond what is needed to implement backpropagation.

To understand backpropagation you just need to know about the gradient of a function and what this means intuitively; you should also have a good knowledge of the chain rule. That's really all you need, and any course on "multivariable calculus" or something similar would give you more than enough to get started.

Of course, it never hurts to know more, but fortunately neural networks are simple enough that you don't need to struggle for years before you can implement a basic neural network; try to get started as soon as you have the basics, and learn the rest as you go.

",44413,,,,,8/6/2021 9:21,,,,0,,,,CC BY-SA 4.0 30044,2,,28594,8/6/2021 10:39,,2,,"

In my experience, knowledge of any particular programming language does not matter. What matters is that you can quickly pick up the basics of a given language.

In my professional work I have been programming in Scala, Java, Groovy, and now Lisp; I didn't really know any of these languages before working with them (except for Java). But I have been able to pick up a working knowledge of them due to general familiarity with programming (I have been programming in a variety of languages for the past 35 years).

I would assume that knowledge of a specific language becomes relevant if you are actually working on the tools themselves, where advanced proficiency would be required. For applications using existing libraries this is generally not necessary. As long as you can work with the language, and are able to diagnose why something didn't work, then you should be fine.

Programming concepts are in my view far more important than a specific language.

",2193,,,,,8/6/2021 10:39,,,,0,,,,CC BY-SA 4.0 30045,1,,,8/6/2021 10:48,,2,90,"

How does one approach proposing AI to management? This is something I have struggled with for a long time. I want to implement AI toward a specific problem in my place of work. My supervisors are generally willing to listen; but they want to know how the algorithm(s) is going to work. They are not programmers. My tendency is to write out the math and step through it. However, most of them don't want to do that because they have a limited amount of time to sit there and listen. On top of that, some of these algorithms can get somewhat complex.

Let's take a simple neural network, for example; how would you explain the way it works without diving into the math?

",20271,,1671,,8/10/2021 21:43,9/10/2021 8:00,Explaining AI to Non-Technical Individuals,,2,6,,,,CC BY-SA 4.0 30047,2,,30038,8/6/2021 11:21,,1,,"

The point of a convnet, or many kinds of neural networks in general, is to go from a lot of data down to a small piece of data. In classification tasks, for example, the input is all the pixels that make up a picture of a house, and the output is just the word "house" (or rather a number representing the word "house").

Obviously this process loses information. If you go from the word "house", back to a picture (i.e. you tell it "draw a house") you're probably going to get a completely different house!

In the style transfer task we have many numbers to describe the house picture with, not just the word "house", but we still have less than the full pixel data. Imagine that the intermediate representation represents something like "yellow wooden house with three windows and one window above and a red brick basement with 4 windows and the house is drawn at a 30 degree angle and there's a pink house to the left with two small windows below and one window above and a red roof is visible behind and above the pink house and ....."

If you had that representation, you could try to draw the original picture again, and the more information you have, the more accurately you can draw it. Early layers of the convnet contain information like "there's a vertical line at pixel coordinates 123,456" and later layers contain information like "there's a yellow house at pixel coordinates 123,456", and if you get to dense layers, they may just say "there's a yellow house".

",28406,,,,,8/6/2021 11:21,,,,0,,,,CC BY-SA 4.0 30049,1,,,8/6/2021 11:35,,0,117,"

I am thinking about a project and have a few questions before I accept it. Would be grateful I anyone experienced of you could give me some advice.

In the project, I have been given a (rather small) data set with 30,000 text documents, which are labeled with 0 and 1. I want to train and evaluate (with respect to accuracy) a BERT and an XLNet model.

Can you give me some rough estimates for the following questions?:

  1. How much computing power do I need for this task, i.e. can I simply use my private laptop for this or do I need a special CPU/GPU for it?
  2. So far, I have just worked with classical machine learning models (e.g. random forests, SVMs, etc.). I am not experienced with deep learning architectures yet. How difficult would it be to implement a BERT or XLNet model with my own data set, having no experience with BERT or XLNet yet? I.e., how much code would I have to develop by myself? And would I need a deep understanding of it, or would it be sufficient to follow an online tutorial and basically copy the code from there? Many thanks.
",33511,,,,,1/8/2022 8:02,Training and Evaluating BERT and XLNET,,1,0,,2/14/2022 16:36,,CC BY-SA 4.0 30050,1,,,8/6/2021 16:33,,2,56,"

I am trying to create an environment for RL where the size of my input (observation space) is not fixed. As a way around it, I thought about padding the size to a maximum value and then assigning "null" to those values that do not exist. Now, these "null" values are meaningful in a certain sense, because they are related to the shape and size of the input.

If these "null" values were zeros, would neural networks be able to distinguish between these zeros (nulls) and the zeros that are actually part of the picture? If that's not the case, should I assign a different number for the padding? What should I be mindful of in these scenarios? Is there any example I can look at with a similar situation?

",33488,,2444,,8/6/2021 23:31,8/6/2021 23:31,How do neural networks deal with inputs of different sizes that are padded in order to have them of the same size?,,0,2,,,,CC BY-SA 4.0 30053,1,,,8/6/2021 19:44,,1,1394,"

I am working on a DQN project with Pytorch, where I should choose multiple discrete actions, each in a range, say, (0, 15). I am wondering how I can model it, such that the sum of actions is 15. Does anyone know how to model that?

",49095,,2444,,9/11/2021 20:31,10/6/2022 22:06,Deep Q-Learning with multiple discrete actions,,1,6,,,,CC BY-SA 4.0 30054,2,,30036,8/6/2021 21:15,,2,,"

Simple answer: the peer-review process can be slow and not all papers deserve to be accepted at (major) conferences and journals (e.g. some are just tutorials or simple or more complete descriptions of previously published ideas, so they do not propose anything novel, which is required for being accepted at major conferences or journals).

Why is peer review slow? Because multiple people need to read the paper carefully, understand it (so they may need to read about the topic, if they are not fully familiar with it), analyze it, and maybe read (previously published) related literature to know if the proposals are really novel or deserve to be published (if you're proposing a VAE, you will not go anywhere, as VAEs have already been proposed and accepted at ICLR 2014, i.e. no progress and novelty!).

I found several papers on arXiv to be useful, but I also found low-quality ones. Generally, if you can cite a paper that has been accepted at a major conference or journal in your field, you should do that, rather than citing a pre-print. If a paper has been peer-reviewed, then it's been scrutinized in multiple senses (although some still contain some typos), so it's very unlikely to find a paper that has been accepted, for example, at NeurIPS that is poor (e.g. the GAN or transformer papers were accepted there). You will find so many useful papers published there or in conferences/journals like ICLR, JMLR, and so on. You should also notice that some papers that are first published on arXiv might be later presented/accepted at some conference or journal (example: VAE), so the citation may redirect you to the pre-print version, but this does not mean that there isn't a version that was later published in some journal. People may first publish on arXiv so that other researchers can have early access to the ideas, and so progress can be faster, but, again, there are many papers on arXiv that will never be accepted at any (major) conference/journal.

(By the way, although this question was asked in the context of deep learning (hence Artificial Intelligence), so it can be considered on-topic here, you probably would find more useful answers at Academica Stack Exchange, because this topic is not unique to research in AI (e.g. there's also bioRxiv, which has a similar role to arXiv, i.e. proliferation of ideas, etc., but for biology). In fact, you can already find some answers there that address this question, for example, this one.)

",2444,,2444,,8/6/2021 21:45,8/6/2021 21:45,,,,0,,,,CC BY-SA 4.0 30055,1,,,8/6/2021 23:29,,1,153,"

Although the GAN is widely used due to its capability, there were generative models before the GAN which are based on probabilistic graphical models such as Bayesian networks, Markov networks, etc.

It is now a well-known fact that GANs excel at image generation tasks. But I am not sure whether the generative models that were invented before GANs were used for image generation or not.

Is it true that other generative models were used for image generation before the proposal of the GAN in 2014?

",18758,,2444,,8/9/2021 1:00,8/9/2021 20:17,Is image generation not existent before generative adversarial networks?,,1,8,,,,CC BY-SA 4.0 30056,2,,30035,8/6/2021 23:59,,1,,"

The value function may be used in the GAN paper because GANs are inspired by game theory, where terms like utility, utility function and value function (just like in reinforcement learning) are used (the first two for sure, but I am not sure about the usage of the term value function in game theory, as I am far from an expert in game theory). If you want to know more about the usage of the term value function, this Wikipedia article could be useful (or maybe make things more confusing).

Having said that, it seems to me that the usage of the term "objective function" in the conditional GAN paper is a bit sloppy. They probably meant the optimization problem.

However, it's also true that the notation used by the original authors of the GAN can also be confusing. They wrote

$$\min_G \max_DV(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))] \label{1}\tag{1}$$

Here, $V(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$, so they could have written \ref{1} as follows

$$\min_G \max_D V(D, G) = \min_G \max_D\mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))] \label{2}\tag{2}$$

or just

$$\min_G \max_D V(D, G) \label{3}\tag{3}$$

Then clarify that $V(D, G) = \mathbb{E}_{x ∼ P_{data}}[\log D(x)] + \mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]$.

This is clarified in this paper (equations 2.1 and 2.2., page 5).

So, in the GAN, we're optimizing $V$, so $V$ is the objective function; thus, in this case, the term "value function" is a synonym for "objective function". In this case, the optimization problem is a $\color{blue}{\textrm{min}}$$\color{red}{\textrm{max}}$ game, i.e. we $\color{red}{\textrm{maximize}}$ and $\color{blue}{\textrm{minimize}}$ at the same time two terms of the objective function, i.e. $\color{red}{\mathbb{E}_{x ∼ P_{data}}[\log D(x)]}$ and $\color{blue}{\mathbb{E}_{z ∼ p_z}[log (1 - D(G(z)))]}$ (this is explained in the GAN paper!). In practice, they optimize two slightly different objectives, but they are equivalent. See algorithm 1 in the GAN paper.

So, as I said in my other answer, the objective function is the function that you want to optimize (i.e. minimize or maximize), so it's usually a synonym for loss/cost/error function (in case you want to minimize it) and can be a synonym for value function (in case you want to maximize it, for example, in reinforcement learning), as it seems to be the case in the GAN (although, in the GAN, you maximize and minimize the value function).

",2444,,2444,,8/10/2021 17:28,8/10/2021 17:28,,,,2,,,,CC BY-SA 4.0 30057,1,,,8/7/2021 0:45,,0,64,"

The unsqueeze operation is used in several deep learning algorithms. However, I only found this operation in the code/implementation of the algorithms presented in the papers, which do not mention it.

The unsqueeze operation never modifies data in a tensor; it only changes the positions of the available data in the tensor. Wherever I have seen it used in code so far, it is used only before matrix multiplication. So I am unaware of other uses of it.

Is the unsqueeze operation only useful for handling compatibility issues, i.e., to make the data compatible with the underlying operation, and has no other significance, or is it used for any other purposes in deep learning?

",18758,,2444,,8/11/2021 12:13,8/11/2021 12:13,What are the (key) purposes of unsqueezing operation on tensors?,,1,0,,,,CC BY-SA 4.0 30059,1,,,8/7/2021 1:46,,0,38,"

I have read about the use of targeted adversarial attacks for making the model perform better. But can we change the bias of the neural network and control the outcome of the network, rather than changing the input? If yes, can you share some resources or research papers on targeted bias in neural networks?

",49103,,18758,,8/7/2021 5:20,1/3/2023 11:01,Can we change bias and control the output of neural network?,,1,0,,,,CC BY-SA 4.0 30061,1,30213,,8/7/2021 7:58,,1,51,"

Per google's glossary, an iteration refers to

A single update of a model's weights during training ...

The following code comes from a github repo

def fit(self, x, y, verbose=False, seed=None):
    indices = np.arange(len(x))
    for i in range(self.n_epoch):
        n_iter = 0
        np.random.seed(seed)
        np.random.shuffle(indices)
        for idx in indices:
            if(self.predict(x[idx])!=y[idx]):
                self.update_weights(x[idx], y[idx], verbose)
            else:
                n_iter += 1
        if(n_iter==len(x)):
            print('model gets 100% train accuracy after {} epoch(s)'.format(i))
            break

Note that this model doesn't update weights for each single example, because when the model makes a correct prediction for some example, it skips the example without updating weights.

In this kind of scenario, where the model makes a correct prediction for the $i$th input $x_i$ and jumps to the next example $x_{i+1}$ without updating weights for $x_i$, does it count as an iteration?

Assume there are 120 training examples; in one epoch, the model makes 20 correct predictions and updates weights for the other 100. Should I count this epoch as 100 iterations or 120 iterations?

Note: This question is NOT about coding. The code cited above works well. This question is about terminology. The code is just to illustrate the scenario in question.

",45689,,,,,8/17/2021 4:55,"Assume 120 examples, a model makes 20 correct predictions and updates weight for the other 100. Should I count this epoch 100 iterations or 120?",,2,0,,,,CC BY-SA 4.0 30063,2,,30057,8/7/2021 10:06,,1,,"

Yes, reshaping operations aren't very theoretically interesting, they just make the data compatible with the following operations.

For example, if you have a 1D array of pixels, and you want to do a 2D convolution, you can (not with unsqueeze specifically) reshape that array into 2D so the 2D convolution code knows where the rows and columns are. You could write 2D convolution code that works on a 1D array of pixels, or you could just make it 2D and then use the normal 2D convolution code.

Same with unsqueeze. Perhaps you want to feed a 2D array into a convolution function that expects the last dimension to be channels. You can add a last dimension of 1 and now that code can see there's 1 channel. Or you want to pass one data sample through a function that takes batches. You can add a first dimension of 1 meaning there's only one item in the batch. Adding or removing dimensions of size 1 is free, since it doesn't change the data, only the interpretation of the data.

If you wanted to convert a greyscale image to RGB (but still grey) you might use unsqueeze followed by repeat_interleave to duplicate that 1 channel into 3.
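
For instance (a minimal PyTorch sketch; the image size is arbitrary):

import torch

grey = torch.rand(200, 400)                     # a greyscale image, shape (H, W)

with_channel = grey.unsqueeze(0)                # shape (1, H, W): a channel dimension of size 1
rgb = with_channel.repeat_interleave(3, dim=0)  # shape (3, H, W): still grey, but with 3 channels

batch = rgb.unsqueeze(0)                        # shape (1, 3, H, W): a "batch" containing one image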

It may be worth noting that Tensorflow has a very generic "reshape" operation which lets you convert any shape of tensor into any other shape of tensor, as long as the total number of elements is the same.

",28406,,28406,,8/7/2021 11:20,8/7/2021 11:20,,,,0,,,,CC BY-SA 4.0 30064,1,,,8/7/2021 10:13,,0,200,"

The initial environment state is 0.25. Each time step, the agent performs a discrete action of 0 or 1. If the action is 1, then the new state will be state + 0.1. If the action is 0, the new state will be state - random() * 0.2. The reward is state - 0.5; however, if state > 0.98 (or state < 0), the agent dies (with no reward).
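
(For reference, here is a minimal sketch of the dynamics described above, just restating the rules in code; this is not the linked environment itself.)

import random

class ToyEnv:
    def reset(self):
        self.state = 0.25
        return self.state

    def step(self, action):
        if action == 1:
            self.state += 0.1
        else:
            self.state -= random.random() * 0.2
        done = self.state > 0.98 or self.state < 0
        reward = 0.0 if done else self.state - 0.5
        return self.state, reward, done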

First question: How do I teach the agent not to be too greedy? How to verify that the agent learned?

Main question: How to reduce the number of trials (i.e. the number of episodes) before the agent learns?

I would also appreciate any relevant references.

Here is the environment and here is what I tried.

It works, however:

  1. It took 1000 episodes of max 2000 timesteps, which is unacceptable for me (I wish to drastically reduce the number of episodes and timesteps).

  2. The behavior is far from optimal. Ideally, the agent should choose action 0 only if the state is larger than 0.88 (or something below that and within a small interval such as 0.01). [Edit] However, the learned threshold is 0.75, which forces the agent to choose 0 even if it could safely choose 1, e.g. following the trajectory 0.8 -> 0.76 -> 0.75 -> 0.74 before choosing 1 again.

",23360,,2444,,5/23/2022 10:03,5/23/2022 10:03,How to reduce the number of episodes before the agent learns in this game?,,1,2,,,,CC BY-SA 4.0 30066,1,30067,,8/7/2021 13:47,,4,183,"

In a comment to this question user nbro comments:

As a side note, "perceptrons" and "neural networks" may not be the same thing. People usually use the term perceptron to refer to a very simple neural network that has no hidden layer. Maybe you meant the term "multi-layer perceptron" (MLP).

As I understand it, a simple neural network with no hidden layer would simply be a linear model with a non-linearity put on top of it. That sounds exactly like a generalized linear model (GLM), with the non-linearity being the GLM's link function.

Is there a notable difference between (non-multi-layer) perceptrons and GLMs? Or is it simply another case of two equivalent methods having different names from different researchers?

",18652,,,,,8/7/2021 17:32,"What's the difference between a ""perceptron"" and a GLM?",,1,0,,,,CC BY-SA 4.0 30067,2,,30066,8/7/2021 15:18,,3,,"

The perceptron uses the Heaviside step (or sign) function as the activation function (so you are not free to use any activation function), while a GLM is a generalization of linear regression, where the link function can be, for example, the logit (which leads to logistic regression), the identity function (which leads to linear regression), and so on. The sign function plays the same role in the perceptron as the sigmoid does in logistic regression.

So, GLM models have a probabilistic interpretation (i.e. you assume that the response variable follows some distribution), while perceptrons do not, even though, for example, the perceptron and logistic regression can both be used for classification.

",2444,,2444,,8/7/2021 17:32,8/7/2021 17:32,,,,0,,,,CC BY-SA 4.0 30068,1,30071,,8/7/2021 19:13,,2,71,"

I am trying to understand a certain KL-divergence formula (which can be found on page 6 of the paper Evidential Deep Learning to Quantify Classification Uncertainty) and found a TensorFlow implementation for it. I understand most parts of the formula and put colored frames around them. Unfortunately, there is one term in the implementation (underlined red) that I can't tell how it fits in the formula.

Is this a mistake in the implementation? I don't understand how the red part is necessary.

",48641,,2444,,8/8/2021 21:25,8/8/2021 21:25,How is this statement from a TensorFlow implementation of a certain KL-divergence formula related to the corresponding formula?,,1,0,,,,CC BY-SA 4.0 30069,1,,,8/7/2021 22:26,,2,188,"

Per a review post, a simple logistic regression model gets about 97% test accuracy on the Iris data set, whereas a neural network gets just 94%. The neural network model used in Keras is

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(500, input_dim=4, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

The model is fit for 30 epochs using a batch size of 20.

Note that I did try fewer neurons and layers, but none of them got better performance.

Does this make sense? Can any other neural network get a higher test accuracy than a logistic regression model?

",45689,,45689,,8/11/2021 23:00,8/12/2021 8:34,Does it make sense for a logistic regression model to perform better than a neural network on the Iris data set?,,2,0,,,,CC BY-SA 4.0 30070,1,,,8/7/2021 23:14,,1,46,"

From this answer, stability is attributed to a learning algorithm

A stable learning algorithm is one for which the prediction does not change much when the training data is modified slightly.

At some other places, I read the phrase "stability of a neural network model". I am not sure whether the stability of a learning algorithm and the stability of a model are the same or not. If they are the same, then

A stable model is one for which the prediction does not change much when the training data is modified slightly.

Is it true? If not, is there anything called stability of a model and is different from the stability of a learning algorithm?

Suppose I am training a neural network model with a gradient descent algorithm. To which should I attribute stability or instability: the neural network model, the gradient descent training algorithm, or the combination of both?

",18758,,18758,,10/9/2021 8:31,10/9/2021 8:31,Is stability an attribute of model or training algorithm used or combination of both?,,0,3,,,,CC BY-SA 4.0 30071,2,,30068,8/7/2021 23:32,,1,,"

It probably remains from more general code that has been refactored. In any case, the code underlined in red is always zero: beta is a vector of ones, and $\log(\Gamma(1)) = \log(1) = 0$, i.e. tf.math.lgamma(beta) is a vector of zeros, so its sum is zero.

As you said, the other parts of the code are clear and completely match the definition.
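
A quick check (assuming TensorFlow 2.x) confirms that the term vanishes when beta is a vector of ones:

import tensorflow as tf

beta = tf.ones((1, 10))
print(tf.reduce_sum(tf.math.lgamma(beta)))  # 0.0, since log(Gamma(1)) = 0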

",4446,,,,,8/7/2021 23:32,,,,1,,,,CC BY-SA 4.0 30073,1,,,8/8/2021 6:40,,0,35,"

There are instances in the literature where we need to change the loss function in order to escape from gradient problems.

Let $L_f$ be a loss function for a model I need to train. Sometimes $L_f$ leads to problems due to the gradient, so I reformulate it as $L_g$ and can apply the optimization successfully. Most of the time, the new loss function is obtained by making small adjustments to $L_f$.


For example: Consider the following excerpt from the paper titled Evolutionary Generative Adversarial Networks

In the original GAN, training the generator was equal to minimizing the JSD between the data distribution and the generated distribution, which easily resulted in the vanishing gradient problem. To solve this issue, a nonsaturating heuristic objective (i.e., “$− \log D$ trick”) replaced the minimax objective function to penalize the generator


How can one understand these facts geometrically? Are there any simple examples, in either 2D or 3D, that show the two types of curves: one that gives no gradient issues and one that does, yet both achieving the same objective?

",18758,,18758,,8/11/2021 22:30,8/11/2021 22:30,Is there any geometrical interpretation on overcoming gradient related problems by adjusting/changing loss function?,,0,6,,,,CC BY-SA 4.0 30075,1,,,8/8/2021 9:32,,1,33,"

The divergence between two probability distributions is used in calculating the difference between the true distribution and generated distribution. These divergence metrics are used in loss functions.

Some divergence metrics that are generally used in literature are:

  1. Kullback-Leibler Divergence
  2. Jensen–Shannon divergence
  3. f-divergence
  4. Wasserstein distance

Some other divergence measures include:

  1. Squared Hellinger distance
  2. Jeffreys divergence
  3. Chernoff's $\alpha-$divergence
  4. Exponential divergence
  5. Kagan's divergence
  6. $(\alpha, \beta)-$product divergence
  7. Bregman divergence

I think some naive divergence measures include

  1. Least-squares divergence
  2. Absolute deviation

Along with these, are there any other divergence measures available to compute the distance between the true probability distribution and estimated probability distribution in artificial intelligence?

",18758,,18758,,8/8/2021 13:42,8/8/2021 13:42,Are there any other metrics available for calculating the distance between two probability distributions other than those mentioned?,,0,3,,,,CC BY-SA 4.0 30077,1,,,8/8/2021 14:10,,0,69,"

I have hardly ever seen anyone cover the entire input image with a filter of the same dimensions. I was wondering why that is the case, and whether the performance in, say, an image detection application would decrease if someone used a kernel size equal to the size of the input image itself.

",21513,,18758,,8/8/2021 23:30,8/8/2021 23:30,What is the significance behind having small kernel sizes over having one large kernel size that covers the entire input in a CNN?,,0,2,,,,CC BY-SA 4.0 30078,2,,30069,8/8/2021 14:28,,4,,"

The most likely explanation is that the neural network model proposed has a much higher model capacity than the logistic regression model.

In fact, the neural network used in the code has 246,343 trainable parameters; the full Iris data set has only 150 samples and only four features – so the model is much more complex than the training data. A neural network model with fewer neurons or layers will likely generalise much better. A logistic regression model is much simpler and has far fewer parameters, so it is in some sense forced to "learn" better.

The neural network may simply overfit to the training data as the capacity of the model is large enough; an analogous idea is trying to fit a line to data, then trying to fit a degree 100 polynomial. While the polynomial may fit the training data better, it is less likely to generalise well.

The hyperparameters used in the training of the network might also not be optimal; I haven't experimented, but neural network training can be a little sensitive to the choices made during training.
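
As a rough illustration (not a tuned model), a network more in proportion to 150 samples with 4 features might look like the following sketch; its parameter count is a few dozen instead of ~246,000:

import tensorflow as tf

small_model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, input_dim=4, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax')
])
small_model.compile(loss='categorical_crossentropy',
                    optimizer='adam',
                    metrics=['accuracy'])
print(small_model.count_params())  # 67 trainable parameters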

",44413,,,,,8/8/2021 14:28,,,,1,,,,CC BY-SA 4.0 30079,1,,,8/8/2021 15:11,,0,57,"

Inception score is used to evaluate the generative models. It is a score given based on quality and diversity of images generated.

I have a doubt about the range of the inception score, because an article mentions the possibility of the range $[0, \infty]$ and still talks about an upper bound in a practical setting:

The lowest score possible is zero. Mathematically the highest possible score is infinity, although in practice there will probably emerge a non-infinite ceiling. For a ceiling to the IS, imagine that our generators produce perfectly uniform marginal label distributions and a single label delta distribution for each image — then the score would be bounded by the number of labels.

Suppose I have 1000 classes/labels in my task, then is it possible to get an inception score of 2000? Or is it mandatory that the inception score must lie in $[1, 1000]$?

To be concise: is bounding the inception score to the particular range $[1, \text{number of classes}]$ optional or mandatory?

",18758,,40434,,8/8/2021 21:55,9/12/2021 20:03,Is the range of inception score flexible or bounded based on number of classes?,,1,0,,,,CC BY-SA 4.0 30080,1,,,8/8/2021 23:19,,1,54,"

As far as I am aware, an affine transformation can be expressed as either a dot product followed by an addition or a matrix multiplication followed by an addition:

$$a \cdot x+b$$ $$a^{T}x + b$$

where the first one is based on the dot product and the second one on matrix multiplication. It should be noted that $a, x$ are column vectors here and $b$ is a real number.

If the matrix $a$ is compatible (say of order $m \times n$) and $x, b$ are column vectors of order $m \times 1$, then the affine transformation is (generally) a matrix multiplication followed by a vector addition:

$$a^{T}x + b$$

I think it is correct.

I have doubt about the product operation between $a$ and $b$ if $a, b$ and $x$ are tensors of higher dimensions. In the case of tensors, should we need to perform Hadamard product or normal matrix multiplication? Or are they both equivalent like in the case of affine transformation on column vectors?

I got this doubt because I encountered an affine transformation that neither uses dot product nor matrix multiplication but uses Hadamard product.


Background: Recently I came across an affine transformation that applies Hadamard product on reshaped weight $a$ and input $x$ and then adds reshaped bias $b$ to it

Initially, the dimensions are as follows

a -> [r, c]
x -> [r, c, d1, d2]
b -> [r, c]

Later they reshaped weight and bias to the shape of the input

a -> [r, c, d1, d2]
x -> [r, c, d1, d2]
b -> [r, c, d1, d2]

and finally, they are performing a * x + b

here $*$ is the Hadamard product, an element-wise multiplication operation, and (I think) it is entirely different from normal matrix multiplication.

Is there any clue as to what they did? Is it possible to view the Hadamard product as normal matrix multiplication?

",18758,,18758,,10/21/2021 1:28,10/21/2021 1:28,Which product operation should be used in affine transformation?,,0,1,,,,CC BY-SA 4.0 30083,1,,,8/9/2021 2:55,,2,185,"

MDP stands for the Markov decision process. It is a 5-length tuple used in reinforcement learning.

$$MDP = (S, A, T, R, \pi)$$

$S$ stands for a set of states, also called state space.

$A$ stands for a set of actions, also called action space.

$T$ is a probability distribution function $$T: S\times A \times S\rightarrow [0,1]$$

$R$ is a reward function

$$R: S\times A \rightarrow \mathbb{R}$$

$\pi$ is a policy function

$$\pi: S\times A \rightarrow [0,1]$$

This question is restricted to continuous spaces i.e., state and action spaces are continuous. And also to stochastic policy function. And also consider only the basic MDP instead of its flavors.

In general, MDPs in reinforcement learning are applied mostly to games, and most games have certain start states as well as goal states.

Is there any reason for not specifying start and goal states in MDP like in a finite automaton?

Or does an MDP have implicit start and goal states (say, derived from the values of the reward function)?

Or is an MDP, by nature, defined irrespective of start and goal states? If yes, can I just imagine an MDP as a state-space search problem without a particular goal?

",18758,,2444,,8/10/2021 10:13,8/10/2021 10:13,Is there any inherent assumption of start and goal states in an MDP?,,1,0,,,,CC BY-SA 4.0 30084,2,,29985,8/9/2021 3:18,,1,,"

The terms "insufficient gradient" or "not strong enough gradient" usually means that the magnitude of the gradient vector is too small or nearly zero that they can't drive the optimization properly.

Not having a sufficient gradient is similar to having a very low learning rate: not only is convergence slow, but the optimization can also drift in a poor direction and get stuck in local minima.

",11030,,11030,,8/9/2021 14:07,8/9/2021 14:07,,,,0,,,,CC BY-SA 4.0 30085,1,,,8/9/2021 4:36,,1,15,"

In the original GAN paper (table 2), why is it mentioned that you can sample deep directed graphical models without a Markov chain (well, they say without difficulties, but others list MCMC as a difficulty).

I was wondering how this is done because I have only seen MCMC based approaches.

",49010,,2444,,12/21/2021 19:38,12/21/2021 19:38,"In the original GAN paper, why is it mentioned that you can sample deep directed graphical models without a Markov chain?",,0,0,,,,CC BY-SA 4.0 30086,2,,30019,8/9/2021 6:29,,0,,"

I think I came up with an intuitively good solution. During the selection phase, if in the current state it's the opponent's turn to play then the winrate in the UCT formula becomes 1.0 - winrate instead. This should make us invest more in good opponent moves rather than bad ones.

I am implementing MCTS for Quoridor at the moment; I'll update on whether it works better once I am done.

Edit: Yup, this seems to be the way. Now the tree is symmetrical and we can also generate moves for the opponent as well.
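
As a rough sketch of what I mean (the node structure and variable names are illustrative, not from a specific library):

import math

def uct_score(node, parent_visits, c=1.4, opponent_to_move=False):
    if node.visits == 0:
        return float('inf')
    winrate = node.wins / node.visits
    if opponent_to_move:
        # value the node from the opponent's point of view
        winrate = 1.0 - winrate
    return winrate + c * math.sqrt(math.log(parent_visits) / node.visits)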

",46020,,46020,,8/19/2021 15:06,8/19/2021 15:06,,,,0,,,,CC BY-SA 4.0 30087,2,,30083,8/9/2021 9:00,,1,,"

Is there any reason for not specifying start and goal states in MDP like in a finite automaton?

In general MDPs have a start state distribution. That may be a single state, but does not have to be. In non-episodic problems, you might want to consider a long term state distribution under any given policy, although it is quite common to use a simple start distribution and the assumption of ergodicity for long term distribution.

In general MDPs do not have goal states. Although using the agent's actions to achieve certain desirable end states, such as winning a game or completing a puzzle, is a very common design, there is no requirement for this. The more general requirement is to maximise some aggregate of the reward at each time step - usually either a discounted sum of rewards or the mean reward.

Or does MDP has an implicit start and goal states (say from the values of reward function)?

No, although if you are designing an MDP to model some environment, and it has goal states, you will typically take the goal states into account. Likewise you will usually select the start state distribution as part of the problem definition.

Or is the MDP, by nature, defined irrespective to start and goal states?

You will need to choose at least a distribution of start states to use an MDP practically.

There is no requirement to set a goal state. Whether or not you do that depends on the problem you are modelling.

If yes, can I just imagine MDP as a state-space search problem without a particular goal?

There may or may not be a goal state. You cannot frame RL as state-space search in general. The general solution to an RL control problem is one that maximises an aggregate (sum or mean) over the rewards. There is no requirement for that reward to be received from any single state.

You can usually consider RL control methods to be policy-space searches. The value-based methods such as Q-learning perform the policy search indirectly, whilst policy gradient methods such as REINFORCE model the policy function and optimise it.

The reverse situation, if you do have a state-space search problem, for example some form of combinatorial optimisation, then you can frame it as an RL problem. However, RL would normally be a very inefficient way to perform the search, because it will perform a policy search through trial and error to find the policy which builds the desired state from a starting state. Much better AI tools exist for graph searches and combinatorial optimisations than learning the whole series of actions required to convert an arbitrary start state to a goal state through trial and error.


Aside:

$R$ is a reward function $$R: S\times A \rightarrow \mathbb{R}$$

This is not general. This looks more like an expected reward function. You can derive the Bellman equations using an expected reward function, so using the expected reward function does not interfere with most RL theory. However, individual rewards may be based on the next state, and can be stochastic, so the reward function you list does not fully define an MDP - the difference would be important when considering qualities of the MDP such as variance which will impact agent learning efficiency for example.

",1847,,1847,,8/9/2021 9:06,8/9/2021 9:06,,,,0,,,,CC BY-SA 4.0 30089,1,,,8/9/2021 10:33,,1,33,"

Suppose that in version 1 of a reinforcement-learning system an optimal policy $A$ got generated for executing a task. But, in a newer version 2 of that application (with new code changes), there might be some policy $B$ that would do slightly (1-2%) better than policy $A$.

How do you allow the system to learn that "better" policy $B$? I think the answer is retraining.

But during the training process, the old policy $A$ might still keep accumulating rewards, delaying the recognition of policy $B$ as the "better" policy. This could get worse if each newer version of the system contained a policy that is only slightly better than the previous release's best policy. It would take a very long time to find the best policy.

Is this accepted in real-world RL systems? Or should I be figuring out a way to tell the system that "Hey, there might be a better policy somewhere, try to find that instead of rewarding existing policies."?

",48881,,2444,,8/10/2021 10:36,8/10/2021 10:36,How to allow RL systems to find better policies after code changes?,,0,1,,,,CC BY-SA 4.0 30090,1,,,8/9/2021 11:23,,3,403,"

I understand that seq2seq models are perfectly suitable when the input and/or the output have variable lengths. However, if we know the exact input/output sequence lengths, is this still the best approach?

",49133,,,,,1/1/2023 23:04,Is seq2seq the best model when input/output sequences have fixed length?,,2,0,,,,CC BY-SA 4.0 30091,2,,30090,8/9/2021 12:35,,0,,"

If you would classify a transformer as seq2seq, then it is arguably the best; "arguably", because this only refers to accuracy.

Shallow neural networks, or even decision trees and forests, may be better in production due to lower training time, lower inference time, and smaller size in memory.

Overall, it requires a bit of a compromise, and you need to do what suits your use case. For example, you wouldn't run GPT-3 on a Raspberry Pi Pico. So it depends on what you mean by "best".

",49134,,,,,8/9/2021 12:35,,,,0,,,,CC BY-SA 4.0 30092,1,30110,,8/9/2021 14:53,,0,64,"

I learned from this post about the so-called bit memory:

They froze its self-attention and feed-forward layers and, in separate copies, fine-tuned peripheral layers on each on a wide range of tasks: Bit memory (memorizing strings of bits), Bit XOR (performing logical operations on pairs of strings of bits), ListOps (parsing and performing mathematical operations), MNIST, CIFAR-10 (classification of images), CIFAR-10 LRA (classification of flattened, greyscale images), and remote homology detection (predicting what kind of protein structure an amino acid is part of).

I wonder what the "bit memory" task is? Is it an identity function as described in this post? Or the memory network?

",5351,,,,,8/10/2021 16:13,What is the bit memory task?,,1,0,,,,CC BY-SA 4.0 30093,1,30095,,8/9/2021 16:57,,1,340,"

I was looking at this implementation for creating an agent for playing Tetris using DeepRL.

This model uses "a state based on the statistics of the board after a potential action. All predictions would be compared but the action with the best state would be used".

So at each iteration, it feeds a set of future states, computed from the current state (each future state being made up of statistics of the game, like the number of holes in the board, cleared rows, total height, ...), to a neural network, which outputs one "value" per future state.

So at every step, you predict N "values" from the neural network for all N possible future states reachable from the one you are currently in, and you choose the greatest one as your future state and thus the associated action.

Now, my issue: the implementation says it's "deep Q-learning", but I do not see it that way. Neither the action nor some sort of current state is given as input to the network.

Since it is feeding the "future states", to me it looks more like a value iteration algorithm with a neural network, or at least something where you know the transition model.

Did I miss something and it is actually DQN? If not, do you have any references for this kind of RL model? Does this have a name?

",33532,,2444,,8/10/2021 10:11,8/10/2021 10:11,"In deep reinforcement learning, what is this model with state as input and value as output?",,1,0,,,,CC BY-SA 4.0 30094,1,,,8/9/2021 17:34,,0,165,"

As I understand from the literature, normally the last activation in an actor (policy) network in the TD3 and SAC algorithms is a tanh function, which is scaled by a certain limit.

My action vector is perfectly described as a vector where all values are between 0 and 1 and which should sum up to 1. This is a perfect fit for a softmax function. But these values are not probabilities of discrete actions: each value in the action vector should be the percentage of the whole portfolio to be invested in a certain stock.

But I cannot figure out whether it would be mathematically fine to use softmax as the final activation layer in TD3 or SAC.

",49140,,49140,,8/10/2021 7:54,8/10/2021 7:54,Is it possible to use Softmax as an activation function for actor (policy) network in TD3 or SAC Reinforcement learning algorithms?,,0,5,,,,CC BY-SA 4.0 30095,2,,30093,8/9/2021 17:45,,1,,"

This is a variant of RL value-based approach using afterstate values. These are similar to action values, but have the following properties:

  • Afterstates treat an action as "choosing a next state". This works well for deterministic environments, or in games where setting a board state is at least deterministic before any random factors might apply (such as an opponent's turn).

  • Afterstates can result in efficient functions (and efficient learning) when there are multiple paths to a given state and where the details of how that state was achieved do not matter much.

  • An afterstate value function is pretty similar to an action value function, provided the agent can predict all the state features that will result. This is not necessarily the same as needing a full model, as it does not need to account for changes after the state is selected due to the environment - or opponent - but before the next time step. However, it is often more than a simple action selection: for example, in Tetris you may have actions such as rotate, shift, drop, but the afterstate selection will be a model of what happens to the state for each allowed action selection. So some functions are needed to:

    • Convert allowed actions to afterstates before passing them to an afterstate selection algorithm.

    • Convert selected next states to actions before passing back to the environment.

Environments can also be written to present valid next states and only accept a valid next state as a selection from the agent. It is possible to code Tic Tac Toe agents to do this for example - the environment generates all possible next states, and the agent can select one. The environment may also check that the selected next state is a valid change from the current state. In that case you do not need any conversion between actions and afterstates, everything is handled by passing states between agent and environment. I would not expect a Tetris game and agent to be written so directly like this though.

Due to the similarity of afterstate values with action values, it is straightforward to adjust other value-based methods like Q-learning or SARSA to use afterstates.

",1847,,1847,,8/9/2021 17:57,8/9/2021 17:57,,,,0,,,,CC BY-SA 4.0 30096,2,,7695,8/9/2021 19:12,,0,,"

It might be easier to approach this as an object segmentation problem, which identifies multiple objects in a given image/frame. There are lots of examples if you do a search using "object segmentation" as a keyword.

",49143,,,,,8/9/2021 19:12,,,,1,,,,CC BY-SA 4.0 30097,2,,30090,8/9/2021 19:15,,-1,,"

I often see this type of question about finding the "best" model. Perhaps the best approach would be to use a sample, or even better a toy dataset similar to the kind of problem you are solving, and use an AutoML tool such as PyCaret or Darts to evaluate several/many different models to narrow the choices, then experiment further. It would at least be a more scientific approach IMO.

",49143,,,,,8/9/2021 19:15,,,,0,,,,CC BY-SA 4.0 30100,2,,30055,8/9/2021 20:17,,-3,,"

Almost any ML algorithm used for image applications can be adapted to other datasets, since images are converted to vector form before being fed to an ML model. However, I just finished my master's thesis, in which I did a literature review on GANs. The history of GANs and related models is given in the landmark Goodfellow paper, whose aim was to "represent probability distributions over the kinds of data encountered in artificial intelligence applications, such as natural images, audio waveforms containing speech, and symbols in natural language corpora".

Like any ML model, it quickly found applications in a variety of areas. Thus, the "history" of models in a particular application area will be somewhat convoluted and difficult to trace. However, there are research papers that give such information; it just takes time to research and find them.

For example, I am currently working on time series prediction for the stock market, and I found a good set of references on its history in "Deep Learning Networks for Stock Market Analysis and Prediction: Methodology, Data Representations, and Case Studies" (E. Chong, C. Han, and F. Park, Expert Systems with Applications, vol. 83, Elsevier Ltd, 2017, pp. 187–205).

",49143,,,,,8/9/2021 20:17,,,,0,,,,CC BY-SA 4.0 30102,1,,,8/9/2021 22:51,,0,76,"

There are two types of value functions in reinforcement learning: the state value function $V^{\pi}(s)$ and the state-action value function $Q^{\pi}(s, a)$.

State value function:

This value tells us how good it is to be in state $s$ if we are following policy $\pi$. Formally, it can be defined as the average return obtained from time step $t$ onwards, starting from state $s$, if we follow policy $\pi$.

$$V^{\pi}(s) = \mathbb{E}_{\pi}[R_{t}|s_t = s] = \mathbb{E}_{\pi} \left[ \sum \limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1} \mid s_t = s\right]$$

State-action value function:

This value tells us how good it is to perform action $a$ in state $s$ if we are following policy $\pi$. Formally, it can be defined as the average return obtained from time step $t$ onwards, starting from state $s$ and action $a$, if we follow policy $\pi$ afterwards.

$$Q^{\pi}(s, a) = \mathbb{E}_{\pi}[R_{t}|s_t = s, a_t = a] = \mathbb{E}_{\pi} \left[ \sum \limits_{k=0}^{\infty} \gamma^{k}r_{t+k+1} \mid s_t = s, a_t = a\right]$$

Now, the Q-learning and SARSA algorithms are generally used to update the $Q$ function under policy $\pi$ using the following recurrences, respectively:

$$Q(s_t,a_t) = Q(s_t,a_t) + \alpha[r_{t+1} + \gamma \max\limits_{a} Q(s_{t+1},a) - Q(s_t,a_t)] $$

$$Q(s_t,a_t) = Q(s_t,a_t) + \alpha[r_{t+1} + \gamma Q(s_{t+1},a_{t+1}) - Q(s_t,a_t)] $$

Now my doubt is about the recurrence relations in Temporal Difference (TD) algorithms that update the state value function. Are they the same as the recurrences provided above?

$$V(s_t) = V(s_t) + \alpha[r_{t+1} + \gamma \max V(s_{t+1}) - V(s_t)] $$

$$V(s_t) = V(s_t) + \alpha[r_{t+1} + \gamma V(s_{t+1}) - V(s_t)] $$

If yes, what are the names of the algorithms that use these recurrences?

",18758,,18758,,8/11/2021 12:27,8/11/2021 12:27,What are the recurrences used for updating state value function in $TD$ and $TD(\lambda)$ learning?,,0,7,,,,CC BY-SA 4.0 30103,1,,,8/9/2021 22:54,,0,24,"

I'm attempting to build a neural network to play the card game, Lost Cities.

A brief overview of the game:

  • The game involves two players taking turns to play cards on expeditions.
  • Expeditions incur a debt when you play the first card. Subsequent cards will buy out of that debt and return a profit.
  • Each player must either play or discard a card (with restrictions), and then draw a new card from the deck or the discard pile.
  • The game ends when the last card is drawn.

The expanded rules can be found here.

I'm attempting to train a sequential model to play this game, with the state of the board/hand/discard pile as a set of inputs and three heuristic values for each card (value of playing, value of discarding, value of picking up from the discard pile) as the output values. However, it's unclear to me how I should approach this from a training perspective.

My biggest hurdle is that the network's success can only be evaluated by whether or not it can "beat" itself or a competing network in a game. A player's raw score during the game is not viable due to the nature of play. To me, this means that the network will have to be used with several hundred different sets of input data (one for each turn/state of the game board during a match) before any meaningful results are generated.

So far, the only solution I've had minor success with is a generational algorithm that creates "fuzzy" children to compete against each other for the most "wins". This was done with Python's standard library and was obscenely slow, even with a reduced sequential network.

My question:

Is there an established method to deliver delayed feedback to a network (after several uses of the network)?

I'm very new at this, so any and all feedback is more than welcomed.

",49144,,40434,,8/10/2021 17:11,8/10/2021 17:11,Training a sequential model that can only evaluate after several hundred cycles,,0,2,,,,CC BY-SA 4.0 30105,1,,,8/10/2021 1:32,,0,45,"

Consider the following code in PyTorch

>>>torch.tensor([8]).shape
torch.Size([1])
>>>torch.tensor([[8]]).shape
torch.Size([1, 1])
>>>torch.tensor([[[8]]]).shape
torch.Size([1, 1, 1])   

We can notice that we want to store only a single element, $8$, in a tensor. But it is possible to store $8$ in an $n$-dimensional tensor for any $n \in \mathbb{N}$. Strictly speaking, $\mathbb{N}$ may be replaced by $\mathbb{W}$ (the whole numbers, including zero).

But I am facing difficulty in understanding how a single element can contribute to all these dimensions. If the element were present in all dimensions, then I would assume it has to be stored multiple times, which is not the case. I can't understand how a single element contributes to any number of dimensions without repeating itself.

How to understand this phenomenon? How should I interpret or visualize this fact intuitively?

",18758,,18758,,1/18/2022 23:28,1/18/2022 23:30,How to visually or intuitively understand single element multi-dimensional tensors?,,2,0,,,,CC BY-SA 4.0 30106,1,,,8/10/2021 8:41,,1,112,"

I am working on building a model to classify the type of touch the user makes(Long Press, Left Swipe, Right swipe and so on). I have data with features that characterise the user's touch, like duration, velocity in x-direction, velocity in y-direction etc. One feature that's also present is the trajectory of the touch.

The problem is that for touches like taps or long presses, the length of the trajectory array is 2 or 3 points, but for swipes it reaches 40-100 points. What I thought could work is either padding or CNNs. But the problem with padding is that, since I am using trajectories, padding with 0s might affect the learning, because a 0 still has meaning as a point in a trajectory. And the problem I see with CNNs is that I don't know if such an architecture would work for all features (touch duration, xVelocity, etc.), as they are not spatially related. I may be wrong about this, feel free to correct it. I also thought of using RNNs, but I didn't, as they are mainly used for NLP tasks and the features are not related to each other sequentially.

What are the different ways I can handle this kind of variable-sized input feature for neural networks?

",37797,,37797,,8/10/2021 8:55,8/10/2021 8:55,How to pass variable length data as feature to a neural network?,,0,2,,,,CC BY-SA 4.0 30107,1,,,8/10/2021 8:48,,1,22,"

It is claimed that the main goal of graph embedding methods is to pack every node's properties into a vector with a smaller dimension, so node similarity in the original complex irregular spaces can be easily quantified in the embedded vector spaces using standard metrics.

However, I can find no formal explanation as to why nodes with similar properties should be embedded so that their separation in embedded space respects their similarity. Is there such a proof, or is it a convenient consequence of embedding?

",26382,,2444,,8/12/2021 10:04,8/12/2021 10:04,Is graph embedding linear in its maintaining of graph geometry?,,0,0,,,,CC BY-SA 4.0 30110,2,,30092,8/10/2021 16:13,,2,,"

Read the paper. It tells you. (page 3)

Bit memory. Similar to the task proposed by Miconi et al. (2018), we consider a bit memory task where the model is shown 5 bitstrings each of length 1000. Afterwards, the model is shown a masked version of one of the bitstrings, where each bit is masked with probability 0.5, and the model is tasked with producing the original bitstring. The bitstrings are broken up into sequences of length 50, so that the models are fed 120 tokens of dimension 50.

It wasn't immediately clear to me whether they train the model on the bitstrings (in which case 5 is quite low and a Transformer model is not required) or just present them to it when running the model. Figure 4 (page 8) makes it clear that it's the latter, as the original bitstrings and the query both appear on the same attention plot.

i.e. the input is something like:

00011110011100100001000101110011...
01000101001101100011001110010110...
11000101111111001010011010111010...
01011101001100100011111000000100...
11110100011010011000101100010101...
query
*10**10*1**1110*****0110**11**1*...
",28406,,,,,8/10/2021 16:13,,,,0,,,,CC BY-SA 4.0 30111,1,,,8/10/2021 16:32,,0,70,"

I'm confused about the interpretation and assumptions of the Dice coefficient versus the more popular measure mutual information. I'm specifically referencing its use in hierarchical semantic network analysis, or ranking the significance of collocation of words.

I'm referencing Translating Collocations for Bilingual Lexicons: A Statistical Approach, which talks about how the Dice coefficient is more appropriate when you don't want 0-0 matches to be significant. However, as an amateur in probability, it's not really clear to me from the respective formulas why this would be.

Could someone explain?

",49158,,2444,,8/11/2021 12:25,8/11/2021 12:25,Why would the Dice coefficient be more suitable than mutual information when you don't want 0-0 matches to be significant?,,1,0,,,,CC BY-SA 4.0 30112,1,,,8/10/2021 16:36,,0,330,"

So I've been making a mini version of VGGNet, trying to tweak the hyperparameters to match the CIFAR-100 dataset.

It was running slowly at first, but I was able to get decent accuracy after 60 epochs or so. However, when I added BatchNormalization layers to my two fully-connected hidden layers, it immediately started learning at around 20% accuracy, but began overfitting my data so badly that after 7 epochs my validation accuracy didn't improve from 0.01, compared to the 20%+ accuracy it reached during training.

Why would adding these layers, which are supposed to act as a regularizer, actually cause severe overfitting instead? I'm confused.

",49159,,,,,8/10/2021 16:36,Why is BatchNormalization causing severe overfitting to my data?,,0,2,,,,CC BY-SA 4.0 30114,2,,30053,8/10/2021 19:26,,0,,"

As I understand it, you have a problem with a large action space - a vector of 10 integer variables. You also have a constraint on what valid actions should look like.

Even with the action vector discretised to integer amounts, there are millions of possible actions. This is beyond anything you can reasonably solve with value-based methods such as Q-learning. The problem is deriving the policy from the action value estimates. To select a greedy action, you need to find the action which maximises $\hat{q}(s,a, \theta)$, which in your case would mean either an insanely large output vector (covering all possible action combinations) or very large input batches to maximise over.

So, DQN is not really available to you as a method, before considering the constraint. What can you use instead? Any policy-gradient method or actor-critic method should work. These are more fiddly to understand and implement than value-based methods, although there are plenty of references for PyTorch, for example the PFRL library implements A3C, PPO, DDPG which would all be suitable as a start.

What a policy gradient method does for your problem is allow you to define a policy function $\pi(s, \theta)$ which will either output a single action or the parameters for an action probability distribution that you can sample from. The latter is actually more common, to create a neural network that outputs the parameters of a probability distribution. This allows for exploration in an on-policy approach. DDPG (Deep Deterministic Policy Gradients) is an example of a method that uses a deterministic policy function, but there is still an action sampling stage because DDPG adds a noise function to the policy in order to explore.

In your case, you could build a policy network that outputs a vector of 10 real values to represent the means of the distribution, plus either 1 or 10 standard deviations if you are not using something like DDPG. Then the action choice could be sampled from the distribution that this defines.

This would not solve your other problem - a constraint on the sum of elements of the vector. You also have an implied constraint of a minimum value for each element.

For the constraint, I suggest you do not attempt to model it directly in the policy function, but instead define a fixed (no learnable parameters) mapping function from something that is easier to model in the neural network, to the constrained version. For instance, whatever action vector is output by sampling the neural network action, you could clip to minimum zero, then sum elements, divide by this sum and multiply by 15. Putting this function outside of the agent for training purposes - either part of the environment, or a "helper" - should make the maths and using the framework easier.

Optimising the raw, unconstrained policy function does mean you will have multiple equivalent policies once transformed, so makes it a less efficient search. However, this is offset by not needing to figure out valid probability distributions to sample from within the constrained space, or the gradients associated with the constraints.

Success of this approach will depend on a few details:

  • Choice of distribution function for selecting actions

  • Choice of mapping function to apply constraints

  • Feasibility of exploring the state and action space sufficiently to find near optimal behaviour

You have some influence over the first two issues - if you have some sense of what "good" actions will be in the problem then you can try to ensure that the representation covers them well. For example if you expect that the vector should have multiple zeroes, then you could ensure the sampled values can go below zero easily and that you use a clipping function so that you are likely to get a few zeroes.

The last issue is beyond your control. It is possible the problem is too hard to explore using RL. This may be the case when only very specific action values out of the many possible will give you good results. RL relies on getting some kind of reward signal to guide improvements. If there is a very large search space and only sparse rewards in specific circumstances, then the trial and error process may never find the optimal behaviours.

",1847,,1847,,8/11/2021 5:59,8/11/2021 5:59,,,,3,,,,CC BY-SA 4.0 30116,2,,30045,8/10/2021 22:51,,1,,"

There are a lot of ways to describe "Artificial Intelligence".

This form of automation/computing/AI goes back to neolithic times.

Early AI was purely heuristic. (Also known as "good old fashioned AI" aka "Symbolic Intelligence" aka classical expert systems.)

The current generation of strong (narrow) AI is statistical, which encompasses both neural networks and evolutionary/genetic algorithms.

Artificial intelligence is a machine that makes a decision. Modern statistical methods allow these machines to learn and improve their decisions.

Current best AI is "narrowly superintelligent" in that it can exceed humans at most definable tasks, but machines still lack the intuitivity of biological brains, and this strong intelligence is narrow—restricted to single problems or classes of problems.

",1671,,,,,8/10/2021 22:51,,,,1,,,,CC BY-SA 4.0 30118,2,,29916,8/11/2021 2:12,,1,,"

I/ Does it has any sense to apply backpropagation in these settings?

If I understand correctly, this question should be "Can backpropagation return the gradient for every node of these networks?".

It depends on whether your network $N$ is differentiable with respect to all inputs $i_t$; if so, backpropagation is guaranteed to produce gradient values at every node of the network.

II/ If yes, what happens to the gradients?

We deal with vanishing or exploding gradients only after we make sure that the gradient exists. The reasons for this phenomenon in a neural network can vary; some typical causes are:

  1. Very deep neural networks (a large number of layers):

    Assume a neural network with $n$ layers; backpropagation follows the chain rule, which can be written as:

$$\frac{\partial loss}{\partial w_t} = \frac{\partial loss}{\partial l_{n}} \times \frac{\partial l_{n}}{\partial l_{n-1}}\times...\times \frac{\partial l_{t+1}}{\partial l_{t}}\times \frac{\partial l_t}{\partial w_t}$$

It's easy to see that the gradient is scaled by an additional factor for each layer, so if each factor is larger than 1, the gradient will tend to $\infty$ as $n \rightarrow \infty$ (exploding), and to $0$ in the opposite case (vanishing); see the small numeric sketch at the end of this answer.

  2. Activation function:

    A plot of the different activation functions makes the difference clear: logistic functions such as sigmoid or tanh are limited to the range $[0,1]$ (sigmoid) or $[-1,1]$ (tanh). Therefore, if the input to a node of the neural network is much larger than 2 or much smaller than -2, the gradient becomes nearly zero (vanishing) because the output barely changes.

Solution:

  • Replace it with a simpler function such as ReLU: $o_{t+1} = \max(0, N(i_t))$. However, the output of ReLU will be $0$ whenever its input is below $0$, so there are many variants of ReLU that address this problem; you can find them easily with the keyword "ReLU family" on Google (example).

There are other methods or strategies to deal with vanishing/exploding gradients; it also depends on your model and your data. The answer can be more detailed if you give more information.
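
To make the first cause concrete, here is a tiny numeric sketch: the sigmoid derivative is at most 0.25, so a product of such factors shrinks exponentially with depth:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)          # maximum value is 0.25, at z = 0

n_layers = 50
product = np.prod([sigmoid_grad(0.0) for _ in range(n_layers)])
print(product)                    # 0.25 ** 50, effectively zero (vanishing)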

",41287,,38846,,9/10/2021 22:34,9/10/2021 22:34,,,,0,,,,CC BY-SA 4.0 30120,2,,30045,8/11/2021 5:36,,0,,"

In this context, I would focus on the what and not the how.

  • What part of the business problem will it solve?
  • How does that fit into the bigger solution (AI model is probably making a prediction - is that it? Is there an application or report built around it?)
  • How do you expect it to perform compared to alternative solutions?
  • What do you need in terms of resources: data, computation, time?

As far as how it works - I would just describe it as a “probabilistic model” and leave it at that. If they want to go deeper, they’ll ask. You may not even know the exact model/algorithm/approach yet, as often experimentation and iteration are necessary.

",13360,,,,,8/11/2021 5:36,,,,0,,,,CC BY-SA 4.0 30121,2,,26794,8/11/2021 6:00,,1,,"

Generally speaking, the power of BERT for applications like NER is that the authors (of whichever implementation you use) performed a large-scale pretraining effort to create the embeddings. You can then "fine-tune" those for your specific task using far less computation, but the rub is that you need to use the same tokenization scheme (i.e. the BERT tokenizer) in order to have your input "fit" the existing embeddings. Intuitively, tokenization maps a word in your text to an index number. If the embedding was trained thinking that word number 42 is "cat", then things won't work well if you tokenize differently and provide a 43 instead when "cat" pops up in your text.

Unless you’re training on a sparse language that hasn’t been well-represented by one of the public embeddings, the above is almost certainly your wisest approach. If, however you really want to train the BERT architecture on new embeddings, then you can technically use any embedding scheme you like.

The BERT tokenizer uses subwords along with a few specific administrative tokens. If you were going to explore further, a byte pair encoder might be useful, especially if the language starts to veer away from, e.g., English.

",13360,,,,,8/11/2021 6:00,,,,0,,,,CC BY-SA 4.0 30122,1,30127,,8/11/2021 6:23,,1,236,"

I have an input tensor of shape $\mathbf{(3, 32, 32)}$ consisting of 3 channels, 32 rows, and 32 columns. I want to convolve the input tensor with a $\mathbf{(3 \times 3)}$ kernel/filter. How can I calculate the required FLOPs?

",49172,,2444,,8/11/2021 21:28,8/11/2021 21:28,"Given an input of shape $(3, 32, 32)$, which is convolved with a $(3 \times 3)$ kernel, how do I calculate the FLOPS?",,1,7,,,,CC BY-SA 4.0 30123,2,,30049,8/11/2021 6:37,,1,,"
  1. You’ll want a reasonable GPU (probably 8GB+), but otherwise no special hardware needed.* You may need to tune down sequence length and batch size to fit your GPU; RAM will be the limiting factor. Don’t try it on a CPU. It will “work” but you’re gonna have a bad time.
  2. Try the Huggingface Transformers library as your implementation. It’s well documented and straightforward and includes both models.

*assuming an Nvidia GPU or something compatible with CUDA. Things are rather hairier on Apple hardware. But you can always grab a cloud VM for a few hours.

",13360,,,,,8/11/2021 6:37,,,,0,,,,CC BY-SA 4.0 30124,2,,30111,8/11/2021 8:48,,1,,"

Their reasoning is that mutual information is symmetric, giving equal value to 1s and 0s, as it is derived from information theory, where they are just two symbols used to encode a message, with neither being more important than the other. A message encodes a lot of information if the two symbols are roughly equal in probability.

The Dice coefficient, however, centres on two events occurring at the same time, and so handles 1-1 differently from 0-0, as 1 stands for the occurrence of a (comparatively rare) event, whereas 0 represents the (much more common) absence of the event.

In the formulae, the Dice coefficient adds up the individual probabilities in the denominator, whereas in mutual information they are multiplied. If you add two small numbers, you get a number that is slightly larger than the two individual ones, but if you multiply them, you get one that is much smaller. Mutual information has a well-known problem in that it emphasises extremely rare events, which is why it is not used as much any more as it was in the early 1990s.

Thus the Dice coefficient looks for the mutual occurrences but is less concerned with how often each item occurs on its own (addition vs multiplication of individual probabilities).
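
A small numeric sketch makes the contrast visible, using pointwise mutual information as a stand-in for the information-theoretic measure: given a joint probability p(x, y) and marginals p(x), p(y),

import math

def dice(p_xy, p_x, p_y):
    return 2 * p_xy / (p_x + p_y)

def pmi(p_xy, p_x, p_y):
    return math.log2(p_xy / (p_x * p_y))

# two rare words that always co-occur
print(dice(0.001, 0.001, 0.001), pmi(0.001, 0.001, 0.001))  # 1.0, ~9.97 bits
# two common words that co-occur half the time
print(dice(0.05, 0.1, 0.1), pmi(0.05, 0.1, 0.1))            # 0.5, ~2.32 bits
# PMI rewards the rare pair far more strongly; Dice stays bounded in [0, 1]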

",2193,,,,,8/11/2021 8:48,,,,0,,,,CC BY-SA 4.0 30125,2,,30059,8/11/2021 8:55,,0,,"

No, the bias is part of the network, not one of the inputs, so an adversary has no ability to manipulate it.

Unless the adversary can change the network, in which case they don't need to use any tricks, because they can just change it to one that outputs what they want!

",28406,,,,,8/11/2021 8:55,,,,0,,,,CC BY-SA 4.0 30127,2,,30122,8/11/2021 9:49,,1,,"

Each output pixel channel is a 3x3x3 filter, so 27 inputs which get multiplied by 27 weights and then added together. This is 27 FMA (fused-multiply-add) operations, or 27 multiply operations and 26 additions. I believe all modern devices implement FMA.

The number of output pixel channels is 30x30x3 = 2700 (as a 3x3 kernel shaves off one pixel on each edge) and each one takes 27 operations to calculate. So that's 72900 operations in total.
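
The same arithmetic as a small sketch (assuming 3 output channels, stride 1 and no padding, as above):

in_channels, out_channels = 3, 3
k, h, w = 3, 32, 32

out_h, out_w = h - k + 1, w - k + 1        # 30 x 30 without padding
fmas_per_output = k * k * in_channels      # 27 fused multiply-adds
outputs = out_h * out_w * out_channels     # 2700 output values
print(outputs * fmas_per_output)           # 72900 FMAs in total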

",28406,,,,,8/11/2021 9:49,,,,0,,,,CC BY-SA 4.0 30128,1,30132,,8/11/2021 9:52,,1,83,"

I am looking for a textbook that is a nice entry-level introduction to Bayesian inference. I was hoping for a nice blend of theory and applications (data sets) showing how the concepts are applied. Presentations of programming techniques are welcome.

Just for perspective, I feel that Christopher Bishop's PRML is a theoretical treatment. It is very good theoretically, but I find myself not understanding how to apply it given a data set.

I have tried jumping from one book to another and this has just confused me. Is there any authoritative book with these requirements?

",23723,,2444,,5/10/2022 8:38,5/10/2022 8:38,Is there an entry level textbook on Bayesian Inference that is a nice blend of theory and applications?,,1,2,,,,CC BY-SA 4.0 30129,2,,30105,8/11/2021 13:13,,1,,"

The number is not repeated if it is the only element of some high dimensional space.

An $n$-dimensional (vector) space is a set of objects (known as vectors, although this term is more commonly used to refer to the objects in $1$-dimensional spaces), which are ordered/organized/shaped in specific ways, but composed of scalars/numbers, and to which you can also apply certain operations.

It may be a good idea to think that the shape/organization of each object in an $n$-d space is determined by some "container", which has a certain shape, and that the container of the objects of higher-dimensional spaces generalizes (or it can include) the containers of lower-dimensional spaces.

  • In the case of a $0$-dimensional space (e.g. $\mathbb{R}$), there's no container because you don't need it, as every object is composed of only one number (this is always the case!).

  • In the case of $1$-dimensional spaces, an object could be composed of more than one number (but this does not have to be always the case: it depends on the specific $1$-d space, so there are many $1$-d spaces), so you need a way to organized these numbers. You organize them in a sequence that follows some direction. So, the objects $[0] \in \mathbb{R}^1$, $[1, 2] \in \mathbb{R}^2$, $[2, 3, 1] \in \mathbb{R}^3$, $[0, 4, 2, 2] \in \mathbb{R}^4$, etc., are all $1$-d objects because each of them is composed of one or more numbers, which are organized in sequence. $\mathbb{R}^1$, $\mathbb{R}^2$, $\mathbb{R}^3$, etc., are all $1$-d spaces, because they organize each of their objects in a sequence: their only difference is the number of numbers/scalars in each object.

  • In the case of $2$-d spaces, you organize the objects not in a line, but in a rectangle. The rectangles can have different shapes, so they are not necessarily just squares. So, $[0] \in \mathbb{R}^{1 \times 1}$, $ \begin{bmatrix} 1 & 1\end{bmatrix} \in \mathbb{R}^{1 \times 2}$, $ \begin{bmatrix} 1 & 1 \\ 0 & 2\end{bmatrix} \in \mathbb{R}^{2 \times 2}$.

  • In the case of $3$-d spaces, you organize each object into cuboids. This does not mean that these cuboids contain more than one number. In general, they could, but you also have cuboids that contain only one number, i.e. $[0] \in \mathbb{R}^{1 \times 1 \times 1}$, which, in PyTorch, would be printed as [[[0]]] to give you the idea that the number 0 is inside a container [0], which is inside another container [ ], which can contain only one small container that can contain only one number (e.g. [0], but you could also have had [1] or [10]), which is inside another container [ ], which can contain only one container, which can contain only another container, which, in turn, can contain only one number.

To give you an analogy, think of having many boxes of different sizes and also having balls that you can put inside the smallest boxes. These boxes are the "containers" and the balls are the numbers. You can put smaller boxes inside the bigger ones (such that the boxes cannot slide around) and you can put the balls inside only the smallest boxes (so that they do not move).
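
In PyTorch terms, the boxes correspond to extra dimensions of size 1, which can be added or removed without duplicating the number:

import torch

x = torch.tensor(8)        # shape torch.Size([]): just the ball, no box
x1 = x.unsqueeze(0)        # shape torch.Size([1]): one box around it
x2 = x1.unsqueeze(0)       # shape torch.Size([1, 1]): a box inside a box
print(x2.numel())          # 1 -- still a single element, never repeated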

",2444,,2444,,8/11/2021 13:34,8/11/2021 13:34,,,,0,,,,CC BY-SA 4.0 30130,1,30131,,8/11/2021 16:39,,2,158,"

In Sutton and Barto's book (Chapter 6: TD learning, 2nd edition), he mentions two ways of updating value function:

  1. Monte Carlo method: $V(S_t) \leftarrow V(S_t) + \alpha[G_t - V(S_t)]$.
  2. TD(0) method: $V(S_t) \leftarrow V(S_t) + \alpha[R_{t+1} + \gamma V(S_{t+1}) - V(S_t)]$.

I understand that $\alpha$ acts like a learning rate where it take some proportion of MC/TD error and update value function.

From my understanding, in stationary environments, transition probability distribution and reward distribution don't vary with time. Hence, one should supposedly use $\alpha-$decay to update value functions. On the other hand, since distributions change with time in non-stationary environments, $\alpha$ should be kept constant so as to keep updating the value function with recent TD/MC errors (in other words, history doesn't matter).

What's been bothering me is that in Examples 6.2, 6.5, and 6.7, the probability and reward distributions don't change. So why is a constant $\alpha$ being used?

Question: How does $\alpha$ vary in stationary and non-stationary environments?

",46214,,2444,,8/11/2021 20:41,8/11/2021 20:41,How does the learning rate $\alpha$ vary in stationary and non-stationary environments?,,1,0,,,,CC BY-SA 4.0 30131,2,,30130,8/11/2021 17:04,,2,,"

So why is constant-$\alpha$ being used?

This is because control scenarios are inherently non-stationary with respect to value functions. Decaying alpha comes with a risk that improvements to the policy will occur progressively more slowly, because the impact of changing the policy will be learned slowly.

From my understanding, in stationary environments, transition probability distribution and reward distribution don't vary with time.

This is correct when considering immediate reward and transitions from any given $(s,a)$ pair. However, you are forgetting that the policy function $\pi(a|s)$ does vary with time whilst the agent is discovering the optimal policy, and this affects the trajectories and expected values that are being estimated.

Question: How does $\alpha$ vary in stationary and non-stationary environments?

For non-stationary environments, you will want to maintain some minimum learning rate $\alpha$ and also some mininum exploration ($\epsilon$ if you are using $\epsilon$-greedy exploration).

In stationary environments, then a learning rate schedule is still valid and may be useful. You will see it discussed in passing when referencing proofs of convergence for the basic algorithms.

For example in the second edition of Reinforcement Learning: An Introduction it says this regarding convergence of Q learning:

Under this assumption and a variant of the usual stochastic approximation conditions on the sequence of step-size parameters, Q has been shown to converge with probability 1 to $q_*$.

The "usual stochastic approximation conditions on the sequence of step-size parameters" part is a reference to decaying the learning rate.

However, due to the added complexity of handling exploration vs exploitation, the inherently non-stationary nature of value predictions in control scenarios, and the difficulty of getting function approximation to work, discussions of learning rate decay is a minor detail in texts like Sutton & Barto.
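
For completeness, a typical step-size schedule that satisfies those stochastic approximation conditions is $\alpha_t = 1/t$ (so that $\sum_t \alpha_t = \infty$ while $\sum_t \alpha_t^2 < \infty$); a minimal sketch, assuming a tabular TD(0) setting:

def alpha_schedule(t):
    # satisfies sum(alpha) = inf and sum(alpha^2) < inf
    return 1.0 / t

def td0_update(V, s, r, s_next, t, gamma=0.99):
    # tabular TD(0) update with a decaying step size
    alpha = alpha_schedule(t)
    V[s] += alpha * (r + gamma * V[s_next] - V[s])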

",1847,,1847,,8/11/2021 17:16,8/11/2021 17:16,,,,3,,,,CC BY-SA 4.0 30132,2,,30128,8/11/2021 21:03,,1,,"

Using my own Google research as a reference, I found the best posts about introductory Bayesian statistics books and summarized the answers. I found this post on stats.stackexchange about Bayesian statistics books; maybe it is the best recommendation for you. I read the post weeks ago and some of the books are stunning.

This is my top 3 from the answers on stats.stackexchange; these books appear repeatedly in the answers below (all of them include applications):

  1. Doing Bayesian Data Analysis: A Tutorial with R and BUGS
  2. Bayesian Data Analysis, 3rd edition, by Andrew Gelman et al.
  3. Statistical Rethinking: A Bayesian Course with Examples in R and Stan, 2nd edition, by Richard McElreath

I started reading Bayesian Data Analysis (3rd edition, Gelman et al.) a few days ago; I am in the first chapter, about Bayesian inference. You can check its contents:

Part I: Fundamentals of Bayesian Inference 1
1 Probability and inference 3 
1.1 The three steps of Bayesian data analysis 3
1.2 General notation for statistical inference 4 
1.3 Bayesian inference 6
1.4 Discrete probability examples: genetics and spell checking 8
1.5 Probability as a measure of uncertainty 11
1.6 Example of probability assignment: football point spreads 13
1.7 Example: estimating the accuracy of record linkage 16
1.8 Some useful results from probability theory 19
1.9 Computation and software 22
1.10 Bayesian inference in applied statistics 24
1.11 Bibliographic note 25
1.12 Exercises 27

The book also has Chapter 10, Introduction to Bayesian computation.

Some people prefer one book over another for understanding Bayesian statistics, so you have to choose which one fits you. For example, Statistical Rethinking is different from the other books; it explains Bayesian statistics in another way that some people don't connect with, but it is still among the top 3 from the post.

Just take a look at the difference in chapters between Gelman's book and Statistical Rethinking.

",30751,,30751,,8/12/2021 3:34,8/12/2021 3:34,,,,0,,,,CC BY-SA 4.0 30133,1,,,8/11/2021 23:55,,0,67,"

In a Generative Adversarial Network (GAN), there are two multi-layer perceptrons. One is the generator network and another is a discriminator network.

The input for the generator network is a noise vector $z$. The input for a discriminator network is either a generated sample $G(z)$ i.e., the output of a generator network or a training sample $x$ for a training dataset.

My doubt is regarding the input of the generator. The noise vector is generally sampled from the standard normal distribution.

$$z \sim \mathcal{N(0, 1)}$$

Although I am not sure, I think: since the values sampled from the normal distribution vary, the output of the generator can vary accordingly.

But some of the research papers say that the noise vector can also be sampled from a uniform distribution i.e., $z \sim \mathcal{U(a, b)}$ for $a<b$.

$$ U(x) = \begin{cases} \dfrac{1}{b-a} & x\in [a, b] \\ 0 & x\not\in [a, b] \\ \end{cases} $$

It is clear that the uniform density does not vary like the normal density and takes only two possible values, hence all samples have equal probability in the given range. Then how can it contribute to the diversity of the output of the generator network?
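
For concreteness, this is the kind of sampling I am referring to (a minimal NumPy sketch; the sizes are only illustrative):

import numpy as np

batch_size, latent_dim = 4, 8  # illustrative sizes only

# Noise vectors drawn from a standard normal distribution.
z_normal = np.random.normal(loc=0.0, scale=1.0, size=(batch_size, latent_dim))

# Noise vectors drawn from a uniform distribution on [a, b] = [-1, 1].
z_uniform = np.random.uniform(low=-1.0, high=1.0, size=(batch_size, latent_dim))

# Both arrays contain different values on every draw, so each call produces a
# different latent vector to feed into the generator.
print(z_normal)
print(z_uniform)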

",18758,,18758,,5/17/2022 10:26,5/18/2022 22:35,How does noise samples from uniform distribution contribute to the diversity of generator output?,,2,8,,,,CC BY-SA 4.0 30135,2,,30061,8/12/2021 3:01,,0,,"

A quick Google search for the keyword "definition of iteration in machine learning" gives us a lot of results. I would like to stick with this Stack Overflow question.

Taking your example, if we have 100 samples and assume a batch size of 20, then the number of iterations per epoch is 5. If there is one iteration in which all 20 samples are predicted correctly, that iteration should still be counted, which means the epoch still has 5 iterations; the number of iterations matters in some situations, such as controlling the learning rate during training (decay or cyclic schedules).

If you feel uncomfortable with no gradient / no update happening in that iteration, you can think of it as your model's weights being updated with a gradient of 0.
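
As a small illustration of that bookkeeping (a sketch with made-up numbers):

num_samples = 100
batch_size = 20
iterations_per_epoch = num_samples // batch_size  # 5 weight updates per epoch

num_epochs = 10
total_iterations = iterations_per_epoch * num_epochs  # 50 updates in total,
# even if some of those updates happen to have a gradient of (almost) 0
print(iterations_per_epoch, total_iterations)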

",41287,,,,,8/12/2021 3:01,,,,0,,,,CC BY-SA 4.0 30137,2,,30064,8/12/2021 7:02,,1,,"

TLDR: Simplify your agent.

Context:

As you've noticed, it's not a hard game and it does not require a complex policy.

You'd need a single neuron to solve it perfectly: 1 if (state < 0.78) else 0

Problem:

However, the default agent comes with a big neural network (I believe it's a 2-layer fully connected neural network with 64 units per layer). So you have thousands of parameters trying to solve a simple problem. And that's why it takes thousands of time-steps.

Solution:

So first thing I'd recommend is to use a simpler policy model.

If it doesn't completely solve the problem, then simplify your model by diving deeper into your agent and disabling the functions you think aren't helpful: https://tensorforce.readthedocs.io/en/latest/modules/policies.html

Lastly, if you really need to nail it, you can use some meta-learning for tuning the hyper-parameters.


Edit: adding Meta-Learning

Meta-Learning is a broad concept of using machine learning for setting up your machine learning architecture and/or hyperparameters. It's an automation of your trial and error process, exploring and exploiting different configurations in a search for a good model.

A related concept is auto-ML.

Keep in mind that:

  1. It's a very computationally expensive process, since you need to train and evaluate thousands or millions of models.
  2. You'll need to explicitly define what a good model means to you: Will you reward it for being simple? Are you looking for fast training speed? Does it need to be light for deployment? Or do you want a huge, complex, but accurate model?

In your case, if the training time is too big, you could limit the training time, so you'll find the best model that can be trained in a fixed, short time window. And once again, keep in mind it will take way more time to find a good model this way than just selecting a non-ideal architecture and training it extensively.

Here is a video by Siraj talking about the general concept.

",49188,,49188,,8/13/2021 20:41,8/13/2021 20:41,,,,8,,,,CC BY-SA 4.0 30138,2,,30069,8/12/2021 8:13,,1,,"

Yes, it absolutely makes sense as the Iris dataset is linearly separable (in the sense that linear decision boundaries are near optimal). This can just about be seen in the scatter plots:

Particularly look at petal width versus petal length; two linear decision boundaries for that pair of variables already give a very low error rate. This shouldn't be surprising, as the dataset was introduced in a paper by Fisher describing his linear discriminant method (about 1936?).

As @htl (+1) rightly points out, a large neural network has way too much capacity for this problem and will likely overfit. However you will probably find that a small neural network fares no better than a linear classifier for this problem.
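
As a quick illustration of that point (a minimal scikit-learn sketch of my own, not part of any benchmark):

# A plain linear classifier already does very well on Iris.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)  # linear decision boundaries
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())  # typically around 0.95-0.97 accuracy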

There has been a lot of excitement about deep neural networks over the last few years, but they are no better than most existing classifiers for these sorts of benchmark datasets. They are best (IMHO) for problems with very large datasets, or where convolutional layers are useful.

The comp.ai.neural-nets (an old Usenet news group that we used before the WWW - I trained my first neural net back in 1990) FAQ is well worth a read for anyone wanting to experiment with neural networks. It contains a lot of folk wisdom that is good to know. From part 3:

Subject: How many hidden layers should I use?

You may not need any hidden layers at all. Linear and generalized linear models are useful in a wide variety of applications (McCullagh and Nelder 1989). And even if the function you want to learn is mildly nonlinear, you may get better generalization with a simple linear model than with a complicated nonlinear model if there is too little data or too much noise to estimate the nonlinearities accurately.

This is excellent advice.

BTW, in this case, you might want to try to diagnose the problem by training your neural network on a two-dimensional version of the problem, so that you can plot the decision boundary of the model and see what it is doing. Sepal Width -v- Sepal Length might be a good choice. Experimenting with low-dimensional problems that can be directly visualised is often very informative. I often use Brian Ripley's synthetic dataset in my work because you can easily generate as much data as you could want.

",49192,,49192,,8/12/2021 8:34,8/12/2021 8:34,,,,0,,,,CC BY-SA 4.0 30139,1,,,8/12/2021 9:29,,0,90,"

I have a time series classification problem that uses a series of if-else statements to arrive at a particular label. I am attempting to use ML/DL to make the system simpler.

So far, I have tried using a tabular data approach where I take a snapshot of information up to a particular point. For example, this will be the rolling sum of certain columns and so on. I have also tried LSTM and CNN. All these approaches have failed to give me F1 scores significantly above 50 %.

Are there other ML/DL approaches that I should try before giving up? The models were built using AutoKeras and PyCaret.

",49194,,,,,8/13/2021 4:41,Transforming a complex if-else decision-making to ML,,1,0,,,,CC BY-SA 4.0 30140,2,,14290,8/12/2021 9:42,,0,,"

Minimising MSE in a classification setting is perfectly reasonable, as it is also known as the Brier score and is a proper scoring rule, which means that it is minimised if the network outputs the conditional probability of class membership. This is not unduly surprising, as minimising MSE leads to a model that outputs an estimate of the conditional mean of the target distribution, which for a 1-of-c coding is the conditional probability of class membership. You can even use MSE for training networks with logistic or softmax activation functions in the output layer so that they obey the usual constraints of being in the interval $[0,1]$ and summing to one.

However, the MSE penalises very confident misclassifications much less harshly than the cross-entropy metric does. Whether this is a good or bad thing depends on the needs of the application. If you are mostly interested in the p=0.5 decision boundary, then you probably don't want model resources spent dealing with highly confident misclassifications, which are a long way from the decision boundary, and have little effect on it. This is a large part of the justification for purely discriminative methods like the SVM.
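
To make that difference concrete, here is a small sketch of my own comparing the two losses for a single, increasingly confident misclassification:

import numpy as np

# The true class is 1, but the model assigns it probability p (a mistake
# whenever p < 0.5). Compare how the two losses grow as p approaches 0.
for p in [0.4, 0.1, 0.01, 0.001]:
    brier = (1.0 - p) ** 2       # squared error / Brier score term
    cross_entropy = -np.log(p)   # cross-entropy term
    print(p, round(brier, 3), round(cross_entropy, 3))

# The Brier term saturates near 1, whereas cross-entropy grows without bound,
# so cross-entropy punishes very confident mistakes far more harshly.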

",49192,,,,,8/12/2021 9:42,,,,0,,,,CC BY-SA 4.0 30141,1,30147,,8/12/2021 9:59,,0,120,"

I am working on an RL problem that I am trying to solve using a Deep Q-network. The problem concerns choosing drivers to take specific taxi orders. I am familiar with most of the existing works, and they use RL to determine which orders to take. I specifically look at the situation where we want to determine which drivers to choose.

This means that action space concerns the various drivers we can choose. Initially, we assume a fixed number of drivers to ensure a fixed action space.

My question is about defining the state space. First of all, the state space consists of information about the next order we are trying to assign to a driver from our action set. Besides that, we also want to incorporate state information about the different drivers (e.g. their location). However, this would mean we include state information about the actions as input of the DQN. The reason is that the state of the drivers is the main thing that changes when choosing a different action and therefore determines the choice we want to make at the next timestep. I am for example thinking about creating a list of size |drivers| with element i defining the location of agent i.

I tried to find existing work that uses a similar setting (so that incorporates action states in the state input), however, I did not succeed in this yet. Therefore I am wondering:

  • Is this a logical/reasonable approach to the problem?
  • If yes, is someone familiar with existing works that use a comparable approach?

I am familiar with works that take (state, action) as input, which describes the full pair of the state s and the action a, and then produce a single Q(s,a) for each specific pair of state + action. This is an approach we do not want to take, given that it leads to |A(s)| passes through the network instead of a single pass (as explained here).

",49196,,49196,,8/12/2021 13:22,9/11/2021 16:03,How to incorporate action information in the state input of a DQN?,,1,0,,,,CC BY-SA 4.0 30142,1,,,8/12/2021 10:40,,1,44,"

I've recently read about NeuralHash, and immediately thought that it might be used as a loss for an autoencoder. However, it only seems to preserve "structure" from what I've read, not actual pixel values (which makes sense, given its purpose). Thus, how likely is it that an autoencoder performs well given a loss that compares the NeuralHash of its output with the NeuralHash of its input?

I feel like, assuming that NeuralHash is secure, it should either work well, that is, produce an image similar to its input (because the hash is approximately unique), or not work at all (otherwise we would have found a collision), with no middle ground. Are there any thoughts/research on this?

",49198,,49198,,8/12/2021 15:16,8/12/2021 15:16,Can NeuralHash be used as a loss for an Autoencoder?,,0,0,,,,CC BY-SA 4.0 30143,1,,,8/12/2021 11:44,,2,458,"

NEAT is an evolutionary algorithm. When would you want to use NEAT over more traditional/common RL algorithms like PPO or SAC, etc.? What advantage does it give you?

",45240,,49255,,8/15/2021 5:19,10/24/2022 0:52,In what situation would you want to use NEAT over reinforcement learning?,,1,4,,,,CC BY-SA 4.0 30144,1,30149,,8/12/2021 12:58,,1,79,"

All the literature I read seems to indicate catastrophic forgetting affects only neural networks. Do other online/incremental algorithms not suffer from catastrophic forgetting (for example, SGDClassifier)? Why would that be the case?

",49201,,2444,,8/12/2021 13:26,8/12/2021 16:45,Do other online/incremental algorithms not suffer from catastrophic forgetting?,,1,0,,,,CC BY-SA 4.0 30145,1,,,8/12/2021 13:47,,1,469,"

I was reproducing the findings of a research article in which I discovered that they had switched the channel dimension from last to first. To clarify this concept, I went through A Gentle Introduction to Channels-First and Channels-Last Image Formats. The author of this link stated:

When represented as three-dimensional arrays, the channel dimension for the image data is last by default, but may be moved to be the first dimension, often for performance-tuning reasons.

There are two ways to represent the image data as a three dimensional array. The first involves having the channels as the last or third dimension in the array. This is called “channels last“. The second involves having the channels as the first dimension in the array, called “channels first“.

Channels Last. Image data is represented in a three-dimensional array where the last channel represents the color channels, e.g. [rows][cols][channels].

Channels First. Image data is represented in a three-dimensional array where the first channel represents the color channels, e.g. [channels][rows][cols].


We are aware, theoretically, of how the kernels are applied when the channel dimension comes last. However, I'm curious about what happens when the channel dimension comes first: how will the kernels process the data? More precisely:

Assume we have [rows, columns, channels] -> [2,4,3] image dimensions. We may say we have three data channels, each with two rows and four columns, correct?

Alternatively, if we assume the channel dimension comes first, this means [channels, rows, columns] -> [3,2,4]. In other words, do we now have four data channels, each with three rows and two columns? If I am understanding correctly, then this is quite confusing, because we would be completely modifying our image.
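
For concreteness, this is the kind of reordering I mean (a minimal NumPy sketch with an illustrative 2x4x3 array):

import numpy as np

# A tiny channels-last "image": 2 rows, 4 columns, 3 channels.
img_last = np.arange(2 * 4 * 3).reshape(2, 4, 3)

# The channels-first layout of the same data.
img_first = np.moveaxis(img_last, -1, 0)

print(img_last.shape)   # (2, 4, 3)  -> [rows][cols][channels]
print(img_first.shape)  # (3, 2, 4)  -> [channels][rows][cols]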

Question:

What is the benefit of shifting the channel dimension first, and how will the kernels move on it?


For more detail check the code:

input_layer = tf.keras.Input(shape=input_shape, name="Time_Series_Activity")
con_l1 = tf.keras.layers.Conv2D(64, (5, 1), activation="relu", data_format='channels_first')(input_layer)

Summary of Code

Layer (type)                 Output Shape              Param #   
=================================================================
Time_Series_Activity (InputL [(None, 1, 30, 52)]       0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 64, 26, 52)        384       
_________________________________________________________________
",41756,,49188,,8/23/2021 23:07,8/23/2021 23:07,What would be the advantage of making channel dimension first in TensorFlow Keras implementation?,,0,0,,,,CC BY-SA 4.0 30147,2,,30141,8/12/2021 15:13,,0,,"

Drivers are not actions in this case; they are objects that are part of the state space. Your state vector would look something like this \begin{equation} \mathbf{x} = [x_{o}^T, x_1^T,\ldots, x_N^T]^T \end{equation} where $x_{o}$ is a vector that contains information about the order location (e.g. starting location, destination location, etc.) and $x_i$ is a vector that represents the state of driver $i$ (e.g. location of the driver, whether they have an active order, etc.). The action in this case would be assigning the order to one of the drivers, not the drivers themselves. For instance, if you assign the order to driver $1$, the order vector $x_o$ would change and the vector $x_1$ would also change, so you would change your total state $\mathbf x$.
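
A minimal sketch of how such a state vector could be assembled (the feature names and sizes below are just assumptions for illustration):

import numpy as np

def build_state(order_features, driver_features):
    # Concatenate the order features x_o and the per-driver features x_1..x_N
    # into a single flat state vector x.
    return np.concatenate([np.asarray(order_features).ravel(),
                           np.asarray(driver_features).ravel()])

# Example with 3 drivers; a DQN would then output one Q-value per driver index.
x_o = [0.1, 0.4, 0.8, 0.2]                  # e.g. pickup (x, y), dropoff (x, y)
drivers = [[0.3, 0.3, 0.0],                 # e.g. driver (x, y), has active order
           [0.9, 0.1, 1.0],
           [0.5, 0.7, 0.0]]
state = build_state(x_o, drivers)
print(state.shape)  # (13,) = 4 order features + 3 drivers * 3 features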

",20339,,,,,8/12/2021 15:13,,,,3,,,,CC BY-SA 4.0 30148,2,,16022,8/12/2021 15:25,,0,,"

Alternatively, you might measure the angle between the two vectors (assuming they are points on a sphere), perhaps using their scalar product and use that as the loss function.

$a \cdot b = \|a\|\|b\|\cos\theta$

(or just use polar co-ordinates)

An important question is whether the direction of the errors is likely to be uniform, or whether errors in particular directions happen more often than others (in which case that needs to be built into the loss function).
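
A minimal sketch of such an angle-based loss (just one possible formulation, with an epsilon added for numerical safety):

import numpy as np

def angular_loss(a, b, eps=1e-8):
    # Angle between the two vectors, obtained from their scalar product.
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    # Clip to avoid NaNs from tiny numerical overshoots outside [-1, 1].
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

print(angular_loss([1.0, 0.0], [0.0, 1.0]))  # ~pi/2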

",49192,,49192,,8/12/2021 16:04,8/12/2021 16:04,,,,0,,,,CC BY-SA 4.0 30149,2,,30144,8/12/2021 16:39,,0,,"

To shed light on this, you need to understand the cause of catastrophic forgetting. Fundamentally, the cause is an overlap in the representations of different aspects of the data in the learning model [Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks]. This can be explained easily in neural networks by the representations of the data in the hidden layers. However, in learning models such as the SGD classifier, there is no explicit representation of the input data; we merely have a decision function to classify it. Hence, there is little serious discussion of catastrophic forgetting in the machine learning literature for learning methods other than neural networks.

By the way, this forgetting can appear in another form of the problem, called "concept drift" [Understanding Concept Drifts]. If the data distribution changes while training a model (any learning model, such as the SGD classifier), the model is likely to forget the first part of the data as it is fitted to the new distribution. The cause of such an issue is the limited capacity of the model for fitting different distributions of data. So, you can find many discussions of "concept drift" in the literature for all types of learning methods.

",4446,,4446,,8/12/2021 16:45,8/12/2021 16:45,,,,1,,,,CC BY-SA 4.0 30153,2,,30139,8/13/2021 4:41,,1,,"

Are you sure that there is a need for ML? If there is a set of rules that allows you to solve this problem without ML, it already gives 100% accuracy, whereas ML/DL will do it only up to a certain accuracy, and this task may even be tough for a supervised algorithm. Sorting problems, for example, are very difficult for neural networks.

Anyway, check whether your algorithm can express these if-else statements. For example, such a branch, given that the output is of the same size for both conditional branches, can be approximated by the following:

sigmoid(beta * condition) * output_1 + sigmoid(-beta * condition) * output_2

Here beta controls the slope of the sigmoid, condition is the conditional statement of the form $f(x) >(\geqslant) 0$, and output_1 and output_2 are the outcomes of both options.
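
A minimal runnable sketch of this smooth branching (a toy illustration of the expression above, with arbitrary values for beta and the two outputs):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_if_else(condition_value, output_1, output_2, beta=10.0):
    # Differentiable approximation of: output_1 if condition_value > 0 else output_2.
    # Larger beta makes the blend closer to a hard if-else.
    gate = sigmoid(beta * condition_value)            # sigmoid(beta * condition)
    return gate * output_1 + (1.0 - gate) * output_2  # (1 - gate) = sigmoid(-beta * condition)

print(soft_if_else(+2.0, 1.0, -1.0))  # close to  1.0 (the "then" branch)
print(soft_if_else(-2.0, 1.0, -1.0))  # close to -1.0 (the "else" branch)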

",38846,,,,,8/13/2021 4:41,,,,0,,,,CC BY-SA 4.0 30154,1,,,8/13/2021 6:13,,0,98,"

I am currently in the process of reading about and understanding style transfer. I came across an equation in the research paper, which went like this:

For context, here is the paragraph -

Generally each layer in the network defines a non-linear filter bank whose complexity increases with the position of the layer in the network. Hence a given input image is encoded in each layer of the Convolutional Neural Network by the filter responses to that image. A layer with $N_l$ distinct filters has $N_l$ feature maps each of size $M_l$, where $M_l$ is the height times the width of the feature map. So the responses in a layer $l$ can be stored in a matrix $F^l \in \mathcal{R}^{N_l \times M_l}$, where $F_{ij}^l$ is the activation of the $i$th filter at position $j$ in layer $l$. To visualise the image information that is encoded at different layers of the hierarchy one can perform gradient descent on a white noise image to find another image that matches the feature responses of the original image (Fig 1, content reconstructions). Let $\vec p$ and $\vec x$ be the original image and the image that is generated, and $P^l$ and $F^l$ their respective feature representation in layer $l$. We then define the squared-error loss between the two feature representations $\mathcal{L}_{content}(\vec p, \vec x, l) = {1\over 2} \sum_{i,j} \big(F_{ij}^l - P_{ij}^l \big)^2$. The derivative of this loss with respect to the activations in layer $l$ is [the equation above $(2)$].

I just want to know why the partial derivative is $0$ when $F_{ij}^l < 0$.

",49078,,30751,,8/13/2021 20:22,8/13/2021 20:22,Why the partial derivative is $0$ when $F_{ij}^l < 0$?. Math behind style transfer,,1,2,,,,CC BY-SA 4.0 30157,1,31765,,8/13/2021 9:25,,3,686,"

I was wondering if it is possible to use GPT-3 to translate a text description of a circuit into a program in some circuit design language, which in turn can be used to make the circuit. If it is possible, what approach would you suggest?

",45586,,,,,2/17/2022 18:30,How can GPT-3 be used for designing electronic circuits from text descriptions?,,2,3,,,,CC BY-SA 4.0 30158,2,,30154,8/13/2021 11:13,,2,,"

$F_l$ is the activation of the filter. They state in the paper that they base their method on VGG-Network, which uses ReLU as its activation function. In fact, VGG uses it in all of its hidden layers. ReLU is defined as

$$f(x) = max(0,x)$$

Since ReLU is 0 for all $x$ below 0, the equation above holds; when $x$ is non-positive, all terms in the loss function are constants with respect to $F_{ij}^l$.

",31879,,31879,,8/13/2021 12:07,8/13/2021 12:07,,,,1,,,,CC BY-SA 4.0 30159,2,,24409,8/13/2021 14:47,,2,,"

There is a pre-trained language model called ProphetNet for sequence-to-sequence learning with a novel self-supervised objective called future n-gram prediction.

https://github.com/microsoft/ProphetNet

Also, there are a few variants on the Hugging Face website as well: https://huggingface.co/models?search=ProphetNet

",16334,,,,,8/13/2021 14:47,,,,0,,,,CC BY-SA 4.0 30161,2,,30079,8/13/2021 19:03,,1,,"

Yes. You are right. The IS is bound by the number of classes.

This paper titled "A Note on the Inception Score" clearly shows a formal proof of the same. Please head to section 3.3 of the main text for a description and the appendix for the proof.

",11030,,,,,8/13/2021 19:03,,,,0,,,,CC BY-SA 4.0 30163,1,30180,,8/13/2021 20:11,,1,192,"

I know how skip connections work: you add the activations of the previous layer to the activations of a successive layer to stabilize information/gradient flow.

My question is, why doesn't it just get implemented in the seemingly more sensible way of concatenating some previous layer's activations onto a later layer's activations?

Most regularization methods are implemented somewhat transparently (to avoid possible negative consequences, e.g. BatchNorm having learnable parameters that can effectively disable it), while this method instead interferes with the regular functioning of the network rather than simply making itself available in case it is useful.

What is the reasoning behind the choice to do this rather than simply using concatenation?

",42996,,42996,,9/9/2021 18:23,9/10/2021 15:46,Why do skip layer connections require the same layer sizes?,,3,6,,,,CC BY-SA 4.0 30164,2,,15648,8/14/2021 0:03,,1,,"

Genetic Algorithm is not the best approach here:

  1. GA is a stochastic method and therefore will never guarantee the best possible solution.
  2. Your solution (a GA individual) is modeled as a simple integer. GA is best suited to finding a solution modeled as an array (like our genomes), so that it can do mutation and crossover.

Good Approach:

The set |S|=N can be around 10000, and each element also has a value in tens of thousands.

It certainly looks like a lot, but for a computer, that's not so much. We can start with brute force and incrementally refine it for better performance:

Brute Force:

You can try every single integer D with T < D <= max(S) and store the best candidate.

# Fully functional python solution:
import random
import time

# Let's start initializing a random T
T = random.randint(1, 100)
# And an S, with 10,000 integers from T to 10,000
S = [random.randint(T, 10000) for i in range(10000)]


def LowestRemainder(List, Threshold):
    # Storing the all time lowest D and remainder
    Best_D_SoFar = Threshold+1
    lowestRemainderSoFar = sum(List)
    for d in range(Threshold+1, max(List)+1):
        remainder = sum(c%d for c in List)
        if remainder < lowestRemainderSoFar:
            lowestRemainderSoFar = remainder
            Best_D_SoFar = d
    return Best_D_SoFar, lowestRemainderSoFar

print("Looking for D > %d, that minimizes the sum of all remainder in a list containing %d integers" % (T, len(S)))
# Keep track of time
start_time = time.time()
# call the function
D, R = LowestRemainder(S, T)
# stop the clock
elapsed_time = time.time() - start_time
print("Found D =", D, "and lowest remainder =", R, "in", elapsed_time, "seconds")

I've tested it a few times on my machine and it always runs in less than 4 seconds (for the maximum case above: a 10,000-element list with random integers up to 10,000).

But that's the brute-force baseline. So let's make it more efficient by interrupting the inner loop early:

    # Remove this:
    # remainder = sum(c%d for c in List)
    
    # Add this instead:
      remainder = 0
      for c in List:
          remainder += c%d
          if remainder > lowestRemainderSoFar:
              break

And done!

It now takes less than 0.2 seconds

(for the whole 10,000-element array with random integers up to 10,000)


Extra thoughts:

As the best solution is probably a low number (as discussed later), if it takes too long to finish, you can simply interrupt and you'll still have a good solution (not guaranteed to be optimal) just like with GA (and probably better, as you'll quickly exploit the best candidates, instead of exploring the space).

Factorization is our friend:

Suppose a solution D, with a score S.

Now factorize the solution, finding the list of primes that compose D: [p1,p2,…].

Now remove some p, generating a D' = D/p, and you will notice that c mod D >= c mod D' for any c, any D, and any p.

That means, you don't need to explore any multiple of D, if D was already evaluated.

That will drastically drop the amount of trials!

No threshold scenario:

Using this factorization principle, in the simpler scenario where T=1 (if T<1, then D=1 is the best solution), we guarantee that the solution is a prime number! Also, you don't need to search up to the biggest number max(S), as sqrt(max(S)) will be enough.

With threshold scenario

It makes it a little harder to program, but still you just need to loop through co-primes.

Upper limit:

Once you hit the first candidate (d) larger than the smallest S (d>min(S)), then that element's remainder will be min(S).

If the best solution so far is already better than min(S), we can skip all the next candidates.

This logic is cumulative. Once the candidate d is larger than the N lowest S, the sum of remainders will be at least the sum of all N lowest S. So, our loop could be something like:

# Stop once the elements of S already below d sum to more than the best remainder found so far.
if sum(s for s in S if s < d) > lowestRemainderSoFar:
  return d
else:
  d = next_coprime()

In a list with thousands of elements, there's a high chance that the smallest element is found early. And even if it does not satisfy the finish condition, it will only accumulate more elements as d grows.

In summary:

Start with a baseline and restrict the boundaries until your code runs in the desired time.

Example:

Inputs:

  • S = [10, 19, 28]
  • T = 3

Process: Testing all d>T, skip the multiples and stop on upper limit:

  • d = 4 --> remainders = [2,3,0] = 5
  • d = 5 --> remainders = [0,4,3] = 7
  • d = 6 --> remainders = [4,1,4] = 9
  • d = 7 --> remainders = [3,5,0] = 8
  • d = 8 --> is multiple of 4 (skip)
  • d = 9 --> remainders = [1,1,1] = 3
  • d = 10--> is multiple of 5 (skip)
  • d = 11--> 11>min(S) --> remainders = [10,x,x] > 3. DONE!
",49188,,49188,,8/14/2021 16:41,8/14/2021 16:41,,,,0,,,,CC BY-SA 4.0 30165,1,30166,,8/14/2021 10:44,,1,310,"

I am reading this book called "Deep Learning" (by Goodfellow, Bengio and Courville).

On page 326, in the first paragraph, it says:

CNNs are a specialized kind of neural network for processing data that has a known grid-like topology. Examples include time-series data, which can be thought of as a 1-D grid taking samples at regular time intervals, and image data, which can be thought of as a 2-D grid of pixels

Considering an image as a grid is completely intuitive. And, similarly, we can extend the logic to a 1-D time series.

But then what cannot be considered as having a grid-like structure?

",45586,,2444,,8/14/2021 13:50,8/14/2021 13:50,What exactly is a grid-like topology according to the book Deep Learning?,,1,0,,,,CC BY-SA 4.0 30166,2,,30165,8/14/2021 11:01,,1,,"

Graphs.

In graphs, nodes and edges do not usually have a specified position/orientation: for example, we cannot say that node $A$ is to the right/left of node $B$ because edges do not typically have an orientation. In grid-like structures, like images, we could say that about pixels. In a graph, all we know is that nodes are connected to other nodes via some edges, which may have weights, but, again, no orientation.

",2444,,,,,8/14/2021 11:01,,,,0,,,,CC BY-SA 4.0 30169,1,,,8/14/2021 14:06,,1,109,"

The main advantages of the self-attention mechanism are:

  • Ability to capture long-range dependencies
  • Ease to parallelize on GPU or TPU

However, I wonder why the same goals cannot be achieved by global depthwise convolution (with the kernel size equal to the length of the input sequence) with a comparable amount of flops.

Note:

In the following, I am comparing against the original architecture from the paper Attention Is All You Need.

Idea:

Consider the depthwise convolution of size $L$ with circular padding: $$ y_{t,c} = W_{t^{'},c} x_{t^{'} + t, c} $$ Here, $x$ is the input signal and $y$ is the output signal, $t$ is the position in the sequence, and $c$ is the channel index. Since the convolution is depthwise the given output channel depends on the unique input channel (we would like to have linear complexity in the dimension of the embedding vector).

After a single convolution, one definitely would not have any interactions between the tokens in the sequence.

However, a two-layer convolutional network with these tokens is able to capture long-range pair-wise interactions: $$ x_{t,c}^{(2)} = W_{t^{''},c}^{(2)} \sigma(W_{t^{'},c}^{(1)} x_{t^{'} + t, c}^{(0)}) $$

And by stacking a not very large number of these layers (like 12 or 24) one can model interactions between tokens in the sequence of arbitrary complexity.

Comparison of complexity:

The asymptotic complexity of both approaches seems to be the same.

  • Attention: $O (L^2 d)$

  • Depthwise convolution: $O (L^2 d)$

However, dot product attention seems to be a rather intuitive and biologically motivated operation that is crucial for sequence problems.

Has this question been studied in the literature or discussed somewhere before?

EDIT

De facto, global depthwise convolution is used in MLP-Mixer. One stage performs convolution with a global receptive field (of the size of the feature map), and the other operation is a pointwise convolution with kernel_size=1.

",38846,,38846,,12/5/2021 17:47,12/5/2021 17:47,Couldn't the self-attention mechanism be replaced with a global depth-wise convolution?,,0,2,,,,CC BY-SA 4.0 30170,1,,,8/14/2021 20:40,,1,156,"

In the NEAT algorithm, what is the purpose of treating disjoint and excess genes differently?

They are treated so (or may be treated potentially) at least when calculating the distance between 2 individuals when dividing the population into species (c1 and c2 coefficients).

",49255,,2444,,8/17/2021 10:34,8/17/2021 10:34,"In the NEAT algorithm, what is the purpose of treating disjoint and excess genes differently?",,1,0,,,,CC BY-SA 4.0 30171,1,,,8/15/2021 4:33,,1,20,"

Affine transformation on $X$ is a transformation of the following form

$$Y = wX + b$$

In general, $w, X, Y$ and $b$ are tensors.

We generally call the tensor $X$ the input to the affine transformation, i.e., the tensor which we want to transform. We call $w, b$ the weight and bias tensors, respectively. We call $Y$ the output tensor after the transformation. Every layer of a multi-layer perceptron contains an affine transformation.

Suppose I have two types of inputs, say $X_1, X_2$. Now, I want to apply an affine transformation on them together.

Consider the following

#1: Combining using individual affine transformations

$$Y_1 = w_1X_1 + b_1$$ $$Y_2 = w_2X_2 + b_2$$ $$Y_1Y_2 = w_1w_2X_1X_2 + w_1b_2X_1 +w_2b_1X_2 + b_1b_2$$

#2 multiplying them and applying affine

$$Y = wX_1X_2 + b$$

#3 concatenating them and applying affine

$$Y = w (X_1, X_2) + b$$

Is any of the above eligible to be called an affine transformation in terms of $X$ and $Y$ (not $XY$)? If not, is it true that there is no such thing as an affine transformation on two inputs taken together?

",18758,,40434,,8/15/2021 8:53,8/15/2021 8:53,Is there any concept like 'applying affine transformation on multiple inputs'?,,0,0,,,,CC BY-SA 4.0 30172,2,,30170,8/15/2021 10:20,,1,,"

In the original NEAT paper, these two concepts are defined distinctly.

When crossing over, the genes in both genomes with the same innovation numbers are lined up. These genes are called matching genes. Genes that do not match are either disjoint or excess, depending on whether they occur within or outside the range of the other parent’s innovation numbers. They represent structure that is not present in the other genome. In composing the offspring, genes are randomly chosen from either parent at matching genes, whereas all excess or disjoint genes are always included from the more fit parent.

However, in terms of process, they are handled equivalently. If you look at equation 1 on page 110, the equation uses coefficient c1 for excess genes E and c2 for disjoint genes D. But in the parameter settings, c1 and c2 are set to equal values. So, while the evaluations presented in the paper do not distinguish between excess and disjoint genes from a process perspective, the purpose is for the framework to allow the distinction between D and E genes to be used by the user of the framework, by supplying the values that make sense in their context.

",20745,,,,,8/15/2021 10:20,,,,3,,,,CC BY-SA 4.0 30173,1,,,8/15/2021 10:44,,0,42,"

I am trying to run a code that has a batch size around 28. I can run the program on my GPU with this batch size.

But, when I modify the code for my requirements and try to run it, it shows a run-time error due to insufficient memory on the GPU.

I checked the possible batch sizes that I can run, and it is just 2-5.

I am not sure whether there is any issue if I run with such small batch sizes. I mean, will there be any performance issues, keeping aside the time it takes?

",18758,,18758,,8/15/2021 11:15,9/15/2021 15:00,Will there be any changes in the model's performance due to the usage of very small batch sizes?,,1,1,,,,CC BY-SA 4.0 30174,2,,30105,8/15/2021 17:10,,2,,"

How should I interpret or visualize this fact intuitively?

You could visualize it as a point in a geometrical space:

  • 8 is just a number
  • [8] is just a number in a line
  • [[8]] is a number in a plane
  • [[[8]]] is a number in a space

The object (number 8) won't change. The space around it changes.

  • You can never represent a complex object (3d-cube) in a simpler shape (2d-plane).
  • But you can always represent a simpler object (2d-square) in a higher dimensional shape (3d-space).

A number is the simplest possible object, and therefore it "fits inside" (can be represented in) any dimension.
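
A tiny NumPy sketch of the same idea (purely illustrative):

import numpy as np

print(np.array(8).shape)        # ()        -> a bare number (0-D)
print(np.array([8]).shape)      # (1,)      -> a number on a line (1-D)
print(np.array([[8]]).shape)    # (1, 1)    -> a number in a plane (2-D)
print(np.array([[[8]]]).shape)  # (1, 1, 1) -> a number in a 3-D space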

",49188,,18758,,1/18/2022 23:30,1/18/2022 23:30,,,,0,,,,CC BY-SA 4.0 30175,1,,,8/16/2021 0:06,,-1,118,"

I need a PyTorch Model which can do road segmentation on OAK-D camera.

The model provided requires Input Image Size: 896x512, which is too big for running on OAK-D camera. Thus I need to re-train it with a smaller input size(224x224) and just need the BG(background) and road classes, or if any other options available which can easily make it running on the OAK-D camera.

Does anyone know how to do this?

",27988,,62466,,11/2/2022 13:19,11/2/2022 13:19,How to re-train an AI model to have smaller input image size,,1,5,,,,CC BY-SA 4.0 30176,1,,,8/16/2021 0:17,,1,697,"

In general, if I have a collection of data then mean(Expectation) and standard deviation are calculated as follows

$$\text{mean } = \mu = \mathbb{E}[X] = \sum\limits_{i = 1}^n p_ix_i $$ $$\text{standard deviation} = \sigma (X) = \sqrt{\sum\limits_{i = 1}^{n}p_i(x_i - \mu)^2}$$

where $X$ is a random variable having support $\{x_1, x_2, x_3, \cdots, x_n\}$.

Thus a dataset of samples has a single mean and a single standard deviation.

Now, let us discuss the case of variational autoencoders. They look as follows:

Suppose I trained the above autoencoder on a training set; then, for each sample, I will get a mean and standard deviation at the latent layer. Here, we can get a new $\mu$ and $\sigma$ for each data sample. But, as we saw earlier, the mean and standard deviation exist for a dataset and not for each sample.

I am confused about "how can we say that mean and standard deviation are obtained at latent layer if they are not constant in nature"?

",18758,,40434,,8/16/2021 5:27,3/18/2022 10:44,Are mean and standard deviation in variational autoencoders unique?,,2,1,,,,CC BY-SA 4.0 30177,2,,30176,8/16/2021 6:24,,1,,"

Mean $\mu(x)$ and standard deviation $\sigma(x)$ are actually learnable functions, whose parameters are adjusted via the backpropagation procedure.

Mean and standard deviation are not computed on the input vector $x$ or any transform of it.

The procedure is the following:

  • Pass the sample $x$ from the training data
  • Propagate this vector $x$ through some NN (Feedforward MLP) and obtain some other vector $\tilde{x}$
  • Get the mean $\mu(x)$ and std $\sigma(x)$ from $\tilde{x}$ via two more neural networks (maybe single-layer ones)
  • Generate random noise $\varepsilon$ and get a point in the latent space $\mu(x) + \sigma(x) \varepsilon$ (it is known as reparametrization trick)

You can think about the procedure as follows: you have a Normal distribution around each point of the input data in the latent space, and the mean $\mu(x)$ and $\sigma(x)$ are the parameters of this distribution (different for each point). The generated data is expected to resemble the training example, but differ in some reasonable sense, belonging to the manifold of realistic images.
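
A minimal sketch of this reparametrization step (plain NumPy with illustrative shapes, rather than any specific framework):

import numpy as np

rng = np.random.default_rng()

def reparameterize(mu, log_sigma):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); mu and log_sigma are the
    # per-sample outputs of the encoder heads.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(log_sigma) * eps

# Two different inputs yield two different (mu, sigma) pairs from the encoder,
# hence two different Normal distributions in the latent space.
mu_1, log_sigma_1 = np.array([0.5, -1.0]), np.array([-0.1, 0.3])
print(reparameterize(mu_1, log_sigma_1))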

",38846,,,,,8/16/2021 6:24,,,,2,,,,CC BY-SA 4.0 30178,1,30181,,8/16/2021 8:12,,2,1248,"

In general, can ANNs have continuous inputs and outputs, or do they have to be discrete?

So, basically, I would like to have a mapping of continuous inputs to continuous outputs. Is this possible? Does this depend on the type of ANN?

More specifically, I would like to use neural networks for learning the Q-function in Reinforcement Learning. My problem has basically a continuous state and action space, and I would like to know whether I have to discretize it or not (a more detailed description of my problem can be found here).

",48758,,2444,,8/17/2021 10:58,8/18/2021 10:39,"Can neural networks have continuous inputs and outputs, or do they have to be discrete?",,1,0,,,,CC BY-SA 4.0 30179,2,,30163,8/16/2021 8:45,,0,,"

Of course, it does create more parameters to train, but that seems like a small sacrifice to me.

It is not a "small sacrifice". For the very deep networks that skip connections are applied to, to get the same benefits when concatenating, you would end up witha significant multiplier on the number of parameters.

To get the same passthrough effect on gradient signals (and allow later layers to learn modifications to the identity function), each layer's output would need to be copied to all the following layers. This scales poorly.

Let's take an example of a 10-layer fully-connected network, with 100 neurons per layer in the hidden layers where we want to apply skip connections. In the simple version of this network (ignoring bias to keep the maths simpler), there are 100x100=10,000 parameters for each added layer, making 90,000 parameters overall. If you use addition-based skip connections, the total number of parameters remains the same, at 90,000. If you use concatenation, the layers connect as 100x100, (100+100)x100, (100+100+100)x100 etc, so you end up with 450,000 parameters, five times as many. This is not a "small sacrifice", this is a scaling problem.
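
A small sketch of that parameter count (my own back-of-the-envelope check of the numbers above, ignoring biases):

# 10-layer fully connected network, 100 units per hidden layer, no biases.
width, num_weight_layers = 100, 9  # 9 weight matrices between 10 layers

# Additive skip connections: layer input sizes are unchanged.
additive = num_weight_layers * width * width
print(additive)  # 90,000

# Concatenating all earlier layers: the i-th weight matrix sees i * width inputs.
concatenated = sum(i * width * width for i in range(1, num_weight_layers + 1))
print(concatenated)  # 450,000 -> five times as many parameters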

The technique of concatenating layers into later parts of the network is known and used. As is adding "early output" or head layers to generate gradient signals at different points in the network. These are valid approaches, and can help with vanishing gradient problems in a similar way. However, the additive skip connections in a residual network scale much better into very deep networks.

Concatenated copies of layers, or additional network heads with the same target functions and loss, are still used, but more sparingly.

This avoids the strange choice of addition, which has a questionable benefit

It may avoid a specific addition mechanism that you seem concerned about. However, there are still very similar additions occurring when the concatenated layer feeds forward to the next layer. You could set those weights in a specific way (a copy of the weights that the new layer is concatenating with, for just two combined layers) when the layer sizes are the same, and the concatenation and addition approaches would then be very similar.

",1847,,1847,,8/16/2021 10:06,8/16/2021 10:06,,,,0,,,,CC BY-SA 4.0 30180,2,,30163,8/16/2021 11:57,,2,,"

One can concatenate with the previous layer outputs as well, and this approach is pursued in DenseNets. A nice illustration that compares the difference between ResNets and DenseNets is presented below:

As pointed out in the other answer, it will lead to an increase in computation cost for the same number of channels (given all other properties of the architecture are the same).

Suppose you had a ResNet with a fixed channel size $N$. A standard convolution has computational and storage cost proportional to the product of the input and output channels, or $O(N^2)$ in the present case.

If you concatenated the features from the previous layer, the number of channels would double after each layer. Therefore, the computational cost would grow $4$ times for each new layer, and the total cost grows exponentially with depth in this approach.

However, you can make each convolution shrink the number of channels by a factor of $2$ and concatenate only half of the channels from the previous layer. In this way, the total computational and storage cost is the same in every layer.

",38846,,,,,8/16/2021 11:57,,,,3,,,,CC BY-SA 4.0 30181,2,,30178,8/16/2021 13:39,,6,,"

Neural networks normally work in continuous spaces. A typical neural network function could be written as $f(\mathbf{x}, \mathbf{\theta}): \mathbb{R}^N \rightarrow \mathbb{R}^M$. That is, a function of some $N$ dimensional input vector of real numbers $\mathbf{x}$ that outputs some $M$ dimensional output vector of real numbers (which you could call $\mathbf{y}$) and that is parametrised by a vector of weights and biases $\mathbf{\theta}$.

When you train a neural network with some data $\mathbf{X}, \mathbf{Y}$ then you are trying to adjust $\mathbf{\theta}$ so that $f(\mathbf{x}, \mathbf{\theta})$ approximates some assumed mapping function $g(\mathbf{x}): \mathcal{X} \rightarrow \mathcal{Y}$ that provides a "true" mapping between all possible inputs $\mathcal{X}$ and the matching correct values from all possible outputs $\mathcal{Y}$. In a neural network, then $\mathcal{X}$ and $\mathcal{Y}$ must be sets of vectors of real numbers.

So, yes all neural networks map between continuous values by default*. When NNs are used to work with discrete values, then the discrete variables need to be converted to real-valued variables in order to be usable with a NN. So discrete inputs might be mapped to a small set of different real values or a vector of $\{0, 1 \}$ (called "one hot" encoding). Discrete outputs are often mapped to a probability vector giving the confidence that the learned function assigns to each possible discrete output - this is the usual case in classifiers.

You have a specific use case in mind:

More specifically, I would like to use neural networks for learning the Q-function in Reinforcement Learning. My problem has basically a continuous state and action space, and I would like to know whether I have to discretize it or not

For an action value, or Q function in reinforcement learning, then you do not need to be concerned that state space or action space are continuous. However, there are some details that may be important:

  • Neural networks learn most efficiently when input elements are within a unit-like distribution such as $\mathcal{N}(0,1)$ (normal distribution with mean $0$, standard deviation $1$). This does not have to be precise, but you should take care to scale the input values so that each element has a typical magnitude around $1$.

  • The output values you want to learn need to be within the domain of the output layer's activation function. It is common to use linear (or "no" activation function) in the output layer for regression problems for this reason, and learning the action value Q associated with a state, action pair is a regression problem. So unless you have good reason to do otherwise, use a linear activation function on the output layer.

  • Control problems in continuous action spaces cannot be solved using only a learned action value function. That is because, in order to derive a greedy policy from a Q function, you need to solve $\pi(s) = \text{argmax}_aQ(s,a)$ and finding the maximum value in a continuous action space is not practical. If your Q network is part of an agent solving a control problem - i.e. finding the optimal policy - then you will need some other way to generate and improve the policy. This is usually done using a policy gradient method.

Due to this last point, it is sometimes advisable to discretise the action space if you want to use a simpler value-based RL method, such as DQN.
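
For illustration, a minimal Q-network along those lines might look like this (a Keras sketch with arbitrary layer sizes; only an assumption, not a recommendation):

import tensorflow as tf

state_dim, num_actions = 4, 3  # illustrative sizes only

# Continuous state vector in, one Q-value per (discrete) action out.
# The output layer is linear because estimating Q-values is a regression task.
q_network = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(state_dim,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_actions, activation="linear"),
])
q_network.compile(optimizer="adam", loss="mse")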


* Caveats:

  • There are a few architectures which really do use discrete inputs, weights and/or outputs. These are either historical variants or specialist though, you will not find any mainstream libraries that do this by default.

  • Computers model real numbers as floating point which are technically discrete due to practical limitations of CPUs. Sometimes it is important to know this, but it will not impact modelling a Q function.

",1847,,1847,,8/18/2021 10:39,8/18/2021 10:39,,,,6,,,,CC BY-SA 4.0 30182,1,30193,,8/16/2021 13:48,,0,63,"

https://www.nature.com/articles/s41467-020-17419-7

I am a medical school graduate and I really want to learn AI/ML for computer-aided diagnosis.

I was building a symptom checker and I found this material. It clarifies the drawbacks of associative models, which perform differential diagnosis, and it suggests a counterfactual (causal) approach to improve accuracy.

The thing is I couldn't understand what the formulas mean in the article, e.g.:

$$P(D \mid \mathcal{E}; \theta )$$

I really want to know what | and ; are doing here, what do they mean, etc.

I would be really happy if someone could directly answer or just provide me with some references to get the general idea quickly.

Here comes the most tricky part...

",42543,,42543,,8/16/2021 17:36,8/16/2021 19:32,What does all the formula and pictures mean?,,1,2,,,,CC BY-SA 4.0 30184,1,,,8/16/2021 14:36,,1,33,"

I would like some pointers, possible projects that solve conceptually similar goals, code examples or tutorials.

I am trying to achieve a system that is able to start or stop ventilation of a given space based on different outside and inside metrics, such as humidity, temperature, time, etc., in order to achieve a decrease in relative humidity.

Actuating the system based on simple physics gave me questionable results; this is because I am not able to model the whole dynamics of the system.

I was wondering if reinforcement learning could help me learn a good policy. The system should learn on real-life data, actuating real ventilation, with the obvious slowness of such a system.

I am quite new to AI, but able to comprehend and create a simple OpenAI Gym environment. I am not even sure if and how something like this is achievable with such a limited data flow. I am currently recording and analyzing all possible data I can measure, together with some more or less random ventilation sessions. I am sure there are better ways to do this.

",49278,,49278,,8/16/2021 15:13,8/17/2021 20:16,Where to start with reinforced learning on actions and rewards sampled from slow ongoing real life system,,1,1,,,,CC BY-SA 4.0 30185,1,30331,,8/16/2021 14:42,,1,78,"

I have an interesting example for NEAT and want to clarify which behavior is correct from NEAT's perspective and why (why the opposite is wrong, and what the consequences of choosing the other one are).

So let's say we have an initial network of 3 nodes and 2 edges:

Initial Condition

Nodes: [A, B, C]
Edges: {
1: A->B
2: B->C
}

1st Gen

Then in the 1-st generation we get 2 mutants:

Mutant 1 (edge 1 got split)

Nodes: [A, B, C, D]
Edges: {
1: A->B DIS
2: B->C
3: A->D
4: D->B
}

Mutant 2 (edge 2 got split)

Nodes: [A, B, C, E]
Edges: {
1: A->B
2: B->C DIS
5: B->E
6: E->C
}

2nd Gen

In the 2nd generation, if we mutate Mutant 1 (by splitting edge 2) and mutate Mutant 2 (by splitting edge 1), which result should we get?

Hypothesis 1: the same result:

Nodes: [A, B, C, D, E]
Edges: {
1: A->B DIS
2: B->C DIS
3: A->D
4: D->B
5: B->E
6: E->C
}

or...

Hypothesis 2: Two new mutants:

Nodes: [A, B, C, D, F]
Edges: {
1: A->B DIS
2: B->C DIS
3: A->D
4: D->B
7: B->F
8: F->C
}

and

Nodes: [A, B, C, E, G]
Edges: {
1: A->B DIS
2: B->C DIS
5: B->E
6: E->C
9: A->G
10: G->B
}

In case the second hypothesis is correct, how does it deal with crossover in the next run? Say these 2 mutants are bred. We get:

Breeding in 2nd Hypothesis

Nodes: [A, B, C, D, E, F, G]
Edges: {
1: A->B DIS
2: B->C DIS
3: A->D
4: D->B
5: B->E
6: E->C
7: B->F
8: F->C
9: A->G
10: G->B
}

Looks like too complicated a genome for the 3rd generation, doesn't it?

In case the first option is correct, then innovation numbers are actually somewhat redundant in NEAT and can be handled differently. We can keep the node list as a list of strings (node names). Then, instead of assigning an innovation number to an edge, we can use a string value calculated like HASH(fromNodeName + toNodeName). That way, whenever a new link is created between 2 nodes in any generation, it gets the same innovation name. When a node is created (by splitting an edge), its name can be taken right from the edge being split, and the innovation names of the 2 new edges can be calculated like HASH(fromNodeName + splitEdgeName) and HASH(splitEdgeName + toNodeName). That way the algorithm has no global variables and no shared list of all innovations, and can be easily parallelized.
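
A minimal sketch of what I mean by that hashing scheme (illustrative only; the particular hash function is an arbitrary choice):

import hashlib

def innovation_name(from_name, to_name):
    # Deterministic "innovation name" for an edge, derived only from the names
    # of the nodes it connects, so no global innovation counter is needed.
    return hashlib.sha1(f"{from_name}->{to_name}".encode()).hexdigest()[:8]

# Splitting edge A->B: the new node is named after the split edge, and the two
# new edges get names derived from it.
split_edge = innovation_name("A", "B")
new_node = split_edge
print(innovation_name("A", new_node), innovation_name(new_node, "B"))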

",49255,,49188,,8/24/2021 4:08,8/24/2021 4:08,Different ways to produce the same network in NEAT,,1,0,,,,CC BY-SA 4.0 30186,2,,30173,8/16/2021 14:53,,1,,"

Batch size affects how many training updates (steps) will happen during each epoch.

When the batch size is small, this means that the model sees less data in each weight update. Thus, your question really depends on the data you have, along with the corresponding task (classification / RL, etc.).

If your data is highly imbalanced, then I would not suggest a small batch size, since the probability of seeing a positive instance would be far smaller (assuming you take uniform batches).

For an RL task, imagine using a replay buffer of past experiences where your agent had very few good action selections during the exploration process. Then a small batch size would make training the agent very difficult, since, most of the time, samples with poor action selections would be seen. As a result, the agent may drift away from good policies.

For a classification task, what I always do is make stratified batches. That is, each batch has the same label proportions as the whole dataset. And most of the time it works far better than uniform batches, even for smaller batch sizes. For RL, I would recommend higher batch sizes or similarly clever ways of sampling.
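
A minimal sketch of building such stratified batches with scikit-learn (illustrative sizes and labels):

import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.randn(100, 5)          # 100 samples, 5 features
y = np.array([0] * 80 + [1] * 20)    # imbalanced labels (80/20)

# 10 splits -> 10 batches of 10 samples, each keeping roughly the 80/20 ratio.
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for _, batch_idx in skf.split(X, y):
    batch_X, batch_y = X[batch_idx], y[batch_idx]
    print(batch_y.mean())  # close to 0.2 in every batch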

",36055,,,,,8/16/2021 14:53,,,,4,,,,CC BY-SA 4.0 30189,1,30384,,8/16/2021 16:35,,0,37,"

I've been working on learning about NLP via a beginners competition on Kaggle.

I first trained a model with an embedding layer and then a simple linear layer. I actually got way better than a flip of the coin with this model, so I decided to try to step it up with an LSTM.

What happened was that the training loss decreased and then plateaued, while the validation loss never decreased at all.

In the case of overfitting, I would expect validation loss to decrease for a while but then either remain steady or perhaps even increase as the model starts to overfit.

I can't find any reason for the strange loss curves I'm seeing:

What could cause such a phenomenon?

I would be happy to share my network architecture and training code if there isn't a straightforward answer (I know there usually isn't).

",37691,,,,,8/26/2021 13:08,What is the reason for a training loss that drops but validation that NEVER does,,2,0,,,,CC BY-SA 4.0 30191,1,,,8/16/2021 17:51,,2,161,"

I just learnt about the properties of equivariance and invariance to translation and other transformations. Being invariant to translation is clearly an advantage, as even if the input gets shifted, the network will still learn the same features, and work fine. But how is equivariance useful?

",45586,,,,,5/14/2022 1:01,How can equivariance to translation be a benefit of a CNN?,,2,1,,,,CC BY-SA 4.0 30192,2,,30191,8/16/2021 18:55,,1,,"

Equivariance is useful because the neural network can learn to detect common image components - edges, corners, curves in specific orientations - in a general way that is then applied across a whole image evenly. These components typically do exist and can appear in multiple places within an image, and may be parts of larger-scale features in turn. Identifying all the edges in an image can be useful before any kind of pooling that adds invariance is applied.

Without equivariance, an edge oriented in one way in one part of the image would be completely different to the neural network to the same kind of edge elsewhere. It would only get made part of a relevant filter and used if a specific training example had an important edge in that one place.

It is hard to separate this usefulness of equivariance from the associated reduction in number of free parameters, thanks to using convolutional filters with small amount of local connection as opposed to fully connected neural networks. CNNs have to be equivariant due to the architecture, whilst the invariance requires a little more effort (pooling and/or strided convolutions).

The fact that CNNs are so successful using this approach probably says something about natural images. It should be possible to construct non-natural images where equivariance would be of limited use e.g. where local features don't exist, or where they vary over the image in a way that makes detecting them in more than a few places pointless.

",1847,,,,,8/16/2021 18:55,,,,0,,,,CC BY-SA 4.0 30193,2,,30182,8/16/2021 19:32,,2,,"

According to the provided article,

$$ \begin{equation} \tag{1} P(D| {\mathcal{E}};\ \theta ) \end{equation} $$

is a probability of disease $D$ given findings $\mathcal{E}$, and a model $\theta$ that is used to estimate this probability.

$D$ represents a disease or diseases, and findings $\mathcal{E}$ can include symptoms, tests outcomes and relevant medical history.

The vertical bar symbol | in the conditional probability notation is read "given that", whereas the semicolon symbol ; indicates that we use a model (or parameters of the model) to calculate this probability. For instance, we can define this probability as $P(D| {\mathcal{E}};\ \theta ) = M_{\theta}(\mathcal{E})$, where $M$ can be a neural network with parameters $\theta$ that takes $\mathcal{E}$ as input and returns the probability of $D$.

As we read the article further, we see that Equation 1 is nothing more than the posterior probability from Bayes' theorem:

$$ \begin{equation} \tag{2} P(D| {\mathcal{E}};\ \theta )=\frac{P({\mathcal{E}}| D;\ \theta )P(D;\ \theta )}{P({\mathcal{E}};\ \theta )}. \end{equation} $$

where $P({\mathcal{E}}| D;\ \theta )$ is the likelihood of the findings $\mathcal{E}$ given that we have the disease $D$, $P(D;\ \theta )$ is the prior probability of the disease $D$, and $P({\mathcal{E}};\ \theta )$ is the marginal probability of the findings $\mathcal{E}$.

As the article suggests, Theorem 2 is related to the Noisy-OR model, which itself is a large area of research. I encourage you to read the references provided in the article to learn more about the approaches used by the authors. If you have further questions regarding this theorem, I suggest you open another question.

",12841,,,,,8/16/2021 19:32,,,,0,,,,CC BY-SA 4.0 30196,1,,,8/16/2021 21:21,,1,58,"

These are terms that we frequently come across. What can be said about the differences between them? Would the topics listed under these two terms be different?

",49287,,,,,1/14/2022 6:04,What is the difference between “AI Methods” and “AI Techniques”?,,1,1,,1/14/2022 9:25,,CC BY-SA 4.0 30197,1,,,8/16/2021 22:33,,2,185,"

Most of the algorithms in machine learning I am aware of use datasets and learning happens in an iterative manner given some examples. The examples can also be understood as experience in the case of reinforcement learning.

Consider the following from Numerical Computation chapter of Deep Learning book

Machine learning algorithms usually require a high amount of numerical computation. This typically refers to algorithms that solve mathematical problems by methods that update estimates of the solution via an iterative process, rather than analytically deriving a formula to provide a symbolic expression for the correct solution. Common operations include optimization (finding the value of an argument that minimizes or maximizes a function) and solving systems of linear equations. Even just evaluating a mathematical function on a digital computer can be difficult when the function involves real numbers, which cannot be represented precisely using a finite amount of memory.

I am wondering whether there is any domain in machine learning that deals with solving the problem analytically rather than with computationally heavy iterative algorithms?

",18758,,2444,,8/19/2021 13:13,8/19/2021 15:24,Is there any domain in machine learning that solves a problem by using only analytical algorithms?,,3,9,0,,,CC BY-SA 4.0 30198,2,,30191,8/16/2021 22:38,,1,,"

As explained here, both properties are useful depending on your application and expected result.

  • For an image classifier, you'll expect a invariance (in-variance = not change) result, meaning all results are the same, no matter how you translate the image.
  • For an image segmentation, or an object detector, on the other hand, you'll expect the output to shift together as the input varies. In other words, an equivariance (equi-variance = same change) is expected.
",49188,,,,,8/16/2021 22:38,,,,0,,,,CC BY-SA 4.0 30199,1,,,8/16/2021 23:24,,1,24,"

Consider an architecture or programming language that uses $n$ bits for storing a floating point number in a particular format. Then each and every floating point number it can store should be in a given range, say $[lf, uf]$.

If there is a need to store any floating point number less than $lf$ then we generally treat such phenomenon as underflow. Consider the following from Numerical Computation chapter of Deep Learning book.

One form of rounding error that is particularly devastating is underflow . Underflow occurs when numbers near zero are rounded to zero. Many functions behave qualitatively differently when their argument is zero rather than a small positive number. For example, we usually want to avoid division by zero (some software environments will raise exceptions when this occurs, others will return a result with a placeholder not-a-number value) or taking the logarithm of zero (this is usually treated as $-\infty$, which then becomes not-a-number if it is used for many further arithmetic operations).

You can observe that two examples have been given while explaining underflow: division by zero and the logarithm of zero. Treated mathematically, both are undefined. They should not be an issue of storage, especially underflow.

Is there any reason behind providing such examples, which are mathematically undefined, under the umbrella term underflow and using the term "not-a-number"?

",18758,,,,,8/30/2021 3:42,Why not undefined expression is different from numerical underflow?,,2,1,,,,CC BY-SA 4.0 30200,1,,,8/17/2021 0:21,,3,94,"

In mathematics, there is a proof that the following infinite series converges to a constant irrational number, denoted by $e$, called Euler's number, Euler's constant, or Napier's constant. The value of $e$ lies between 2 and 3.

$$1 + \dfrac{1}{1!} + \dfrac{1}{2!} + \dfrac{1}{3!} + \cdots$$

The natural exponential function, defined as follows, has some interesting properties

$$f: \mathbb{R} \rightarrow \mathbb{R}$$ $$f(x) = e^x$$

It is used in several algorithms and in the definitions of functions like SoftMax. I am interested in knowing the possible mathematical characteristics that make this function useful in artificial intelligence.

The following are the properties I am aware of. But, I am not sure about how some of them will be useful

  1. Non-linearity: Activation functions are intended to provide non-linearity. So, it is a candidate for activation functions due to this property. You can check its graph here.

  2. Differentiability: Loss functions used in the back-propagation algorithm need to be differentiable. So, it can be a candidate for usage in loss functions too.

$$\dfrac{d}{dx} e^x = e^x \text{ for all } x \in \mathbb{R}$$

  3. Continuity: I am not sure how this property is useful in algorithms. Intuitively, you can check from the graph provided above that it is continuous.

  4. Smoothness: I am not sure how this property is useful in algorithms, but it seems useful. The natural exponential function has the smoothness property.

$$\dfrac{d^n}{dx^n} e^x = e^x \text{ for all } x \in \mathbb{R} \text{ and } n \in \mathbb{N}.$$

Are there any other properties, like non-linearity, differentiability, smoothness, etc., of the natural exponential function that make it superior for use in AI algorithms?

",18758,,4709,,3/1/2022 15:59,3/1/2022 15:59,What are the mathematical properties of natural exponential function that lead to its usefulness in artificial intelligence?,,3,2,,,,CC BY-SA 4.0 30201,1,,,8/17/2021 1:22,,0,247,"

I encountered the term multinoulli distribution in the following sentence from Chapter 4: Numerical Computation of the deep learning book.

The softmax function is often used to predict the probabilities associated with a multinoulli distribution.

I am guessing that a multinoulli distribution is any probability distribution that has been defined on a random variable taking multiple values. I know that the SoftMax function is used to convert a vector into another vector of the same length, with probability values that signify the chance of the input falling into each particular class.

Suppose $C$ is a random variable with support $\{c_1, c_2, c_3, \cdots, c_k\}$. Then I am guessing that any probability distribution on $C$ is a multinoulli distribution. SoftMax is an example of such a multinoulli distribution that uses the expression $\dfrac{e^x}{\sum e^x}$ for calculating probabilities.

Is my guess about the multinoulli distribution correct? The reason for my doubt is that I have never come across the word multinoulli and I cannot find it even on the internet. If my guess is wrong, where can I read about the multinoulli distribution?

",18758,,2444,,8/19/2021 9:43,8/19/2021 10:52,Where can I read about the multinoulli distribution?,,2,0,,,,CC BY-SA 4.0 30202,1,,,8/17/2021 2:42,,0,30,"

Preface: I'd like to clarify that I understand what a relay is, and that a PLC uses a fairly conventional microprocessor that only digitally establishes a logical gate configuration, as a digitally programmable alternative to relay banks for analog and/or digital signals (depending on the PLC). My question is based on the understanding that, to date, actual logic gates (as far as I know) are not non-locally programmable ("re-wirable") without a person manually rewiring them; that is, I am asking about truly programmable physical logic gates, not the logical programming of a statically wired microprocessor.

Rectenna work interests me specifically around any potential relevance of varying transmission wavelengths and material resistances (if this is not possible with MoS2, generally as a concept for other potential materials) to making possible remote switch activation of logically chosen switches along an array. Essentially I am curious about if this or other research has potential for constructing truly physically reprogrammable (externally and maybe wirelessly) logic gates.

In general any information on advances towards this capability would be appreciated as right now it seems like the only rudimentary build I could manage for my project is a 64 gate one. That’s not great because anything less than 512 gates would be very hard to make useful for my proof of concept project, and I know there’s no way I could get to a more ideal 262,144 gates.

One example would be any publication which covers if the kind of uses of phase-engineered low-resistance contacts for ultrathin MoS2 transistors covered in the articles below would be able to be produced with varying resistance in a band usable for varying activation via radio waves for switches.

https://doi.org/10.1038/nmat4080

https://www.ece.cmu.edu/news-and-events/story/2019/05/rectennas-converting-radio-waves-into-electricity.html

I’m not picky if someone knows about other technological advances approaching this capability such as biochemical non-locally programmable switch activation equivalent processes. Thanks everyone.

Update 1: My specific question is: Have there been any significant technological advances towards non-locally electrically programmable logic gates?

Update 2: After further review I’ve found that FPGAs are not what I am asking about. Their reprogramming like PLCs is digital not analog. They seem to just be a more generalized similar thing to PLCs rather than being factory equipment. I might incorporate one or more in my project, but they aren’t what I am referring to which is true analog reprogramming. Why does analog matter? Analog means more efficient at the surface level, but it also allows structured logic similar to ladder logic at the hardware level which enables significantly different uses in structuring and restructuring logic execution.

Update 3: This is for an efficiency proof of concept project trying to prove it is possible to structure logic in a certain way to increase efficiency of certain specific processes. This is a project involving programming and/or design at every single level of development (transistors, machine code, assembly, mid level (such as C/C++), and high level (Python/Tensorflow). I will be creating custom NAND gate structures, writing the instructions to execute on them, writing in assembly, writing in a mid level language, and writing in Python and TensorFlow for different parts of this overall project’s functionality.

In conclusion the straightforward version of this question is: What are the current capabilities for or research done towards creating physically rewired logic gates using non-local digital instructions?

",49290,,49290,,8/17/2021 4:30,8/17/2021 4:30,Non-locally Electrically Programmable Logic Gates - Technological Advances Progress,,0,6,,,,CC BY-SA 4.0 30203,2,,10982,8/17/2021 3:27,,1,,"

In short: It depends.

Where will you run it?
  • On Premises: You may want to run it in your own environment.
  • IaaS: GPT models are often too big, so people might prefer to set up a dedicated server for that, serving your API.
  • PaaS: If it's more experimental, I recommend running it on Google Colab.
  • SaaS: Or even use some external API, so you don't need to worry about this setup and just use it as a service (easiest).

Each approach demands a different architecture and a different code.

Once you've setup the environment / API, you'll run it by providing an initial prompt and some parameters:

GPT

GPT-2 (or any GPT model) is a general, open-domain text-generating model, which tries to predict the next word for any given context.

So, setting up a "summarize mode" is not just a matter of flagging a parameter. It's a non-deterministic process and requires trial and error.

The GPT setup is experimental:

  • You use a sandbox.
  • Create an initial prompt.
  • Set some parameters (Temperature, Top-P, Top-K,...)
  • Evaluate your results
  • Adjust the prompt
  • Adjust the parameters
  • Until you consistently achieve the desirable results.

The prompt

1. Simple way

A very simple way to do it is something like:

prompt = text+"\nTL;DR:"

It should work especially well if your text is simple and small.

2. Explicit Introduction

You may get a better result if you initially prompt some context, for example:

"Here is a text and it's respective summary.\n"+
"#Full text:" + text + "\n"+
"#Summary:"
3. Add examples

Another approach is filling your prompt with a few (high-quality) examples before your original prompt:

prompt = sample_1+"\nTL;DR:"+summary_1
prompt += "\n###\n"
prompt += sample_2+"\nTL;DR:"+summary_2
prompt += "\n###\n"
prompt += your_input+"\nTL;DR:"

This way, the next logical thing for the generator to produce is a summary of your input.

The results will greatly depend on your text size and style. So you should find what prompt template best suits your needs.

Execution

Keep in mind that GPT will not learn from previous executions. It has no memory or learning in between executions, which means that each input will require the whole prompt again.

So your main program should run a loop and prompt GPT once for every text file. Something like:

from glob import glob

for path in glob("./*.txt"):
    with open(path, "r") as f:
        text = f.read()
    prompt = text + "\nTL;DR:\n"
    result = GPT(prompt, parameters)  # GPT() is a placeholder for your model / API call

Other tips to keep in mind:

If you have no examples:

  1. Results may be different from what you desire (too big, too short, too informal, or omitting something you consider important).
  2. Results may be less stable (nailing it sometimes and ruining it other times).

If you have some examples, it should replicate your example style. But:

  1. It may wrongly make references to the examples (instead of the desired text).
  2. It may get confused if your text does not match the example(s) in length / style.
  3. The prompt may get too big.

If the original text is too big:

  1. It will require more computational power.
  2. It might exceed the limit of tokens the model can digest (so you might need to split the text into chapters).
  3. The model might get confused, e.g. summarizing only the last paragraph (so you need to make it clear where the text starts and ends).
",49188,,,,,8/17/2021 3:27,,,,1,,,,CC BY-SA 4.0 30208,2,,23711,8/17/2021 4:13,,0,,"

I just finished my masters thesis project for multivariate time series prediction.

  1. The standard approach is to normalize the data using min-max or z-score scaling (there is at least one research paper that found it didn't matter much which normalization technique was used).

  2. You will most likely need to transform the dataset into a supervised problem (X, y).

  3. You will need to reshape the data into the format used by most recurrent neural networks: (samples, timesteps, features). A minimal sketch of all three steps follows the links below.

How to Convert a Time Series to a Supervised Learning Problem in Python

Multivariate Time Series Forecasting with LSTMs in Keras
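If it helps, here is a minimal NumPy sketch of those three steps (the dummy series, window length and choice of target are illustrative assumptions, not taken from the thesis):

    import numpy as np

    series = np.random.rand(1000, 3)                   # 1000 timesteps, 3 features (dummy data)

    # 1. Min-max normalization per feature
    mins, maxs = series.min(axis=0), series.max(axis=0)
    series = (series - mins) / (maxs - mins)

    # 2. Supervised framing: use the last `window` steps to predict feature 0 at the next step
    window = 10
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:, 0]

    # 3. X is already in the (samples, timesteps, features) shape most RNN layers expect
    print(X.shape, y.shape)                            # (990, 10, 3) (990,)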

",49143,,,,,8/17/2021 4:13,,,,0,,,,CC BY-SA 4.0 30210,2,,30189,8/17/2021 4:31,,1,,"

As you know, it would be hard to tell exactly what is going on without knowing more about the dataset.

However, a couple things come to mind:

  1. Did you correctly normalize by fitting the scaler only on the training dataset and then applying the same transform (using the mean and variance from the training set) to the test set? See the sketch after this list.

  2. Is your dataset imbalanced? I have found the Python DataPrep tool useful for exploratory data analysis.
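Regarding point 1, a minimal scikit-learn sketch of fitting the scaler on the training data only (array shapes are placeholders):

    import numpy as np
    from sklearn.preprocessing import StandardScaler

    X_train = np.random.randn(800, 5)
    X_test = np.random.randn(200, 5)

    scaler = StandardScaler().fit(X_train)   # statistics come from the training set only
    X_train = scaler.transform(X_train)
    X_test = scaler.transform(X_test)        # the same mean/variance are reused on the test set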

",49143,,,,,8/17/2021 4:31,,,,1,,,,CC BY-SA 4.0 30211,2,,30200,8/17/2021 4:47,,-1,,"

The key properties of the softmax function that are useful in machine learning are:

  1. It takes as input a vector of real numbers.
  2. After applying softmax, each component will be in the interval [0 , 1] and the components will add up to 1. Thus, a probability distribution.

In short, it takes a vector of real numbers (usually normalized to [0,1]) and transforms them into probabilities, which is useful 1) for classification problems and 2) for defining the cost/loss function that is minimized when training neural networks.
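A minimal NumPy sketch of those two properties (the input vector is arbitrary):

    import numpy as np

    def softmax(z):
        z = z - z.max()          # subtract the max for numerical stability
        e = np.exp(z)
        return e / e.sum()

    p = softmax(np.array([2.0, 1.0, -1.0, 3.0]))
    print(p, p.sum())            # every component lies in [0, 1] and they sum to 1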

",49143,,49143,,8/17/2021 4:52,8/17/2021 4:52,,,,0,,,,CC BY-SA 4.0 30213,2,,30061,8/17/2021 4:55,,1,,"

This really depends on how you define "iteration". Here it's not so simple, given that the number of training steps per epoch varies based on the number of correct predictions, which would obviously change as you continue to train the model.

Generally I have found iterations refers to the number of times you run a batch through a network. This is the more reasonable definition, as when you are doing real-world machine learning, using batches of training examples allows you to utilize as much memory as possible, thus making the training much faster.

Using this definition with your example you would always have the same number of iterations as the batch size would be consistent, it's just inside the massive matrix that is your batch, you would have update values of 0 where the network correctly predicted.

It ultimately comes down to what's the best way to convey information to a reader? I think describing the number of iterations as variable based on network outputs is confusing and non-descriptive. In this case, if you don't want to use any batches, it is best to say you have 120 iterations per epoch. Saying you have up to 120 iterations is confusing. Instead, just specify that in some iterations the network may not be updated.

",26726,,,,,8/17/2021 4:55,,,,0,,,,CC BY-SA 4.0 30214,2,,30201,8/17/2021 5:11,,0,,"

You can find a definition on Wikipedia Categorical distribution.

In short, it is a generalization of the Bernoulli distribution to multiple variables (multivariate Bernoulli distribution).

",49143,,2444,,8/19/2021 9:41,8/19/2021 9:41,,,,1,,,,CC BY-SA 4.0 30215,2,,30199,8/17/2021 5:26,,0,,"

According to the IEEE standard, a processor must set exception flags if any of the following conditions arises when performing operations: underflow, overflow, divide by zero, inexact, or invalid. In addition, the IEEE single-precision and double-precision formats define a special binary representation, the value NaN, for the results of invalid operations.

",49143,,1671,,8/30/2021 3:42,8/30/2021 3:42,,,,0,,,,CC BY-SA 4.0 30216,2,,30196,8/17/2021 5:43,,0,,"

According to Webster, they are considered synonyms. However, from an academic viewpoint there is a distinction:

A method is a systematic procedure, technique, or mode of inquiry employed by or proper to a particular discipline (e.g. scientific method)

A technique is the manner in which technical details are treated or a way of accomplishing a desired aim that may not be considered a scientific method (e.g. heuristic technique)

",49143,,,,,8/17/2021 5:43,,,,2,,,,CC BY-SA 4.0 30218,1,,,8/17/2021 8:02,,0,39,"

I'm using nvidia Transfer Learning Toolkit to detect cars in some video frames.

I found some datasets (for example https://www.jpjodoin.com/urbantracker/dataset.html and https://www.kaggle.com/aalborguniversity/aau-rainsnow) and I noticed that parked cars are usually not labeled, and are instead covered by a mask.

Why shouldn't I also add their labels? It would be easy to label them because they are static objects, so I could copy-paste the same labels across all frames. So why are they not labelled in video datasets?

",46384,,49188,,8/18/2021 14:31,8/18/2021 14:31,Should I label static objects on video dataset?,,1,0,,,,CC BY-SA 4.0 30219,1,30229,,8/17/2021 8:05,,0,36,"

Consider the following paragraph from NUMERICAL COMPUTATION of the deep learning book..

Suppose we have a function $y = f(x)$, where both $x$ and $y$ are real numbers. The derivative of this function is denoted as $f'(x)$ or as $\dfrac{dy}{dx}$. The derivative $f'(x)$ gives the slope of $f(x)$ at the point $x$. In other words, it specifies how to scale a small change in the input to obtain the corresponding change in the output: $f(x+\epsilon) \approx f(x) + \epsilon f'(x)$.

I have a doubt about the equation $f(x+\epsilon) \approx f(x) + \epsilon f'(x)$ given in the paragraph.

In strict sense, the derivative function $f'$ of a real valued function $f$ is defined as

$$f'(x) = \lim_{\epsilon \rightarrow 0} \dfrac{f(x+\epsilon)-f(x)}{\epsilon}$$

wherever the limit exists.

If I replace the original definition of the derivative as follows

$$f'(x) \approx \dfrac{f(x+\epsilon)-f(x)}{\epsilon}$$

then I can obtain the equation given in the paragraph i.e, $f(x+\epsilon) \approx f(x) + \epsilon f'(x)$.

But my doubt is: how can I modify the definition with $\lim\limits_{\epsilon \rightarrow 0}$ into an approximation without the limit? How can the following two be the same?

$$f'(x) = \lim_{\epsilon \rightarrow 0} \dfrac{f(x+\epsilon)-f(x)}{\epsilon} \text { and } f'(x) \approx \dfrac{f(x+\epsilon)-f(x)}{\epsilon}$$

",18758,,,,,8/17/2021 19:15,Reason for relaxing limit in derivative in this context?,,2,0,,,,CC BY-SA 4.0 30220,2,,10447,8/17/2021 8:34,,1,,"

DeepInsight method has been used for converting tabular data into corresponding images which are then processed by CNN. Here is the link https://alok-ai-lab.github.io/DeepInsight/

",49295,,,,,8/17/2021 8:34,,,,0,,,,CC BY-SA 4.0 30221,2,,30219,8/17/2021 9:24,,1,,"

It is just an approximation. If we assume $\epsilon$ is small enough (depending on the function $f$), you can drop the limit $\lim_{\epsilon \to 0}$ and treat the difference quotient as an approximation of the derivative.
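As a quick numerical illustration (a sketch with an arbitrary test function, not from the book), the difference quotient already matches the true derivative closely for a small but finite $\epsilon$:

    import numpy as np

    f, f_prime = np.sin, np.cos      # test function and its exact derivative
    x, eps = 1.0, 1e-5

    approx = (f(x + eps) - f(x)) / eps
    print(approx, f_prime(x))        # both are about 0.5403, so the approximation is close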

",4446,,,,,8/17/2021 9:24,,,,0,,,,CC BY-SA 4.0 30222,2,,30199,8/17/2021 9:29,,1,,"

Yes, they can be related to underflow. Mathematically, we do not expect to face division by zero in an expression such as $\frac{1}{\varepsilon}$ when $\varepsilon > 0$. However, in numerical software, it can happen if $\varepsilon < lf$, because $\varepsilon$ then underflows to zero.

Moreover, you can think about the same scenario for taking the logarithm of zero.
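A small NumPy sketch of this scenario (float32 is chosen just to make the underflow easy to trigger):

    import numpy as np

    eps = np.float32(1e-30)
    tiny = eps * eps                   # 1e-60 is below the smallest positive float32, so it underflows to 0.0
    print(tiny)                        # 0.0
    print(np.float32(1.0) / tiny)      # inf (division by zero)
    print(np.log(tiny))                # -inf (logarithm of zero)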

",4446,,,,,8/17/2021 9:29,,,,0,,,,CC BY-SA 4.0 30224,2,,26601,8/17/2021 12:35,,2,,"

AI researcher Rob Miles uses the terms 'terminal goal' and 'instrumental goal' for the first and second types respectively. I'm not sure if these are standard parlance in the field, however Rob explains them in this video:

https://youtu.be/hEUO6pjwFOo?t=363

",49299,,49188,,8/17/2021 23:41,8/17/2021 23:41,,,,1,,,,CC BY-SA 4.0 30225,2,,8190,8/17/2021 14:24,,2,,"

According to this meta paper, "vanilla" RNN of today are based on Elman's work on networks with dynamic memory: Finding structure in time

",49301,,,,,8/17/2021 14:24,,,,0,,,,CC BY-SA 4.0 30226,1,,,8/17/2021 15:48,,0,172,"

I am training some Object-Detection-Models from the TensorFlow Object Detection API and got from the evaluation with MS COCO metrics the following results for Average Precision:

IoU = 0.5;0.9 maxDets = 100 area = small AP = -1.000

The other values all make sense to me. But I don't know what the -1.000 stands for. Does it mean that there are no small objects in my dataset to be detected?

",49303,,,,,1/9/2023 23:05,What does a value of -1.000 mean in MS COCO Metrics for Object Detection,,1,0,,,,CC BY-SA 4.0 30227,1,,,8/17/2021 17:06,,0,85,"

I have a machine learning task where I would like to weight losses based on the frequency of the categorical values appearing in the data. The binary solution can be seen below, but I'd like to know what to do about the case of n>2 categories.

w_0 = (n_0 + n_1) / (2.0 * n_0)
w_1 = (n_0 + n_1) / (2.0 * n_1)

The frequencies for the samples n0-n5 are:

n_0:     1552829
n_1:     14479
n_2:     13445
n_3:     13781
n_4:     18795
n_5:     64187
",49304,,40434,,6/27/2022 16:11,6/27/2022 16:11,How do I select the class weights for the loss function in the case of more than 2 classes?,,1,1,,,,CC BY-SA 4.0 30228,1,,,8/17/2021 17:41,,0,64,"

I'm wondering what are the different choices of parametrizations available for the decoder in a variational autoencoder. If the data is discrete, you can just output probabilities for each class, so that's straightforward. If the data is a) continuous and not bounded, then the only parametrization I know of is the (multivariate) normal distribution. If the data is b) continuous and bounded, again the only parametrization I know of is a (multivariate) normal distribution, mapped to [0,1] using a sigmoid function.

Are there other formulations that are commonly used? If there are, what are the advantages and any disadvantages compared to using a normal distribution?

To ask the question in a different way: the normal distribution seems like the "go to" choice in problems with continuous output since it's simple and allows gradients through the reparametrization trick. Are there other reparametrizable continuous distributions which are also commonly used in machine learning? I know the exponential distribution (and related distributions such as weibull, gamma) are reparametrizable, but would you ever use those instead of a normal distribution to model continuous data?

",47080,,,,,8/17/2021 17:41,Types of decoder parametrizations in VAE for continuous data,,0,2,,,,CC BY-SA 4.0 30229,2,,30219,8/17/2021 19:15,,2,,"

The equation $$f(x + \epsilon) \approx f(x) + \epsilon f'(x)$$ is justified by the Taylor series; it is not derived from the limit definition of a derivative. Let $f'(a)$ and $f''(a)$ exist for all $a$ in the interval $(x, x+\epsilon)$. Then, \begin{align*} f(x + \epsilon) = f(x) + \epsilon f'(x) + \frac{\epsilon^2}{2}f''(b) \end{align*} where $b$ is some number in the interval $(x, x + \epsilon)$. If $\epsilon \ll 1$, then $\frac{\epsilon^2}{2}f''(b)$ is small, so it can be ignored, leading to the approximation.

Taylor's theorem

",47080,,,,,8/17/2021 19:15,,,,0,,,,CC BY-SA 4.0 30230,2,,30226,8/17/2021 19:42,,0,,"

APsmall stands for Average Precision for small objects.


This website contains all the descriptions about the metrics: https://cocodataset.org/#detection-eval

If you want to know more about the metrics evaluation (how to calculate this metric), you can find the metric's source code here: https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py

",49188,,,,,8/17/2021 19:42,,,,0,,,,CC BY-SA 4.0 30231,2,,30227,8/17/2021 19:56,,1,,"

Is that what you want?

w_0 = (n_0 + n_1 + ... + n_5) / (6.0 * n_0)

If so, it can be achieved by:

    n = [n_0, n_1, n_2, n_3, n_4, n_5]   # class frequencies
    w = []
    for i in range(len(n)):
        w.append(sum(n) / (len(n) * n[i]))

Notice that sum(n)/len(n) = average(n), so you'd basically be weighting class i by avg(n)/n_i in your loss function.
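Applied to the frequencies listed in the question, the sketch above gives roughly the following weights (the same formula scikit-learn uses for class_weight='balanced'):

    n = [1552829, 14479, 13445, 13781, 18795, 64187]        # frequencies from the question
    w = [sum(n) / (len(n) * n_i) for n_i in n]
    print([round(x, 2) for x in w])                          # roughly [0.18, 19.31, 20.79, 20.29, 14.88, 4.36]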

",49188,,,,,8/17/2021 19:56,,,,0,,,,CC BY-SA 4.0 30232,2,,30184,8/17/2021 20:16,,1,,"

First you'd need to mathematically model your real environment. Probably use some differential equations.

Once you have a good model, you still won't have your real case parameters. So I can see 2 different approaches:

  1. Theoretically + Experimentally: Empirically measure real data to try to find those parameters. (Make a simple PID controller)
  2. Make a robust and general policy that adapts to any parameter.

The first alternative is very straightforward, you basically just have 3 parameters to adjust and there are clear methods to adjust it depending on your data behavior.

Since you are asking on AI Stack Exchange, I assume you're choosing the 2nd way. I consider it way harder to implement, but also more fun. You'll need to create a highly parametrized environment, build a reinforcement learning model, and train it on many different combinations of parameters (just make sure your real parameters are somewhere inside that range).

If you do it all right, you should have a robust general policy that can behave well in any general condition.

",49188,,,,,8/17/2021 20:16,,,,0,,,,CC BY-SA 4.0 30233,2,,30218,8/17/2021 22:20,,1,,"
  1. Model architecture:

In machine learning, static image detectors can be very different from video detectors, as movement plays a big role in the task. So, even when the objects look similar across individual frames, a model that digests the video can learn very different things. Maybe adding parked cars to the dataset increased false positives, mistakenly labeling other static objects as cars.

  2. Business goal:

Why are you labeling cars from security cameras in the first place? What is your goal here? If you want to know the car density in a parking lot, then labeling parked cars is very useful. But if you just need to know the traffic flow, then parked cars are just a distraction, noise in the data. So maybe whoever built the dataset had a different goal in mind.

",49188,,,,,8/17/2021 22:20,,,,1,,,,CC BY-SA 4.0 30235,2,,30197,8/18/2021 11:33,,1,,"

Honourable mention: Memory-based approaches

Although not analytic, memory-based models, such as k-nearest neighbours (k-NN) are very lightweight when learning, but have a higher cost to use the stored knowledge.

Even though a k-NN model is slow to make inferences, the computation involved is not complex or iterative. It makes a single pass through all the data, keeping only the k closest examples to the example that it is predicting the output for, and then performs a simple aggregate function (e.g. a weighted mean) on that list of k closest matches.
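A minimal scikit-learn sketch of this trade-off (the data is random and only for illustration): fitting is essentially just storing the data, while prediction does the real work:

    import numpy as np
    from sklearn.neighbors import KNeighborsRegressor

    X_train, y_train = np.random.rand(1000, 4), np.random.rand(1000)

    model = KNeighborsRegressor(n_neighbors=5, weights='distance')
    model.fit(X_train, y_train)      # cheap: the examples are essentially just stored

    x_new = np.random.rand(1, 4)
    print(model.predict(x_new))      # costlier: finds the 5 closest stored examples and takes a weighted mean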

Knowledgebases and Inference Engines

A logic-based system can be considered a learning system if it is able to accept new statements. This might be at runtime, or you could consider the training process to be the addition of new logical rules into a long-term storage. Either way, the system learns by adding new rules, not by observing anything directly, or by consuming input/output pairs. Adaptors could in theory be written using other AI approaches to feed the knowledgebase though.

This is a classic use for the LISP programming language. For example, you could build an inference engine and teach it facts as LISP statements. Every time you added a fact, the engine would be able to infer more about the domain it was working with. In some ways this resembles the k-NN approach, in that all facts are stored, and the inference stage is more computationally expensive.

The main issue with learning systems based on predicate logic is that they are brittle and cannot deal with uncertainty easily. The approaches used to patch that include coding how uncertainty works as a set of facts (see CYC), or starting with some form of fuzzy logic as a core part of the system.

You would not use it to process audio or image inputs, at least not directly. However, there are some advantages for things like explainable AI - an inference engine can always explain how it came up with an answer, step by step.

",1847,,,,,8/18/2021 11:33,,,,5,,,,CC BY-SA 4.0 30237,2,,30175,8/18/2021 12:39,,0,,"

What you need to search for is a Fully Convolutional Network, i.e. a network that uses global pooling to overcome the issue of a fixed input size. Unfortunately, the model you found is not fully convolutional, and every workaround to make those pre-trained weights usable implies retraining. At this point it is more convenient to find something else, or to train something yourself. You can take a look at this repo, which also contains a fully convolutional network for segmentation among other models (they don't seem to link to any pre-trained weights, though).
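To illustrate the idea (a sketch of the classification variant, with arbitrary layer sizes, assuming TensorFlow/Keras is available): once the spatial dimensions are collapsed by global pooling, the network accepts inputs of any height and width:

    from tensorflow.keras import layers, models

    model = models.Sequential([
        layers.Conv2D(16, 3, activation='relu', input_shape=(None, None, 3)),  # any H x W
        layers.Conv2D(32, 3, activation='relu'),
        layers.GlobalAveragePooling2D(),   # collapses H x W into a fixed-length vector
        layers.Dense(10, activation='softmax'),
    ])
    model.summary()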

",34098,,,,,8/18/2021 12:39,,,,0,,,,CC BY-SA 4.0 30238,1,,,8/18/2021 13:06,,2,60,"

I want to build an AI that can convert an image of a subject into an anatomically accurate 3D model. To do this, I was thinking of adapting the following code for Deep Deterministic Policy Gradient: https://keras.io/examples/rl/ddpg_pendulum/

My reasons considering RL:

  1. I don't have the needed skill (3D modelling) to procure a large dataset for the project. I was hoping RL may help me overcome that by adapting to a smaller dataset through state-reward learning. I'm looking mainly for human and animal anatomical models. Those are hard to find in large numbers.

  2. My second concern is the required density (polygon count) of these models can be rather high. I'm not sure if it is computationally feasible for a NN to output high density models. However, I'm thinking an RL agent can step through and write each vertex one at a time in a 3D space. As compared to a single output layer in a feed-forward network.

However, that means it will have to handle a rather large state-space (an array of length 50,000 or higher).

With all of that said, RL has mainly been used in video games and simple control problems from the OpenAI gym. Is it a waste of time to use RL for this level of complexity?

",49313,,2444,,9/17/2021 15:37,9/17/2021 15:37,Is Reinforcement Learning capable of learning complex functions (such as producing a 3d model given an image)?,,1,0,,,,CC BY-SA 4.0 30239,1,30339,,8/18/2021 13:22,,0,141,"

I am trying to figure out the difference between the architecture used in this and this paper. It looks like both used multi-headed self-attention and therefore should be the same in principle.

",31755,,2444,,11/30/2021 15:09,11/30/2021 15:09,What is the difference between a vision transformer and image-based relational learning?,,1,1,,,,CC BY-SA 4.0 30240,1,,,8/18/2021 13:38,,0,55,"

How does n-step TD removes the notion of time-step as referenced in Sutton and Barto (2nd edition, Page 163) below?

Another way of looking at the benefits of n-step methods is that they free you from the tyranny of the time step. With one-step TD methods the same time step determines how often the action can be changed and the time interval over which bootstrapping is done. With one-step TD methods, these time intervals are the same, and so a compromise must be made. n-step methods enable bootstrapping to occur over multiple steps, freeing us from the tyranny of the single time step.

Consider the following example.

We know our n-step update equation as: $$V_{t+n}(S_t) = V_{t+n-1}(S_t) + \alpha [G_{t:t+n} - V_{t+n-1}(S_t)]$$

Now, let $t=0$ and $n=2$. This gives us: $V_2(S_0) = V_1(S_0) + \alpha [G_{0:2} - V_1(S_0)]$.

Before our n-step TD prediction algorithm starts, we initialize with $V_0$. But we use $V_1$. Why? And how do we calculate $V_1$?

",46214,,46214,,8/21/2021 17:17,8/21/2021 17:17,How does n-step Temporal Difference remove the notion of time-step?,,0,2,,,,CC BY-SA 4.0 30241,2,,30238,8/18/2021 13:39,,1,,"

Yes, RL is capable of learning complex functions, as it is a very general learning approach. However, if you have a direct goal of learning a complex function from example data, it will not really add anything to that process. Supervised learning will be more efficient.

RL doesn't directly address or have special mechanisms to deal with the two main issues you have:

  • Lack of training data

  • Multi-variable optimisation with a very large number of variables

Adding states, timesteps and trial-and-error learning to this problem does not make it more tractable. If the problem naturally presented itself as a sequence of simpler choices, then RL might help somewhat, but as far as I can see you have a very high dimensional function to learn, and there is no benefit from adding a layer of trial-and-error learning on top of it.

What you probably want is to find ways of constraining the problem, using domain knowledge and/or transfer learning from similar systems. For example, if you already know the types of creature in the photos being turned into meshes, you could probably start by categorising them in order to use some pre-defined meshes that you then adjust, as opposed to starting with figuring out the general plan of the mesh.

There are already systems that can turn pictures of human subjects into 3D models with approximately correct shape and pose. I would suggest you study those to understand how they work and whether the ideas in them can be re-used for your problem. Here is one called PIFuHD.

",1847,,,,,8/18/2021 13:39,,,,0,,,,CC BY-SA 4.0 30242,1,,,8/18/2021 14:22,,1,58,"

Although I don't know the details, I am aware of the following facts regarding the use of gradients in some domains of artificial intelligence, especially in minimizing loss functions when training neural networks.

  1. First order gradient: It quantifies the rate of change of a function with respect to its inputs. It is useful in artificial intelligence, especially in gradient-based algorithms, to know about the direction in which the parameters need to be updated.

  2. Second-order gradient: It somehow quantifies the curvature of the function. It is used in artificial intelligence, to know whether the function has convex or concave portions.

In this context, I want to learn whether there is any significance for higher-order gradients in artificial intelligence? Note that higher-order refers to the order $\ge 3$.

",18758,,18758,,5/23/2022 10:21,11/29/2022 16:36,Is there any significance for higher order gradients in artificial intelligence?,,1,0,,,,CC BY-SA 4.0 30243,1,,,8/18/2021 14:45,,2,158,"

The only algorithm I know for updation of weights of a neural network is based on gradients. The update equation can be roughly written as

$$w \leftarrow w - \nabla_{w}L$$

where $\nabla_{w}L$ is the gradient of loss function with respect to weights.

Are there any learning algorithms for updating weights in neural networks that do not use gradients?

",18758,,,,,8/18/2021 15:35,Is there any way to train a neural network without using gradients?,,1,0,,,,CC BY-SA 4.0 30244,2,,30243,8/18/2021 15:35,,4,,"

Yes.

A prominent class of "gradient-free" algorithms in the ML world is known as Evolution Strategies (ES). Although evolutionary algorithms have existed for a long time, only a few have been shown to scale well.

Recently, the research group OpenAI managed to train Deep RL models with a specific variant of ES (with careful engineering). You can read this paper. This blog by David Ha provides a starting point if you want to learn about ES and its modern derivatives.
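To make the idea concrete, here is a toy sketch of the basic ES estimator (not the exact OpenAI variant; the objective and hyperparameters are arbitrary): weights are updated from random perturbations and their scores, with no gradients of the model itself:

    import numpy as np

    def fitness(w):
        # toy objective: higher when w is close to the target vector
        return -np.sum((w - np.arange(5.0)) ** 2)

    w = np.zeros(5)                          # the "network weights" being optimized
    sigma, alpha, pop = 0.1, 0.01, 50

    for step in range(300):
        noise = np.random.randn(pop, len(w))
        rewards = np.array([fitness(w + sigma * n) for n in noise])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        # move toward perturbations that scored well; no backprop anywhere
        w = w + alpha / (pop * sigma) * noise.T @ rewards

    print(np.round(w, 1))                    # should end up near [0. 1. 2. 3. 4.]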

",11030,,,,,8/18/2021 15:35,,,,0,,,,CC BY-SA 4.0 30245,2,,25315,8/18/2021 15:48,,0,,"

as far as I have found out it stands for a different type of task.

I have found it here. https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/text_classification.ipynb#scrollTo=kTCFado4IrIc

It states that mnli-mm stands for the mismatched version of MNLI That means that the mnli-m would mean matched version of MNLI

On the following link is more about the MNLI: https://cims.nyu.edu/~sbowman/multinli/

Hope this helps you.

Cheers.

",49316,,,,,8/18/2021 15:48,,,,0,,,,CC BY-SA 4.0 30246,1,,,8/18/2021 17:53,,0,63,"

I'm working on a fun project where I have a dataset of input and output data, both having a fixed size of characters. I would like to predict a part of the input based on a known output as follows:

$$Input = A+B$$ $$Output = X+Y$$

A, B, X, Y are strings that will be concatenated and they have a fixed size

Knowing A, X and Y; I want to predict B (even if it takes a lot of tries). The output is split in 2 because I don't really care what value Y has (so maybe I can delete it, depending how is more easy).

Is it possible? I'm new to ML and AI, and first I want to know if it is possible before starting to work on the project (I am time-limited). And if it is possible, could you tell me what exactly to study/learn or how I can do it?

",49317,,,,,8/18/2021 17:53,Predict a part of the input based of the output,,0,2,,,,CC BY-SA 4.0 30248,2,,23098,8/18/2021 19:23,,1,,"

PEAS stands for Performance, Environment, Actuators and Sensors. When you are asked to give the PEAS of an AI agent, you should describe it as follows. Example: PEAS for a refinery controller:

  • Performance measure: maximize purity, yield, safety
  • Environment: refinery, operators
  • Actuators: valves, pumps, heaters, displays
  • Sensors: temperature, pressure, chemical sensors

Please refer to the following link, where I found this information:

http://www.cs.bilkent.edu.tr/~duygulu/Courses/CS461/Notes/Agents.pdf

",49318,,,,,8/18/2021 19:23,,,,0,,,,CC BY-SA 4.0 30250,1,30274,,8/18/2021 22:37,,1,36,"

Most of the neural network models in contemporary deep learning packages are trained based on gradients.

Let $f: \mathbb{R}^m \rightarrow \mathbb{R}^n$ be a function for which we want to find a gradient, then the gradient is generally represented by a Jacobian matrix that looks like below

$$J = \begin{bmatrix} \dfrac{\partial y_1}{\partial x_1} & \dfrac{\partial y_1}{\partial x_2} & \dfrac{\partial y_1}{\partial x_3} &\dots & \dfrac{\partial y_1}{\partial x_m} \\ \dfrac{\partial y_2}{\partial x_1} & \dfrac{\partial y_2}{\partial x_2} & \dfrac{\partial y_2}{\partial x_3} &\dots & \dfrac{\partial y_2}{\partial x_m} \\ \cdots & \cdots & \cdots & \cdots & \cdots \\ \dfrac{\partial y_n}{\partial x_1} & \dfrac{\partial y_n}{\partial x_2} & \dfrac{\partial y_n}{\partial x_3} &\dots & \dfrac{\partial y_n}{\partial x_m} \\ \end{bmatrix} $$

For example: If $f(x_1, x_2) = \begin{bmatrix} x_1 + x_2 \\ x_1x_2 \end{bmatrix}$ then $J = \begin{bmatrix} 1 & 1 \\ x_2 & x_1 \end{bmatrix}$

After calculating the Jacobian matrix, we can substitute the co-ordinate values of a particular point so that we can obtain a real matrix which is a gradient at a particular point.

$$ J_{(4, 5)} = \begin{bmatrix} 1 & 1 \\ 5 & 4 \end{bmatrix} $$

In order to perform the gradient of a function at a point, the algorithm I know is as follows:

  1. Write each output of the function in the analytical form in terms of input;
  2. Apply partial derivative on each output w.r.t each input;
  3. Substitute the values of the input point at which we want to find the gradient.

Thus, finally, we will get the gradient.

Do the popular packages like PyTorch, Tensorflow, Keras, etc., use this or a variant of this algorithm to find the gradients at a particular point?

If yes, will those packages be able to write the analytical forms of all the output variables in terms of input variables?

If not, what is the high-level algorithm for calculating gradients? Is it based on geometrical slope version of gradient?

",18758,,18758,,11/7/2021 7:36,11/7/2021 7:36,What is the high-level algorithm followed by contemporary packages for the calculation of gradient?,,1,0,,,,CC BY-SA 4.0 30251,1,,,8/18/2021 22:39,,0,22,"

In the caption of figure 7.4 (p. 147) of Sutton & Barto's book (2nd edition), it's written

The one-step method strengthens only the last action of the sequence of actions that led to the high reward, whereas the n-step method strengthens the last n actions of the sequence, so that much more is learned from the one episode.

Why does one-step TD strengthen only the last action of the sequence of actions that led to the high reward, while n-step TD the last n actions?

Here's a screenshot of the figure.

",46214,,2444,,8/19/2021 12:59,8/19/2021 12:59,"Why does one-step TD strengthen only the last action of the sequence of actions that led to the high reward, while n-step TD the last n actions?",,0,2,,,,CC BY-SA 4.0 30252,1,,,8/18/2021 22:41,,0,104,"

In the algorithm below, when $\tau + n \geq T$, shouldn't the algorithm bootstrap with the value of the next state? For instance, when $T=5, \tau=3, \& \; n=2$, we don't bootstrap the sample return with $V_{(\tau+n)}$, i.e., $V_5$ or the terminal state.

Also, on line 4, what do we mean by "can take their index mod $n + 1$"?

",46214,,46214,,8/19/2021 16:05,8/26/2021 18:37,Why don't we bootstrap terminal state in n-step temporal difference prediction update equation?,,1,0,,,,CC BY-SA 4.0 30259,1,,,8/19/2021 1:34,,1,62,"

Gradients are used in optimization algorithms.

I know that a gradient gives us information about the direction in which one needs to update the weights of a neural network. We need to travel in the opposite direction of the gradient to get optimal values.

Thus the gradient provides direction to update parameters.

Is it the only information provided by the gradient? Or does it provide any other information that helps in the training process?

",18758,,18758,,2/8/2022 5:43,12/28/2022 16:04,What all does the gradient tells us other than the direction to move parameters?,,1,4,,,,CC BY-SA 4.0 30261,1,30272,,8/19/2021 8:02,,0,427,"

I want to create a model to predict the number of visitors.

Currently, I have a year's csv data for predicting the number of visitors, which is collected every 10 seconds.

I would like to predict the number of future visitors on a daily basis based on this data for the past year.

What kind of method or model can I use to achieve this? I can use a graphics card (GPU) for training.

If you have any pages with sample code, that would be very helpful.

",49324,,2193,,8/19/2021 10:52,8/20/2021 6:05,How to create a model for predicting the number of visitors,,2,2,,,,CC BY-SA 4.0 30263,2,,30201,8/19/2021 10:40,,1,,"

You can find a description of this distribution (which is also known as categorical distribution, which you probably already heard of) in section 2.3.2 (p. 35) of the book Machine Learning: A Probabilistic Perspective (by K. Murphy). You can also find there and in the previous section a description of the related Bernoulli, binomial and multinomial (the most general of the four) distributions. The word multinoulli is used in order to remind you that this distribution is a generalization of the Bernoulli.

In any case, this is how you may remember these four probability distributions.

  • Bernoulli: you throw a coin only once ($n=1$), and a coin has $k = 2$ outcomes (heads or tails)
  • Binomial: you throw a coin $n$ times, where $n$ can be greater than $1$, and a coin has $k = 2$ outcomes
  • Categorical: you throw a dice, with $k$ sides (e.g. a side may be $1$ or $5$), where $k$ can be greater than $2$, only once ($n=1$)
  • Multinomial: you throw a dice, with $k$ sides, $n$ times

So, these are all discrete probability distributions, with an associated probability mass function (p.m.f). In all of them, we have two parameters $n$ (the number of trials of the experiment) and $k$ (the number of outcomes).

I don't think it is correct to say that a softmax is a probability distribution. The softmax is a function used to compute the probabilities that you associate with a categorical distribution, i.e. you use the softmax to produce a probability vector (although some people will say that the softmax produces a probability distribution), as the excerpt that you quote states. In principle, I think you could use other functions to do that (an alternative to the regular softmax, sigsoftmax, is proposed in this paper, section 3.3, p. 6). So, the softmax is used to model a categorical distribution, but I wouldn't say it's a categorical distribution. You can find an explanation of why the softmax is used instead of e.g. just normalizing by the sum here (and a long list of probability distributions here).

",2444,,2444,,8/19/2021 10:52,8/19/2021 10:52,,,,0,,,,CC BY-SA 4.0 30264,1,,,8/19/2021 12:15,,0,105,"

I'm trying to solve a linear programming problem using reinforcement learning. The linear programming problem is:

\begin{array}{ll} \text{maximize}_x & C \cdot x \\ \text{subject to} & A \cdot x \le b\\ & x_i \in [0,1], \ \text{where} \ i=1,2,3,\ldots \end{array}

For instance: \begin{array}{ll} C &= [1 \; 2 \; 3 \; 4]\\ x &= [x_1; x_2; x_3; x_4]\\ A &= [2 \; 3 \; 4 \; 5]\\ b &= 10 \end{array}

I've tried to train with the DDPG algorithm in MATLAB, but the result is not good. Any suggestions for this problem? And is it possible at all? Thanks.

",49327,,49327,,8/20/2021 0:14,8/20/2021 3:08,Is it possible to solve a linear programming problem using reinforcement learning? (DDPG algorithm),,1,2,,,,CC BY-SA 4.0 30265,1,,,8/19/2021 13:06,,4,95,"

I'm using a small neural network (2 hidden layers, 60 neurons apiece) for a rather complex binary classification problem.

The network works well, but I'd like to know how it is using the inputs to perform the classification. Ultimately, I would like to interpret the trained network in order to learn more about the processes responsible for generating the data.

Ideally, I would end up with an equation that would allow me to perform the classification without the network and that would have parameters that I could interpret in the context of the system the network is being used on.

My first thought is to procedurally mask out a growing subset of the ~4000 parameters until there's an appropriate trade-off between performance and simplicity and then maybe use a symbolic logic library to try and simplify further.

I don't think that's the best plan, so I wonder if there's an existing workflow to interpret a neural network.

",48856,,2444,,8/19/2021 23:03,8/20/2021 14:01,How can I interpret the way the neural network is producing an output for a given input?,,1,6,0,,,CC BY-SA 4.0 30266,2,,30197,8/19/2021 13:26,,2,,"

In some cases, you can solve a linear regression problem with an analytical (or closed-form) solution/expression (although this may not always be the best approach). See this answer for more details.

Note that this solution involves matrix multiplications and the computation of an inverse with floating-point numbers, so this is still a numerical algorithm/problem. We could also consider this solution an iterative algorithm if, under the hood, you compute the inverse of the matrix or perform the matrix multiplications with iterative algorithms, but, from a high-level perspective, this is an analytical (non-iterative) method.
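For concreteness, a NumPy sketch of the closed-form (normal equation) solution for least-squares linear regression (the data here is synthetic):

    import numpy as np

    X = np.random.randn(100, 3)                       # features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + 0.01 * np.random.randn(100)      # targets with a little noise

    # Closed form: w = (X^T X)^{-1} X^T y
    w = np.linalg.inv(X.T @ X) @ X.T @ y
    print(w)                                          # approximately [2, -1, 0.5]

(In practice np.linalg.lstsq or np.linalg.solve is numerically preferable to forming the explicit inverse.)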

",2444,,,,,8/19/2021 13:26,,,,1,,,,CC BY-SA 4.0 30267,1,,,8/19/2021 14:23,,2,45,"

I wonder whether there are heuristic rules for the optimal selection of learning rates for different layers. I expect that there is no general recipe, but probably there are some choices that may be beneficial.

The common strategy uses the same learning rate for all layers. Say, take Adam optimizer with lr=1e-4, and this choice performs well in many situations.

However, it seems that convergence to the optimal values of the weights in different layers may happen at different speeds. Say, values in the first few layers are close to the optimum after a few epochs, whereas features in deeper layers typically require many more epochs to get close to a good value.

Are there any rules to choose a smaller (higher) learning rate in the top layers of the network compared with the bottom layers?

Also, neural networks can have different types of layers - convolutional, dense, recurrent, self-attention. And some of them may converge faster or slower.

Has this question been studied in the literature?

Different learning rates for different layers emerge in transfer learning - it is common to tune only the last few layers, and keep other frozen or evolve with a smaller learning rate. The intuition behind this is that the top layers extract generic features universal for all models and it is desirable not to spoil them during fine-tuning.

However, my question is about training from scratch.

",38846,,2444,,11/30/2021 14:42,11/30/2021 14:42,Has the idea of using different learning rates for different layers been explored in the literature?,,0,2,,,,CC BY-SA 4.0 30268,2,,30197,8/19/2021 15:24,,0,,"

In addition to the other answers, I would like to mention that there is a branch in Deep Learning community, that tries to get an intuition on the general problem via solving some simpler problems.

The main complication with working with real-world data is that it doesn't possess a simple analytical description. Data from ImageNet or MNIST belongs to a complicated distribution on nontrivial manifolds, that is rather different from the Gaussian or any other common distribution.

Deep Neural networks are complicated nonlinear functions of data and the model weights, hence in general one is not expected to get a closed-form solution.

However, there are limits in which the dynamics simplify a lot.

Mean field regime

For the case of a single hidden layer and an infinite number of hidden units, one switches from the dynamics of particular weight to the dynamics of the weight distribution. In addition let the learning rate $\eta \rightarrow 0$.

Discrete optimization is replaced by the continuous evolution of PDE. An example of such treatment is given here:

https://www.pnas.org/content/pnas/115/33/E7665.full.pdf

NTK regime

For infinitely wide networks with a specific parametrization (https://arxiv.org/abs/1806.07572), the evolution of the model output is given by a particular differential equation with a fixed kernel.

An example of estimating the convergence of the loss function with the NTK approach, depending on the smoothness of the target function, is given in the paper:

https://arxiv.org/abs/2105.00507

",38846,,,,,8/19/2021 15:24,,,,0,,,,CC BY-SA 4.0 30269,2,,30264,8/19/2021 19:07,,1,,"

Straight theoretical answer:

In theory, yes, it is possible to model this problem as a Reinforcement Learning problem. But in practice, RL is not the most suitable approach for a simple linear maximization with a constraint. For instance, you could use a Lagrangian approach.


Practical analysis on your specific problem

In this specific example, you have a single constraint, $\sum_{i} a_i x_i \le b$, for an $n$-dimensional problem ($n$ = size of $X$). So you might also want to add another bound, like all $x_i > 0$. Otherwise your solution will diverge:

  • $C = [1\; 2\; 3\; 4]$
  • $X = [x_1;\ x_2;\ x_3;\ x_4]$
  • $A = [2\; 3\; 4\; 5]$
  • $b = 10$

Simple example of divergent solution:

$X = \lim_{k \to \infty} [-3k,\ 0,\ 0,\ k]$

Gives you: $C \cdot X = -3k + 0 + 0 + 4k = k$ ✅ the reward grows without bound as $k \to \infty$

Constrained by $A \cdot X = -6k + 0 + 0 + 5k = -k \le 10$ ✅ the constraint stays satisfied for every $k$


Edit after adding $x_i \in [0,1] $ constraints:

You have described the simplest version of Knapsack Problem, where we can split items in fractions.

For this problem, the greedy solution is very simple and effective:

Calculate a new weight vector: $W = C/A = [ c_1 / a_1, c_2/a_2, ... ]$, which represents the ratio of value $c_i$ $/$ cost $a_i$ for each index $i$.

Now, to get the best total value within the limited cost budget $b$, you just need to greedily select the $i$ with the largest ratio $w_i$ and "fill your knapsack" (by increasing $x_i$ continuously) until some bound is hit (see the sketch after this list):

  • If $x_i = 1$ is reached (you have exhausted all of item $i$), then proceed to the next best $w_i$.
  • If the total budget $b$ is reached, then you've finished the algorithm, and that is a guaranteed best solution.
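A short Python sketch of that greedy procedure, using the example numbers from the question:

    # Fractional knapsack: fill x_i in decreasing order of value/cost ratio
    C = [1, 2, 3, 4]          # values
    A = [2, 3, 4, 5]          # costs
    b = 10                    # budget

    x = [0.0] * len(C)
    remaining = b
    for i in sorted(range(len(C)), key=lambda i: C[i] / A[i], reverse=True):
        x[i] = min(1.0, remaining / A[i])   # take as much of item i as the budget allows
        remaining -= x[i] * A[i]
        if remaining <= 0:
            break

    print(x, sum(c * xi for c, xi in zip(C, x)))   # x = [0, 1/3, 1, 1], total value ~7.67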
",49188,,49188,,8/20/2021 3:08,8/20/2021 3:08,,,,3,,,,CC BY-SA 4.0 30270,2,,30261,8/19/2021 23:44,,0,,"

There are various cheat sheets to recommend what predictive model to use in different situations. Here’s a couple to guide you on options for your dataset.

https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html

For this first cheat sheet with your example, you would begin at the start icon. For the first question, >50 samples, the answer is yes. For the second question, predicting a category, the answer is no. For the third question, predicting a quantity, the answer is yes. For the fourth question, <100K samples, the answer is yes. The suggested type of regression analysis to try from this cheat sheet is an SGD Regressor.

https://docs.microsoft.com/en-us/azure/machine-learning/algorithm-cheat-sheet

For this cheat sheet, your example begins with the question of what you want to do. The answer is to predict values; so again we see this should be some type of regression analysis. There are a few appropriate options to try on this one, such as a Poisson Regression.

After deciding which model to use, the next decision is how to implement it. Kimball made a statement a good decade ago: "In today's environment, most organizations should use a vendor-supplied ETL tool as a general rule." This was mainly due to maintenance costs. We've reached the same level of maturity with data science tools that the same logic can be applied to common data science models.

",15027,,15027,,8/20/2021 6:05,8/20/2021 6:05,,,,1,,,,CC BY-SA 4.0 30271,1,,,8/20/2021 3:25,,1,31,"

I am currently using LSTM to try to predict future data in AirPassengers.csv.

This is the current code on my Colab (sorry, the comments are in Japanese): https://colab.research.google.com/drive/16Ntg3dA5ywZvm35PEeMUjHtKhNdsSgbc?usp=sharing

I wanted to use this code to make predictions for a much longer period of time with different data in the future, so I changed the prediction period in the last code block from the original code I referenced from 3 years to 20 years as follows:

#from 
pred_time_length = 12*3
#to
pred_time_length = 12*20

When I do this, I get values like this damped oscillation, and I think that the prediction is probably not working well.

What is the cause of this? Also, what should I change in the code to make it work?

thank you in advance!

",49324,,,,,8/20/2021 3:25,The results are not correct when predicting the future for a very long period of time with LSTM,,0,0,,,,CC BY-SA 4.0 30272,2,,30261,8/20/2021 4:08,,1,,"

Straight answer:

There is no right answer as it depends on many factors. But here are some keywords you can look into:

Keywords about your problem:

  • Time Series
  • Periodic
  • Forecasting
  • Univariate

About the model, I'd recommend checking ARIMA. But before jumping into code, consider the following.

Good problem solving with Data Science is a dynamic process of deeply understanding your business while testing hypotheses with data.

Business:

  • If you are predicting visitors in a beach, for instance, it will probably depend on weather, weekdays and holidays.
  • If it's a bank, you might have more visits on pay-day and people may choose the time with less expected line length.
  • If it's an emergency hospital, you'll find a whole new situation.

So I encourage you to think of your real problem before (or while) diving into math and programming.

Data:

Depending on your available data, you could test some hypotheses. For example: "The weekday won't affect the number of visitors much". Once you are familiar with your data and your business, you can even model it with a simple periodic regression. For example:

  • Everyday my restaurant has 2 main peaks, one for lunch and other for dinner.
  • I'll assume my data is well described by 2 Gaussian distributions across the day.
  • Model it with a simple regression to find the best parameters for your Gaussian curves and then check how well it fits (see the sketch after this list).
  • Check the cases where the model fails and set new hypothesis.
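
As a sketch of that last idea (assuming SciPy is available; the data here are synthetic and the variable names are illustrative), one could fit a sum of two Gaussian-shaped peaks with curve_fit:

import numpy as np
from scipy.optimize import curve_fit

def two_peaks(t, a1, mu1, s1, a2, mu2, s2):
    # Sum of two Gaussian-shaped peaks (e.g. lunch and dinner rush)
    return (a1 * np.exp(-((t - mu1) ** 2) / (2 * s1 ** 2))
            + a2 * np.exp(-((t - mu2) ** 2) / (2 * s2 ** 2)))

hours = np.linspace(0, 24, 200)
visits = two_peaks(hours, 30, 12, 1.5, 45, 19, 2.0) + np.random.normal(0, 2, hours.size)

# Initial guesses for the peak heights, centres and widths
p0 = [20, 11, 2, 40, 20, 2]
params, _ = curve_fit(two_peaks, hours, visits, p0=p0)
print(params)  # fitted (a1, mu1, s1, a2, mu2, s2)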
",49188,,,,,8/20/2021 4:08,,,,1,,,,CC BY-SA 4.0 30273,1,,,8/20/2021 6:19,,1,22,"

In the paper Semi-Supervised Learning by Mixed Label Propagation, they say

One major limitation with most graph-based approaches is that they are unable to explore dissimilarity or negative similarity. This is because the dissimilar relation is not transitive, and therefore is difficult to be propagated.

Why is it so?

",49338,,40434,,8/21/2021 6:09,8/21/2021 6:09,Why is it difficult to propagate intransitive relations over a graph?,,0,0,,,,CC BY-SA 4.0 30274,2,,30250,8/20/2021 6:29,,1,,"

Does the popular packages like PyTorch, Tensorflow, Keras, etc., use this or a variant of this algorithm to find the gradients at a particular point?

Yes. This is effectively what back-propagation is. However, there are a couple of important details:

  • Using a loss function flattens the Jacobian matrix you have into a vector (the gradient), because with a loss function the dimension of the output is $n=1$. This is important, since it is not possible to directly search for a maximum or minimum value of a multi-dimensional output.

  • Back-propagation is resolved as follows:

    • Set up by constructing a computation graph that composes multiple functions. Some libraries set this graph up directly e.g. TensorFlow, whilst others like PyTorch can resolve it dynamically at run time from requested computations (this is convenient and easier to write, but there is a performance cost).
    • Gradients are calculated numerically by resolving function composition in reverse using the chain rule.
    • There is an implied analytic form for the forward calculation, but it is never used directly (a small autograd sketch follows this list).
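
As a small sketch of this (assuming PyTorch), the gradient returned by reverse-mode autograd matches the analytic gradient obtained by the chain rule:

import torch

x = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)
y = torch.sin(x) ** 2          # composition: square(sin(x))
loss = y.sum()                 # scalar loss, so the Jacobian collapses to a gradient
loss.backward()                # reverse-mode traversal of the computation graph

analytic = (2 * torch.sin(x) * torch.cos(x)).detach()  # d/dx sin(x)^2, by the chain rule
print(torch.allclose(x.grad, analytic))  # True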

If yes, will those packages be able to write the analytical forms of all the output variables in terms of input variables?

In theory, yes, as the data to do so is in the computational graphs that are used to run neural networks forward, and is the same data that is used to resolve back propagation. In practice I have not seen this done. There may be an add-on or library that could print out the analytic forms of neural network outputs at any layer, or for the loss function. If there is not one, it would not be too hard to write one.

However, other than as a teaching aid for very small networks, the analytical view of the outputs or loss function is not a practical or useful description of what is going on. The graph view showing how the linear algebra and non-linear activations are composed into a sequence of computations (and reversed step-by-step for gradient calculations) is much easier to comprehend. Many neural network libraries can generate views of this computational graph.

One important take-away though, is that despite the apparent complexity of back-propagation, it is doing exactly what you expect in the question: Calculating the gradient components in a Jacobian.

",1847,,1847,,8/20/2021 6:42,8/20/2021 6:42,,,,0,,,,CC BY-SA 4.0 30275,1,,,8/20/2021 7:21,,0,26,"

I have successfully trained ssd_mobilenet_v2_keras for object detection, with a dataset of about 3700 images. Now I have more images to add. I tried adding only a few images (150-300) to see what happened, but what I obtain is that the training looks good in the first steps, but then there are some really high peaks in the loss function.
At first, I thought the problem was the quality of the pictures, so I removed them and tried to add more or less 300 bigger pictures: nothing changed. Then I tried to add only good pictures (no shadows or lights that may confuse the net, no interference with the object, only images where the objects I want to find are big and centered), but nothing changed.
All the things I have tried lead to the same results:





As you can see, the training looks good at the beginning, but then there are those extremely high peaks that seem to happen at random steps (sometimes after 20.000 steps, sometimes after 2.000).
I tried to train both with and without some data augmentations (random contrast, brightness and saturation adjustments, random rgb-to-grayscale, random horizontal flip, ...) but the results are more or less the same (with data augmentations it's a little better, but still far from good).
Any suggestions on why this happens and how to fix it?



EDIT: unfortunately I didn't take a screenshot at the end of the successful training, I only have this one taken after 6.000 steps (the total number of steps is 50.000), but then the chart followed this trend and ended with these values:
- classification loss: 4.16e-3
- localization loss: 1.11e-3
- total loss: 0.077

",48858,,48858,,8/20/2021 9:58,8/20/2021 9:58,Adding data to training results in loss random peaks,,0,4,,,,CC BY-SA 4.0 30276,2,,30265,8/20/2021 10:11,,2,,"

"Ideally, I would end up with an equation that would allow me to perform the classification without the network".

If you could find such an analytic equation without machine learning, then why train a multi-layer perceptron in the first place? Or to phrase it differently, the MLP you trained is that equation. And I'm not trying to be ironic: if you need an analytic explanation, then don't use a multi-layer perceptron, but move to decision tree algorithms, for example; then you could literally plot the model itself (still hard to interpret depending on the number of features you're using).

If instead you still want to stick with the MLP, then something that you could do to better understand your model is to plot the decision boundaries learned by it. Sklearn has a nice tutorial on how to do it; I copied it and changed the SVM to an MLP just to show that the approach works regardless of the model:

import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.neural_network import MLPClassifier

# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target

h = 0.02  # step size in the mesh

# we create an instance of MLPClassifier and fit our data. We do not scale
# the data, to keep the example minimal
mlp = MLPClassifier().fit(X, y)

# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

# title for the plots
title = "MLP boundries"


# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
# plt.subplot(1, 1)
# plt.subplots_adjust(wspace=0.4, hspace=0.4)

Z = mlp.predict(np.c_[xx.ravel(), yy.ravel()])

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
plt.xlabel("Sepal length")
plt.ylabel("Sepal width")
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.xticks(())
plt.yticks(())
plt.title(title)

plt.show()

The code outputs this plot:

You can see that the boundaries are a crude approximation of the MLP behavior; they are just estimated from a brute-force prediction applied to all points of the 2D grid generated by 2 features of the input data. So the boundaries will also change depending on the features you decide to plot. But it gives you an idea about the relationships learned by the MLP.

If you want more, I stress again that you should train a different model, like a decision tree, random forest or XGBoost; with these models you can compute feature importance scores and literally plot the decision thresholds learned by the models.

",34098,,34098,,8/20/2021 14:01,8/20/2021 14:01,,,,0,,,,CC BY-SA 4.0 30277,1,,,8/20/2021 11:12,,0,207,"

Gradients are used in optimization algorithms. Based on the values of gradients, we generally update the weights of a neural network.

It is known that gradients have a direction, and the direction opposite to the gradient should be used for the weight update. In any function of two dimensions (one input and one output), there are only two possible directions for any gradient: left or right.

Is the number of gradient directions infinite in higher dimensions ($\ge 3$)? Or is the number of possible directions $2n$, where $n$ is the number of input variables?

",18758,,18758,,11/6/2021 23:05,11/6/2021 23:09,How many directions of gradients exist for a function in higher dimensional space?,,1,0,,,,CC BY-SA 4.0 30278,2,,30277,8/20/2021 13:40,,1,,"

Let's look at the definition of gradient:

In vector calculus, the gradient of a scalar-valued differentiable function $f$ of several variables is the vector field (or vector-valued function) $\nabla f$ whose value at a point $p$ is the vector whose components are the partial derivatives of $f$ at $p$. That is, for $f: \mathbb{R}^{n} \rightarrow \mathbb{R}$, its gradient $\nabla f: \mathbb{R}^{n} \rightarrow \mathbb{R}^{n}$ is defined at the point $p=\left(x_{1}, \ldots, x_{n}\right)$ in $n$-dimensional space as the vector: $$ \nabla f(p)=\left[\begin{array}{c} \frac{\partial f}{\partial x_{1}}(p) \\ \vdots \\ \frac{\partial f}{\partial x_{n}}(p) \end{array}\right] $$

First of all, the gradient is not a single value or a vector; it's an operator that, given a function, returns another function (note that in the definition $\nabla f$ maps from $\mathbb{R}^n$ to $\mathbb{R}^n$ again), which can be used to compute a vector for each point of a field. So, it doesn't really make sense to talk of a gradient direction per se, since the direction actually belongs to the individual vectors associated with each point of the field. How many directions do these vectors have? Well, it depends on the field. A plane has 2 dimensions, hence 2 independent directions in which you can move; in the same way, $\mathbb{R}^n$ has $n$ dimensions, hence $n$ independent directions in which you can move.

Note also that in gradient descent the gradient is computed with respect to the cost (loss) function:

$$w_{t+1} = w_t - \alpha \, (\partial C / \partial w)$$

Mathematically this means that:

  • we have a set of weights $w$
  • we use these weights to produce an output given a specific input
  • we then compute an error (or cost, or loss) between the predicted output and the target, i.e. we compute at which point of the field of the cost function we end up due to our current weights
  • we then compute the gradient of the cost function at that point, which tells us the direction of steepest increase of the cost, and finally
  • we update the weights in the direction opposite to that gradient, so that the next evaluation of the cost function gives a lower value (a minimal numerical sketch follows this list).
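
Here is a minimal numerical sketch of that update rule for a toy cost $C(w) = (w-3)^2$, whose gradient is $2(w-3)$:

w, alpha = 0.0, 0.1
for _ in range(50):
    grad = 2 * (w - 3)     # dC/dw at the current point
    w = w - alpha * grad   # step against the gradient
print(w)  # close to the minimiser w = 3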
",34098,,18758,,11/6/2021 23:09,11/6/2021 23:09,,,,1,,,,CC BY-SA 4.0 30279,2,,2795,8/20/2021 15:05,,1,,"

4 years and 6 months have passed since your unanswered question.

That is an eternity in terms of Machine Learning and things have evolved a lot. So I will answer about the present, not the past.

Today, there are some code-generating models, like GitHub Copilot and OpenAI Codex, which are based on NLG (Natural Language Generation). The principle is very simple:

  1. Make a huge (billions of parameters) model to predict the next "word" (token) in a sequence of text.
  2. Train that model with all available data you can find, and it will understand our language and all kinds of concepts, from philosophy to medicine (that's a GPT).
  3. Then you fine-tune it to perform well on programming code, especially on a dataset filled with code and its comments.

And now you have a highly capable model for code prediction (generation) from a comment (imperative natural language query).


There are also other AI strategies for code prediction, like https://www.tabnine.com/

",49188,,,,,8/20/2021 15:05,,,,0,,,,CC BY-SA 4.0 30281,2,,18115,8/20/2021 15:51,,2,,"

One way to handle an arbitrarily long sequence is by adding a STOP signal as one possible token in the sequence, just as in sequence models such as LSTMs.

So you could divide your game in turns:

  • What you now call a single action (composed of multiple sub-actions) would become a turn.
  • Now, you can have as many actions as you'd like inside a turn.
  • Each action is simply accumulated in a list inside the environment, but won't be evaluated by the game yet.
  • When the player is satisfied with their actions, they can call the action: "End Turn".
  • When the turn ends, you can concatenate all sub-actions, evaluate the game, and proceed as usual.
",49188,,,,,8/20/2021 15:51,,,,0,,,,CC BY-SA 4.0 30282,1,30283,,8/20/2021 15:55,,0,34,"

I am trying to solve a navigation problem with PPO; my action space has three parts:

  1. robot linear velocity, in the range $[-3, 3]$ (obtained from a tanh activation function)
  2. robot angular velocity, in the range $[-\pi/6, \pi/6]$ (obtained from a tanh activation function)
  3. robot step-time duration, chosen from $\{0.2, 0.5, 0.8\}$ (obtained from a softmax activation function)

The problem that I face is how to calculate the probability ratio from these separate distributions. Mean or sum? Or is there another way to calculate the log_prob from different distributions, something like the log_prob of a multivariate distribution?

",49132,,,,,8/20/2021 16:15,How to calculate policy probability ratio in multiple action space,,1,0,,,,CC BY-SA 4.0 30283,2,,30282,8/20/2021 16:15,,0,,"

It depends on how you have modelled the probability distributions that you are sampling from when you turn the neural network output into a specific action.

If your three action dimensions are sampled independently (which would be a standard way to model a policy like this), then the probabilities or probability densities multiply, which means the log probabilities add.
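
For instance, here is a minimal sketch (assuming PyTorch) of a policy with two independent continuous components and one discrete component; the joint log-probability of the sampled action is just the sum of the per-part log-probabilities. The specific values are purely illustrative:

import torch
from torch.distributions import Normal, Categorical

mean = torch.tensor([1.2, 0.1])           # network outputs for velocity and angular velocity
std = torch.tensor([0.3, 0.05])
logits = torch.tensor([0.2, 1.5, -0.3])   # logits over the 3 step-time options

cont = Normal(mean, std)
disc = Categorical(logits=logits)

a_cont = cont.sample()
a_disc = disc.sample()

log_prob = cont.log_prob(a_cont).sum() + disc.log_prob(a_disc)
# The PPO ratio is then exp(log_prob_new - log_prob_old) for the same sampled action.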

",1847,,,,,8/20/2021 16:15,,,,1,,,,CC BY-SA 4.0 30284,1,,,8/20/2021 16:52,,2,43,"

In Sutton & Barto (2nd edition), the following is mentioned on page 150 (p. 172 of the pdf), section 7.4:

the importance sampling ratio has expected value one (Section 5.9) and is uncorrelated with the estimate.

How can we prove the importance sampling ratio is uncorrelated with the estimate?

",46214,,2444,,8/21/2021 10:34,8/31/2021 18:05,How to prove importance sampling ratio is uncorrelated with action-value (or state-value) estimate?,,1,0,,,,CC BY-SA 4.0 30285,1,,,8/20/2021 23:37,,2,165,"

Consider the following paragraph from Numerical Computation of the deep learning book.

When $f'(x) = 0$, the derivative provides no information about which direction to move. Points where $f'(x)$ = 0 are known as critical points, or stationary points. A local minimum is a point where $f(x)$ is lower than at all neighboring points, so it is no longer possible to decrease $f(x)$ by making infinitesimal steps. A local maximum is a point where $f(x)$ is higher than at all neighboring points so it is not possible to increase $f(x)$ by making infinitesimal steps. Some critical points are neither maxima nor minima. These are known as saddle points.

In short, points where $f'(x) =0 $ are called critical points, or stationary points.

But, according to mathematical terminology, the definitions are as follows:

#1: Critical point

A function $y=f(x)$ has critical points at all points $x_0$ where $f'(x_0)=0$ or $f(x)$ is not differentiable.

#2: Stationary point

A point $x_0$ at which the derivative of a function $f(x)$ vanishes, $f'(x_0)=0$. A stationary point may be a minimum, maximum, or inflection point.

It can be noticed that the definition given in the deep learning book matches exactly that of stationary points, since the only premise is $f'(x)=0$. The definition for critical points is not apt, since a critical point can also be a point where $f'(x)$ does not exist.

Is there any reason for using the terms critical points and stationary points interchangeably? Is there no need to address the points where $f'(x)$ does not exist?

",18758,,18758,,10/25/2021 0:30,10/25/2021 0:30,Why does critical points and stationary points are used interchangeably?,,2,0,,,,CC BY-SA 4.0 30286,2,,15479,8/20/2021 23:48,,1,,"

As said in the comments, I wouldn't use Machine Learning for that.

You can achieve that result using something like OpenCV.

For example:

  1. Get the "Naked" Background image: If you don't have it, you can easily calculate it by making an average of each image: background = np.mean(images, axis=0)
  2. For each image, calculate the pixel difference between image and background. diffs = [img - background for img in images]
  3. Diff's pixels can be negative, so take the absolute value of each pixel before converting it to grayscale.
  4. If all goes well, you now have a dark noised image, with a bright silhouette of your object.
  5. Set a threshold (e.g. threshold = np.percentile(diff, 95)) and make a binary mask, so now each pixel is 1 for the object silhouette and 0 for the background.
  6. Find the centroid of the object (like calculating the average coordinates for each pixel=1). And there you have it!

Of course, I just described one clear and easy way to do it. But you can find your own best solution.
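
Here is a rough sketch of those steps with NumPy/OpenCV; images is assumed to be a list of same-sized BGR frames loaded elsewhere (a hypothetical variable), and the details are only illustrative:

import cv2
import numpy as np

background = np.mean(images, axis=0)

centroids = []
for img in images:
    # Absolute pixel difference between the frame and the averaged background
    diff = np.abs(img.astype(np.float32) - background.astype(np.float32))
    gray = cv2.cvtColor(diff.astype(np.uint8), cv2.COLOR_BGR2GRAY)
    threshold = np.percentile(gray, 95)
    mask = gray > threshold
    ys, xs = np.nonzero(mask)
    if len(xs) > 0:
        centroids.append((xs.mean(), ys.mean()))  # (x, y) centroid of the silhouette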

  • ✅ Don't need to train a neural network
  • ✅ Don't need to label data
  • ✅ Works for any set of image / background
  • ✅ Precise coordinates
  • ✅ Easy to make, debug and adapt.
  • ✅ Runs fast
",49188,,,,,8/20/2021 23:48,,,,0,,,,CC BY-SA 4.0 30287,2,,21526,8/21/2021 0:06,,1,,"

The model you are looking for is probably a Siamese Neural Network.

It's widely used for biometrics. This kind of model learns to extract relevant features (for example, facial metrics) that are invariant to several conditions (light, angle, or even occlusions) present in the dataset.
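
As a bare-bones sketch of the idea (assuming PyTorch; the architecture and sizes are purely illustrative): the same embedding network is applied to both images, and the distance between the two embeddings is used to decide whether they show the same identity.

import torch
import torch.nn as nn

class Embedder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 64),
        )

    def forward(self, x):
        return self.net(x)

embedder = Embedder()
img_a = torch.randn(1, 3, 96, 96)   # two images to compare (random placeholders)
img_b = torch.randn(1, 3, 96, 96)
distance = torch.norm(embedder(img_a) - embedder(img_b), dim=1)
# Train with a contrastive or triplet loss so that a small distance means "same identity".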

But keep in mind no model is a sci-fi AI, like in FBI movies where the image gets enhanced 1000x. There are natural limitations, so maybe not even the best AI possible could give you a satisfactory answer.

",49188,,,,,8/21/2021 0:06,,,,0,,,,CC BY-SA 4.0 30288,1,,,8/21/2021 0:18,,0,21,"

I have a dataset that contains pairs of a question and an answer. My problem is to train a model that can search for the right answer from the pool of my answers given the newly input question, so this is a kind of answer retrieval problem.

Can anyone provide me a survey and effective approaches for this problem?

",43929,,2444,,8/21/2021 10:45,8/21/2021 10:45,Is there a survey that describes the most effective approaches for an answer retrieval problem?,,0,2,,,,CC BY-SA 4.0 30289,1,31579,,8/21/2021 0:31,,3,93,"

For part of a paper I am writing on Clinical Decision Support Systems (computer-aided medical decision making, e.g. diagnosis, treatment), I am trying to compare Expert Systems with systems based on Machine Learning approaches (Deep Learning, Artificial Neural Networks, etc.).

Specifically, I am currently trying to make a general comparison (if possible) of expert systems with machine learning systems across dimensions of efficiency and complexity, i.e.

  • run-time-efficiency
  • time complexity
  • space complexity

My current line of thinking, after having tried to find literature with limited success, is that, in the case where one is trying to answer questions in a very specific, limited, domain that only requires a few rules (for an expert system), expert systems are relatively "cheap" in terms of these three criteria. However, when a problem/domain becomes more complex, expert systems seem to suffer from the fact that the number of rules needed "explodes", which, I would think, could lead to things such as large search trees or other problems. My feeling from what I have generally read about machine learning approaches is that these adapt better to more complex problems with higher dimensionalities.

I would like to find some information that either confirms/backs up my general impression, or guides me to some other understanding of this.

Unfortunately, I can't seem to find any sources that specifically deal with this kind of comparison. I'm not sure if this is because my problem statement is too wide/vague, I am not searching correctly, there just isn't much literature, or my question doesn't make sense.

Some of the sources I did manage to find are:

Expert systems are still used and important in areas such as robotics and monitoring. However, the complexity of advanced rules systems can lead to performance issues. ANNs are currently managing to overcome such performance issues through scale-out.

Source: Forbes

Unfortunately, this is the most explicit source I've found. However, it doesn't really provide any details on which this claim could backed up, nor would I consider this a solid source, especially not in an academic setting.

Checking for the logical consistency of a set of interrelated logical rules results in the formulation of a satisfiability (SAT) problem [Bezem, 1988]. If one assumes only binary variables, say $n$ of them, then the corresponding search space is of size $2^n$. That is, it can become very large quickly. This is an NP-complete problem very susceptible to the “dimensionality curse” problem [Hansen and Jaumard, 1990]

Source: Yanase J, and Triantaphyllou E, 2019, A Systematic Survey of Computer-Aided Diagnosis in Medicine: Past and Present Developments, page 7

This mentions "dimensionality curse", but in the context of checking for logical consistency of the rules of an expert system, and not really in the context of run-time-efficiency & complexity.

I have found numerous other articles comparing expert systems and machine learning approaches, e.g. Ravuri et al., 2019, Learning from the experts: From expert systems to machine-learned diagnosis models, but none of them, from what I have seen, compare expert systems and machine learning approaches across the dimensions I am interested in.

Would anyone be able to provide some input on what would be aspects in comparing expert systems and machine learning approaches in terms of the efficiency and complexity criteria listed above, and/or, be able to point me in the right direction?

",32471,,,,,10/6/2021 19:01,A comparison of Expert Systems and Machine Learning approaches in terms of run-time-efficiency and time/space complexity,,1,1,,,,CC BY-SA 4.0 30290,2,,30285,8/21/2021 0:45,,1,,"

A critical point of a function $f$ can be

  1. a stationary point (i.e. $f'(x) = 0$), or
  2. a point where the derivative is undefined (for example, in the case of the absolute value function $f(x) = |x|$, $x=0$ is a critical point, as $f$ is not differentiable at $x=0$).

So, all stationary points are critical points.

These notes provide more examples of how to find critical points of a function, so they could be useful.

",2444,,,,,8/21/2021 0:45,,,,4,,,,CC BY-SA 4.0 30292,2,,30285,8/21/2021 4:03,,1,,"

From reading the text, it's clear that the authors are using critical point to mean the same thing as stationary point, so they are not using the proper mathematical definition.

More generally, automatic differentiation will return a gradient at nondifferentiable points. Ask TensorFlow or PyTorch to take the gradient of $\text{ReLU}(x)\big|_{x=0}$. They return 0, even though it should technically return NaN, since it doesn't exist. Theoretically, the question of how to deal with nonsmooth functions is well understood - see subdifferential for example. In the ReLU example it actually returns an element of the subdifferential, but there are pathological examples where this is not true.
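
A quick way to check this claim (assuming PyTorch):

import torch

x = torch.tensor(0.0, requires_grad=True)
torch.relu(x).backward()
print(x.grad)  # tensor(0.) even though the derivative is undefined at x = 0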

",47080,,,,,8/21/2021 4:03,,,,0,,,,CC BY-SA 4.0 30293,2,,29928,8/21/2021 6:25,,2,,"

TL;DR: AlphaZero removed rollouts altogether from MCTS and just uses the current DNN estimates instead.


The single Deep Neural Network has 2 heads:

  • A Value Head (which assigns a score to each state).
  • And a Policy Head (which predicts the score for all possible moves).

Instead of doing rollouts to determine the outcome, it uses the DNN estimates, so it doesn't need to explore too deep.

By relying on the DNN, the MCTS gets even simpler:

  • The probability of each action is a simple normalization of the previously obtained policy head output.
  • Selection chooses the move with "low count, high move probability, and high value" (a sketch of this score follows the list).
  • Expansion is done by the DNN, outputting a value and a policy.
  • Simulation with rollouts: no longer needed.
  • Back-propagation updates nodes using the DNN's value (instead of a rollout outcome).
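
As a rough illustration of that selection rule, here is a sketch (names and the exploration constant are illustrative) of the PUCT-style score used for selection in AlphaZero-like MCTS:

import math

def puct_score(total_value, visit_count, prior, parent_visits, c_puct=1.5):
    # Q term: average back-propagated value of this child
    q = total_value / visit_count if visit_count > 0 else 0.0
    # U term: favours moves with a high policy-head prior and a low visit count
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + visit_count)
    return q + u  # the child with the highest Q + U is selected

The child with the highest score is selected, which naturally trades off the value head estimate against the policy head prior and the visit count.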

Source

",49188,,,,,8/21/2021 6:25,,,,0,,,,CC BY-SA 4.0 30294,1,,,8/21/2021 6:39,,2,688,"

I am a newbie in Computer Vision.

I have a scenario in which I have a stationary camera in a factory. I want to detect whether the technician is working on the machine or not.

Images are like the following:

Technician working:

Technician absent:

Technician not working:

I am confused whether is it a Image classification issue or an Object Detection/Pose Detection problem.

As per my knowledge, this should be a classification problem: I should take multiple images of the condition in which the machine is unattended, and of the condition in which the technician is working on the machine.

I will train the model with different individual technicians on different days wearing different clothes.

Now, if I am in the right direction, how many images do I need to have good accuracy?

I see there are different models on TensorFlow Hub for image classification, like EfficientNet, etc. Which model/architecture will work for me?

I am sorry if I sound noobish.

I can train the model using simple classifier code (like Cat vs Dog), but I want my architecture to understand that there is a specific area in the image which should be checked for whether it is occupied or not, in order to classify properly.

OR

Shall I simply cut out the middle area (where the technician stands) using OpenCV, and then feed that cutout image to some classifier to detect whether there is a human standing there?

Thanks in advance!

",19788,,,,,1/19/2022 18:54,How to classify two very similar images using Deep Learning?,,2,0,,,,CC BY-SA 4.0 30295,2,,11431,8/21/2021 6:48,,1,,"

any subjective ideas or comments are more than welcome

Not a complete answer, but some ideas:

Your goal is subdivided into many tasks. It's not exactly the same as OCR because you also need to find the vertical alignment for each note.


One should be able to read musical notes with lower quality images.

If you want your model to perform on low-quality images, you'll need such a database.

But instead of labeling and taking pictures of printed sheets, you could just generate the images and virtually apply all sorts of distortions to them.

",49188,,,,,8/21/2021 6:48,,,,0,,,,CC BY-SA 4.0 30296,2,,8429,8/21/2021 7:03,,1,,"

That's a very nice challenge!

As always, the hardest part is to get a labeled database (maybe by scraping).

You'd probably need some thousands of drawings, each labeled with the age of the person who drew it.

From there, you need to make an image regression model. Here is a simple example that predicts age from a face photo. It's the same principle, but applied to another database, with other relevant features to be learned by the model.

",49188,,,,,8/21/2021 7:03,,,,0,,,,CC BY-SA 4.0 30297,2,,30294,8/21/2021 10:15,,0,,"

What you are asking about can be treated as a classification problem, indeed. But I would treat it rather as a detection problem.

Given an image, the goal is to draw the bounding box or any other geometric shape around the object of interest. In other words, you would like to localize and identify the object.

I recommend reading this link https://machinelearningmastery.com/object-recognition-with-deep-learning/.

Good repository for detection - https://github.com/ultralytics/yolov5.

",38846,,,,,8/21/2021 10:15,,,,0,,,,CC BY-SA 4.0 30298,1,30323,,8/21/2021 14:22,,1,319,"

I have been working through some search tree problems and came across this one:

Assume that that the algorithm has a closed list and that nodes are added to the frontier in the following order: Up, Right, Down, Left. For example, if node J is expanded: there is no node up from J so nothing is added to the frontier for up. K is right from J so it is added to the frontier, H is down from J so it is added to the frontier, there is no node left from J, so nothing is added to the frontier.

a) Assume that the start node is node F and the goal node is node M. Provide the entire search tree if Depth First Search is employed.

b) Provide the frontier at the time the search terminates

Because I understand how a depth-first search works with regards to the frontier (it is a LIFO queue), I know that the last node added to the frontier would be the next node you need to expand. Using that knowledge, the frontier would be as follows after each expansion:

  1. F
  2. F I B E
  3. E is expanded: F I B H A
  4. A is expanded: F I B H
  5. H is expanded: F I B J
  6. J is expanded: F I B K
  7. K is expanded: F I B L
  8. L is expanded: F I B M

The solution has been found, as we have reached M.

I thus seem to have answered part b of the question, but as for how to draw the search tree, I am stumped. Any ideas would be appreciated.

",48158,,2444,,8/23/2021 12:51,8/23/2021 13:00,How do I create the search tree for DFS applied to a grid map?,,1,0,,,,CC BY-SA 4.0 30299,1,,,8/21/2021 22:38,,1,51,"

Sequence-to-sequence models with attention are known to be limited by a maximum sequence length. So how can we handle sequences of arbitrarily large size? Do we just set a very large maximum sequence length?

",49366,,,,,8/22/2021 12:15,How is Google Translate able to translate texts of arbitrarily large length?,,1,0,,,,CC BY-SA 4.0 30300,1,,,8/21/2021 22:52,,2,139,"

I am studying a chapter named Numerical Computation of a deep learning book. Afaik, it does not deal with flat regions with desired points.

For example, let us consider a function whose local/global minimum or maximum values lies on flat regions. You can take this graph (just) for example.

Can gradient-based algorithms work on those curves with their local/global minima, or do maxima lie on flat regions?

",18758,,2444,,8/23/2021 23:13,8/23/2021 23:13,Do gradient-based algorithms deal with the flat regions with desired points?,,1,0,,,,CC BY-SA 4.0 30301,1,,,8/21/2021 23:23,,1,43,"

A function $f$ is said to be continuous at a point $c$ if it satisfies three properties:

  1. Should be defined at the point $c$
  2. Left and right-hand limits at $c$ must be equal i.e., the limit must exist
  3. Limit value at point $c$ is equal to the actual value of the function at c

In short: $\lim \limits_{x \rightarrow c} f(x) = f(c)$

I want to know whether the functions that we want to learn from real-world data (say, the generator in a GAN), such as images, audio, video, text corpora, etc., are continuous or highly discontinuous in general. If discontinuous, what might be the reason for the discontinuity? I mean, which among the three properties mentioned is violated in the majority of cases?

",18758,,40434,,8/22/2021 6:39,8/22/2021 6:39,Is it true that real world data is highly discontinuous?,,0,3,0,,,CC BY-SA 4.0 30302,1,30305,,8/22/2021 0:21,,3,144,"

The Sutton and Barto reinforcement learning textbook states that

the value of a state under an optimal policy must equal the expected return for the best action from that state.

That is, $$v_*(s) = \max_a q_*(s, a).$$

I am having trouble gaining intuition for this. Since state values can be written as an expectation of the action values under a given policy, I am not sure I see how

$$v_*(s) = \sum_a \pi_*(a|s)q_*(s,a) = \max_a q_*(s, a).$$

I'd appreciate any insights!

",49372,,49372,,8/23/2021 19:24,8/23/2021 19:24,Why must the value of a state under an optimal policy equal the expected return for the best action from that state?,,1,0,,,,CC BY-SA 4.0 30303,1,,,8/22/2021 1:59,,1,36,"

Deep learning is a field in which we need neural networks that are deep enough to carry out our task. The important functions in deep neural networks can be classified into three classes: activation functions, neural network functions, and loss functions.

Activation functions are a part of the neural network function, and the neural network function may be a part of the loss function.

Consider the following paragraphs from Numerical Computation of a deep learning book

Optimization algorithms that use only the gradient, such as gradient descent, are called first-order optimization algorithms. Optimization algorithms that also use the Hessian matrix, such as Newton’s method, are called second-order optimization algorithms (Nocedal and Wright, 2006).

The optimization algorithms employed in most contexts in this book are applicable to a wide variety of functions but come with almost no guarantees. Deep learning algorithms tend to lack guarantees because the family of functions used in deep learning is quite complicated. In many other fields, the dominant approach to optimization is to design optimization algorithms for a limited family of functions.

The last passage is talking about the family of functions used in deep learning. Which class of functions, among the three I mentioned, are they referring to?

",18758,,32621,,8/22/2021 9:46,8/22/2021 9:46,Which class of functions are quite complicated in deep learning?,,1,0,,,,CC BY-SA 4.0 30304,1,,,8/22/2021 6:38,,1,159,"

We can enforce some constraints on the functions used in deep learning in order to obtain some optimization guarantees. You can find this in the Numerical Computation chapter of the deep learning book.

In the context of deep learning, we sometimes gain some guarantees by restricting ourselves to functions that are either Lipschitz continuous or have Lipschitz continuous derivatives.

They include

  1. Lipschitz continuous functions
  2. Having Lipschitz continuous derivatives

The definition given for Lipschitz continuous function is as follows

A Lipschitz continuous function is a function $f$ whose rate of change is bounded by a Lipschitz constant $\mathcal{L}$:

$$\forall x, \forall y, |f(x)-f(y)| \le \mathcal{L} \|x-y\|_2 $$

Now, what is meant by having Lipschitz continuous derivatives?

Do they refer to the derivatives of Lipschitz continuous functions? If yes, then why do they mention it as a separate option?

",18758,,,,,1/15/2023 12:33,"What does it mean ""having Lipschitz continuous derivatives""?",,2,0,,,,CC BY-SA 4.0 30305,2,,30302,8/22/2021 7:37,,4,,"

You have an optimal policy $\pi_*$, and you are in the state $s$. Because the policy is optimal, it will only give probability to optimal actions. Let's say there are 5 actions $a_1, ..., a_5$ from your current state, and two of those are optimal, $a_2$ and $a_4$. Because they are both optimal, their action values will be equal $q_*(s, a_2) = q_*(s, a_4) = q_\text{optimal}$ and the optimal policy can decide to take either action in any possible ratio with the obvious restriction that the policy has to choose an action. This means that $\pi_*(a_2|s) + \pi_*(a_4|s) = 1$ because all the other actions are sub-optimal and the optimal policy would not take those actions, $\pi_*(a_i|s) = 0$ when $i = 1, 3, 5$. Then, you have:

$$ \begin{align*} v_*(s) &= \sum_i \pi_*(a_i|s)q_*(s, a_i) \\ &= q_\text{optimal} \Big( \pi_*(a_2|s) + \pi_*(a_4|s) \Big) + \sum_{a \in a_1, a_3, a_5} \pi_*(a|s)q_*(s, a) \\ &= q_\text{optimal} \end{align*} $$

The optimal actions are optimal in this scenario because they have the largest action-value, so $q_\text{optimal} = \max_a q_*(s, a)$.
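
As a tiny numerical illustration of this argument (a sketch with made-up numbers):

import numpy as np

q = np.array([1.0, 5.0, 3.0, 5.0, 2.0])   # q_*(s, a_i); a_2 and a_4 are optimal
pi = np.array([0.0, 0.3, 0.0, 0.7, 0.0])  # any split between the two optimal actions
v = np.sum(pi * q)
print(v, q.max())  # both are 5.0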

$v_*(s)$ is expected future returns given that you start from state $s$ and follow the optimal policy. $q_*$ is the expected future returns given that you start from state $s$ and take action $a$, then follow the optimal policy. So without the equations above, you can look at $q_*(s, a)$ as a one-step lookahead to evaluate all actions from the current state, and you take the action with the highest action value.

",,user42664,,,,8/22/2021 7:37,,,,0,,,,CC BY-SA 4.0 30306,2,,30303,8/22/2021 9:28,,1,,"

From the phrasing, it seems that complicated refers to the non-convexity of the loss landscapes of neural networks. We do not have formal guarantees of convergence in general for such landscapes. This non-convexity is a property of both the function defined by the neural network, and the particular loss function we use.

In practice though, non-convexity stems from the non-linear activation functions as we almost exclusively use cross-entropy loss when training neural networks.

",32621,,,,,8/22/2021 9:28,,,,0,,,,CC BY-SA 4.0 30309,2,,30299,8/22/2021 12:15,,2,,"

You simply split the sequence into smaller sequences; while there are some long-distance dependencies in language, that is generally not a problem for this.

A sentence would typically be short enough, and very long sentences are composed of shorter clauses which would form independent units (albeit connected with each other).

",2193,,,,,8/22/2021 12:15,,,,2,,,,CC BY-SA 4.0 30311,2,,30300,8/22/2021 19:02,,1,,"

Can gradient-based algorithms work on such curves, whose local/global minima or maxima lie on flat regions?

Yes, with some minor caveats.

All the points on the flat region are equivalent (and in your example, are all valid global minimum points). Gradients outside of the region will point correctly away from that region and gradient descent steps will therefore move parameters towards it.

Provided the step size multiplied by the gradient near the flat region is not too large, then a step taken near it will end up with parameters inside the region. After that, then any further gradients will be zero, so it is not possible to use basic gradient steps to escape it.

In the case of a global minimum, that's fine, you don't care which point in the global minimum you have converged to (otherwise your function to optimise would be different).

In the case of local minima or saddle points, you might care to use optimisation methods that can escape flat areas. Minibatch or stochastic gradient descent can do this because gradient measurements are noisy, whilst momentum algorithms can continue making steps when the immediate gradient signal is zero.

The example function you used is not something you would expect to come across when optimising a machine learning algorithm, although some loss functions do have components that have similar behaviour. For example, triplet loss uses a $\text{max}(d_1 - d_2 + \alpha, 0)$ where $d_1$ and $d_2$ are distances between an anchor image and desired class versus different class respectively, and $\alpha$ is a margin or minimum desirable distance between classes. The details of this are not important unless you want to create a face recogniser or similar - the important detail for your question is that $\text{max}(x, 0)$ is really used in ML as a loss function, and may have a similar shape to your example function. Once used in aggregate with many data examples though, and with regularisation, the shape would not be so simple, and probably would not have any reachable flat minima regions like this.
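
For concreteness, here is a minimal sketch of that $\text{max}(x, 0)$ behaviour for the triplet loss; the numbers are illustrative:

import numpy as np

def triplet_loss(d_pos, d_neg, margin=0.2):
    # Zero (flat) whenever the negative is already farther than the positive by the margin
    return np.maximum(d_pos - d_neg + margin, 0.0)

print(triplet_loss(0.5, 1.0))  # 0.0 -> flat region, zero gradient
print(triplet_loss(0.9, 1.0))  # 0.1 -> non-zero loss, non-zero gradient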

",1847,,2444,,8/23/2021 23:13,8/23/2021 23:13,,,,0,,,,CC BY-SA 4.0 30312,2,,17772,8/22/2021 20:41,,3,,"

Yes and No.

Yes, it is possible to achieve that result. But instead of a self-optimizing Neural Network, I'd recommend another approach:

1. Don't let training time interfere.

If you are trying to train the agent DURING the environment runtime, that's probably the problem. Training time is usually much bigger than evaluation time. And deployed models usually don't train, so it won't be a problem in production.

You can do 2 things about that:

1.1 "Pause" the game during training.

It might look like "cheating", but from your agent's point of view, it is not actually playing during training time. And once again, it's just simulating how it would behave in production.

But if you can't pause it:

1.2 Disable training during runtime.

Store all states and decisions. Wait until the game is over and then you train the whole batch.

Professional chess players don't try to learn during a blitz challenge. But they do study their own games later to learn from their mistakes.

2. Optimizing hyper-parameters for speed.

You could tweak some hyper-parameters (like the size of your NN) looking for a faster model. Keep in mind it would still be an algorithm that always runs in a roughly fixed time, but you can try to make that fixed time short enough for your game.

2.1 Using Machine Learning for this Optimization

There are some meta-learning techniques and other methods, like NEAT, that can automate your search for a simple, effective topology. NEAT already rewards the simplest architectures, penalizing complexity (which usually correlates with running time), but you could also force it to consider running time specifically.

3. Another Network for Another Task

You could make another small network for deciding whether the next move should be accurate or fast. Based on this result, it will choose between precision and speed. This choice could be made by flagging a parameter (like branch prediction) or even by running a whole different algorithm:

NeedForSpeed = TimeEstimation(state)
#Sorry, I couldn't resist the pun!

if NeedForSpeed > 0.8:
    decision = agent.instantReactionDecisionTree(state)
elif NeedForSpeed > 0.5:
    decision = agent.decideStandard(state, branchPrediction=True)
elif NeedForSpeed > 0.2:
    decision = agent.decideStandard(state)
else:
    decision = agent.DeepNN(state)
Bonus: Use Other ML algorithms

Some algorithms have explicit parameters that directly affect the tradeoff for time x precision.

For instance, MCST (Monte Carlos Search Tree) can keep running forever until it explores all possibilities, but it usually finishes before, giving you the best solution found so far.

So, one possibility would be trying some other method instead of Neural Networks.

",49188,,,,,8/22/2021 20:41,,,,0,,,,CC BY-SA 4.0 30314,2,,30304,8/22/2021 21:08,,1,,"

Consider a function $f(x) : \mathcal{R}^m\rightarrow\mathcal{R}^n$ defined for $x \in X$. If $f$ is Lipschitz continuous, it has three main properties:

  1. $f(x)$ is continuous for all $x \in X$
  2. $\frac{d f(x)}{d x}$ exists almost everywhere. Meaning, if the derivative is not defined for $x \in \mathcal{B}$, where the set $\mathcal{B} \subset X$, then $\mathcal{B}$ has measure zero.
  3. $\underset{x \in X}{\sup} \left\lVert \frac{d f(x)}{d x} \right\rVert_2 \leq L$, where $L$ is the Lipschitz constant, and the norm indicates the induced matrix norm (or if $f$ is scalar, just the regular 2 norm).

So it follows that a Lipschitz continuous function is continuous and has a bounded jacobian.

Now if $f$ has a Lipschitz continuous derivative, then it means $\frac{d f}{d x}$ is Lipschitz continuous, i.e. \begin{align*} \left\lVert \dfrac{d f}{d x}\big|_{x = s} - \dfrac{d f}{d x}\big|_{x = t} \right\rVert_2 \leq M \left\lVert s - t \right\rVert_2 \quad s, t \in X \end{align*} where $M$ is the Lipschitz constant. So a function with Lipschitz continuous gradient is continuously differentiable and has a bounded Hessian.
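
As a rough numerical illustration (not part of the formal statement), one can estimate a lower bound on the Lipschitz constant of a scalar function by sampling pairs of points:

import numpy as np

def lipschitz_lower_bound(f, low, high, n_pairs=100000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(low, high, n_pairs)
    y = rng.uniform(low, high, n_pairs)
    # Largest observed ratio |f(x) - f(y)| / |x - y| over the sampled pairs
    return np.max(np.abs(f(x) - f(y)) / (np.abs(x - y) + 1e-12))

print(lipschitz_lower_bound(np.sin, -5, 5))  # close to 1, the Lipschitz constant of sin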

",47080,,,,,8/22/2021 21:08,,,,0,,,,CC BY-SA 4.0 30315,1,30360,,8/22/2021 22:34,,1,76,"

Sutton and Barto, in their book (Reinforcement Learning 2nd Edition) begin the discussion of policy improvement by comparing the action value $q_\pi(s, \pi'(s))$ to the state value $v_\pi(s)$.

What is the intuition behind this comparison?

It seems more natural to me to compare $q_\pi(s, \pi'(s))$ and $q_\pi(s, \pi(s))$. I understand that for deterministic policies $q_\pi(s, \pi(s))$ is the same as $v_\pi(s)$ so mathematically it makes no difference but perhaps conceptually it does?

",49372,,2444,,8/24/2021 21:10,8/25/2021 7:32,What is the intuition behind comparing action values to state values in the policy improvement theorem?,,1,0,,,,CC BY-SA 4.0 30316,1,,,8/23/2021 1:14,,2,198,"

Consider the following from Numerical Computation chapter of Deep Learning book

Machine learning algorithms usually require a high amount of numerical computation. This typically refers to algorithms that solve mathematical problems by methods that update estimates of the solution via an iterative process, rather than analytically deriving a formula to provide a symbolic expression for the correct solution. Common operations include optimization (finding the value of an argument that minimizes or maximizes a function) and solving systems of linear equations. Even just evaluating a mathematical function on a digital computer can be difficult when the function involves real numbers, which cannot be represented precisely using a finite amount of memory.

The paragraph clearly mentions that solving systems of linear equations is a common operation in machine learning. I just know that solving systems of linear equations is useful in reinforcement learning and in some basic machine learning algorithms, including regression.

Is solving systems of linear equations useful anywhere in deep learning?

I think that we use them nowhere, since optimization is generally the only kind of algorithm used in deep learning.

",18758,,,,,10/25/2022 14:03,Do solving system of linear equations required anywhere in contemporarty deep learning?,,2,0,,,,CC BY-SA 4.0 30317,1,,,8/23/2021 2:28,,1,147,"

I know that mAP (mean Average Precision) is the common evaluation metric for the object detection tasks. It uses IoU (Intersection over Union) threshold such as mAP@0.5 to evaluate whether the predicted box is TP (True Positive), FP (False Positive), or FN (False Negative).

But I am confused about the role of the classification score in this metric, since positives and negatives are determined by the IoU, not by the classification score. So, what is the role of classification scores in mAP evaluation?

Let's describe it by example, suppose there is a single object in an image with the ground-truth as follows:

  • Bounding boxes: [[100, 100, 200, 200]]
  • Class Index: [0]

Then the prediction of the object detection model resulting as follows:

  • Bounding boxes: [[100, 100, 200, 200], [100, 100, 200, 200], [100, 100, 200, 200]]
  • Class Indexes: [3, 2, 0]
  • Class Scores: [0.9, 0.75, 0.25]

When I tried to calculate the mAP using this library: https://pypi.org/project/mapcalc/

The mAP score is 1.0. So I am confused: from the mAP point of view, is this prediction counted as a correct prediction? What is the role of the classification score in this case? Should we also define a classification score threshold when using mAP?

",49384,,,,,10/2/2022 19:04,Role of confidence or classification score in object detection mAP metrics,,1,1,,,,CC BY-SA 4.0 30318,1,,,8/23/2021 2:31,,0,59,"

A potential way to solving AI is via whole brain simulation. Currently we have the algorithm to model a human brain (albeit far from perfectly): https://thenextweb.com/news/theres-an-algorithm-to-simulate-our-brains-too-bad-no-computer-can-run-it

It is estimated that we might need 100 petaflops to 1 exaflop of computing power to run a brain simulation in real time. Google's Tensor Processing Unit pods, however, have already achieved 1 exaflop of computing power a while back: https://spectrum.ieee.org/heres-how-googles-tpu-v4-ai-chip-stacked-up-in-training-tests

Since these brain simulations are basically giant spiking neural networks, can they be run on Tensor Processing Units (TPUs), which are specifically designed for neural networks? Since TPU pods can do an exaflop, they might pack enough power to finally run a whole brain simulation?

",49385,,40434,,8/23/2021 12:02,8/23/2021 12:02,Can brain simulation be done using Tensor Processing Units?,,0,5,,,,CC BY-SA 4.0 30319,2,,30316,8/23/2021 8:56,,1,,"

I guess a first distinction should be made between deep learning as a whole and deep learning as an architecture. I think the paragraph you quote refers to solving systems of linear equations as a simple operation involved in deep learning generically. And this is definitely the case: when training a deep model we're constantly working with systems of linear equations; think about the way weights and biases are applied to an input:

$Wx + b$

this is in itself a system of linear equations. Then, moving away from training and model architectures, in deep learning there is still massive use of dimensionality-reduction techniques in the pre- or post-processing of the data, like SVD or PCA, which also consist in solving systems of linear equations (and we could add any matrix factorization technique, relevant especially in the early methods for word-embedding training before the advent of transformers).
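
As a minimal illustration of both operations mentioned above (a sketch using NumPy; the shapes and values are arbitrary):

import numpy as np

W = np.random.randn(4, 3)   # weights of a layer with 3 inputs and 4 outputs
b = np.random.randn(4)
x = np.random.randn(3)
h = W @ x + b                # the affine transformation applied by a dense layer

A = np.random.randn(3, 3)
y = np.random.randn(3)
sol = np.linalg.solve(A, y)  # solving the linear system A sol = y directly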

",34098,,,,,8/23/2021 8:56,,,,0,,,,CC BY-SA 4.0 30321,1,,,8/23/2021 11:23,,1,15,"

What methods are used to localize an object in a video and classify that object?

Example: I have a camera which detects a pickup truck driving into one of three garages (1, 2, 3). I need to know whether the truck was loaded or not (classification) and which garage it picked (localization). What would a schematic workflow for this problem look like?

It is assumed that the camera is mounted in a fixed position.

",48548,,,,,8/23/2021 11:23,How to localize and classify objects in video,,0,0,,,,CC BY-SA 4.0 30322,2,,28245,8/23/2021 11:31,,2,,"

you do not need ai for that, just a little bit of math / statistics:

audio: https://m.soundcloud.com/user-919775337/sets/algorithmic-reinterpretation

method: https://stats.stackexchange.com/questions/541044/a-new-method-for-processing-music-scores

source code: https://github.com/githubuser1983/algorithmic_python_music/blob/main/12RootOf2.py

",49368,,,,,8/23/2021 11:31,,,,2,,,,CC BY-SA 4.0 30323,2,,30298,8/23/2021 12:48,,2,,"

To draw the search tree, you just need to add as children the nodes that you found (i.e. the nodes that you add to the queue and that you may expand next). So, in your case, the root node of the tree would be $F$, which would have the children $I$, $B$, and $E$. Then $E$ would have the children $H$, $F$ and $A$, and so on.

So, here's a simple illustration of this partially constructed search tree.

   F
  /|\
 / | \
I  B  E
     /|\
    / | \
   H |F| A
        /|\
       / | \

Note that I added $F$ again to the search tree, but you should not expand it again, otherwise, you end up looping forever. I denoted it by |F| to differentiate it from the others. Moreover, note that the creation of the search tree does not really depend on the actual problem, but on the search algorithm and how you expand nodes/states.
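
If it helps, here is a small sketch in Python of the same construction: DFS with a closed list over a hypothetical (and only partial) adjacency map, recording the parent-child edges of the search tree as nodes are expanded. The adjacency dictionary below is illustrative, not the full grid map.

def dfs_tree(adjacency, start, goal):
    tree = {}                      # expanded node -> children added to the search tree
    frontier, closed = [start], set()
    while frontier:
        node = frontier.pop()      # LIFO: expand the most recently added node
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            return tree
        children = adjacency.get(node, [])
        tree[node] = children
        # Only unexplored children are pushed, so a repeated node (like |F|) is never expanded again
        frontier.extend(n for n in children if n not in closed)
    return tree

adjacency = {"F": ["I", "B", "E"], "E": ["H", "F", "A"]}  # illustrative fragment only
print(dfs_tree(adjacency, "F", "M"))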

Here you can find a nice step-by-step example of how to construct the search tree of DFS, in case my explanation above is not clear enough. You can also find more info about this topic in the book Artificial Intelligence: A Modern Approach by Russell and Norvig (you can also find freely downloadable pdfs of the 3rd edition on the web), specifically, chapter 3 "Solving Problems by Searching".

",2444,,2444,,8/23/2021 13:00,8/23/2021 13:00,,,,0,,,,CC BY-SA 4.0 30324,1,30325,,8/23/2021 14:45,,1,33,"

Has there been any experimentation in designing an AI that prompts a human to judge the accuracy of its outcomes? Instead of using a loss function, a human can judge the accuracy of its estimation using some kind of metric, which it can then use to update its weights.

I was looking for some feedback on whether this is a plausible idea.

I was thinking that for domains that lack sufficient training data to solve problems this could be a possible solution.

Of course, it isn't feasible to judge every iteration of a training loop. So maybe feedback could be provided for the average of a number of estimations. Maybe every 100 estimations you could provide feedback.

It may not be a great training method because of the sparsity of feedback, but it could provide a place to start if you don't have a lot of data to throw at your problem initially.

",20271,,,,,8/23/2021 16:39,Using Human Confirmation in place of a loss Function for Training,,1,0,,,,CC BY-SA 4.0 30325,2,,30324,8/23/2021 16:39,,1,,"

What you are suggesting is similar to active learning and reward modelling.

To summarize both quickly, active learning is used when data are scarce or when the labeling process is too time consuming (almost always the case in NLP). To speed up the process, the idea is to train a model to perform not only a task, but also an estimation of its uncertainty when performing that task. For example, the model could output a classification and a probability associated with that classification that tells how much the model is 'unsure' of that classification. These probabilities can then be leveraged to collect a bunch of out-of-training data where the model performs poorly (i.e. low probability scores). These data are then annotated by a human, and used to expand the initial dataset and retrain the model. The whole process is meant to minimize the amount of annotation while maximizing the performance of the model. Some people use active learning also on large datasets, the idea being that many training instances are similar to each other and redundant for the model, so by focusing on training the model on instances that are particularly difficult for the model to capture, the training time can be reduced by a large margin.

Reward modelling is more interesting, and it is related to reinforcement learning. Unlike classic deep learning, which focuses mainly on the design of loss functions, in reinforcement learning the main component is the reward function. A reward function is much trickier to design compared to a loss function, because it's usually impossible to predict all the non-useful strategies that an agent might learn (e.g. running in circles to avoid hitting obstacles). So, to compensate for these difficulties, some people came up with the idea of letting artificial agents design their own reward function, with the only constraint being an external human annotator penalizing dumb functions that emerge during the process. It's a bit hard to explain the idea briefly, but I personally find it really fascinating, and I suggest you also take a look at this video for a more complete explanation.

",34098,,,,,8/23/2021 16:39,,,,0,,,,CC BY-SA 4.0 30328,1,30362,,8/23/2021 19:52,,0,59,"

According to Reinforcement Learning (2nd Edition) by Sutton and Barto, the policy improvement theorem states that for any pair of deterministic policies $\pi'$ and $\pi$, if $q_\pi(s,\pi'(s)) \geq v_\pi(s)$ $\forall s \in \mathcal{S}$, then $v_{\pi'}(s) \geq v_\pi(s)$ $\forall s \in \mathcal{S}$.

The proof of this theorem seems to rely on $\pi$ and $\pi'$ being identical for all states except $s$. To my best understanding, this is what allows us to write the expectation $\mathbb{E}[R_{t+1} + \gamma v_\pi(S_{t+1})|S_t = s, A_t = \pi'(s)]$ as $\mathbb{E}_{\pi'}[R_{t+1} + \gamma v_\pi(S_{t+1})|S_t = s]$ in line 2, which is central to the proof (reproduced from the book below).

\begin{aligned} v_{\pi}(s) & \leq q_{\pi}\left(s, \pi^{\prime}(s)\right) \\ &=\mathbb{E}\left[R_{t+1}+\gamma v_{\pi}\left(S_{t+1}\right) \mid S_{t}=s, A_{t}=\pi^{\prime}(s)\right] \\ &=\mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma v_{\pi}\left(S_{t+1}\right) \mid S_{t}=s\right] \\ & \leq \mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma q_{\pi}\left(S_{t+1}, \pi^{\prime}\left(S_{t+1}\right)\right) \mid S_{t}=s\right] \\ &=\mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma \mathbb{E}_{\pi^{\prime}}\left[R_{t+2}+\gamma v_{\pi}\left(S_{t+2}\right) \mid S_{t+1}, A_{t+1}=\pi^{\prime}\left(S_{t+1}\right)\right] \mid S_{t}=s\right] \\ &=\mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2} v_{\pi}\left(S_{t+2}\right) \mid S_{t}=s\right] \\ & \leq \mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2} R_{t+3}+\gamma^{3} v_{\pi}\left(S_{t+3}\right) \mid S_{t}=s\right] \\ & \vdots \\ & \leq \mathbb{E}_{\pi^{\prime}}\left[R_{t+1}+\gamma R_{t+2}+\gamma^{2} R_{t+3}+\gamma^{3} R_{t+4}+\cdots \mid S_{t}=s\right] \\ &=v_{\pi^{\prime}}(s) \end{aligned}

Does this mean that the proof is merely proving the special case of the policy improvement theorem for when the policies are identical except at $s$? I am having trouble seeing why the proof holds for the more general case of the two policies being potentially different for all states. In that case, line 2 would not hold and the theorem would not hold for all states as it claims to do.

",49372,,49372,,8/24/2021 22:33,8/25/2021 9:09,Policies for which the policy improvement theorem holds,,1,0,,,,CC BY-SA 4.0 30329,1,,,8/23/2021 20:59,,1,345,"

In the vanilla version of deep Q-learning, there are three places where the Q-network is queried:

  1. When exploring.

  2. When training:

    a. When calculating the optimal value of the state reached by an action (so as to compute a target discounted reward).

    b. When calculating the optimal Q-value for a given state, during training (so as to nudge the network weights and better reproduce the observed reward).

Now, during which steps should batch normalization and dropout be activated?

I couldn't find anything through a Google search.

Here are my guesses, for each step:

  1. When exploring: activate batch normalization and dropout: this lets the normalization statistics be learned, and gives uncertain Q-values a chance to be selected even if they are relatively low (because dropout can result in a Q-value prediction higher than its average).

  2. When training:

    a. Do not activate batch normalization and dropout for calculating the optimal state value of the state reached by an action, because we want the Bellman equation to converge faster and therefore prefer stable (optimal state value) targets.

    b. Activate batch normalization and dropout when calculating the Q-value of a chosen action, as this is the whole idea of dropout (we use it during training).

What is the common wisdom on this?
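To make my guesses concrete, here is roughly where I imagine the flags being toggled in a Keras-style sketch (the tiny network, data and action indexing are placeholders, not my real setup):

import numpy as np
import tensorflow as tf
from tensorflow import keras

q_network = keras.Sequential([
    keras.layers.Dense(32, activation="relu", input_shape=(4,)),
    keras.layers.BatchNormalization(),
    keras.layers.Dropout(0.2),
    keras.layers.Dense(2)])
q_network.compile(loss="mse", optimizer="adam")
target_network = keras.models.clone_model(q_network)

states = np.random.rand(8, 4).astype("float32")
next_states = np.random.rand(8, 4).astype("float32")
rewards, gamma = np.random.rand(8).astype("float32"), 0.99

# 1. Exploring: stochastic forward pass, so BN/dropout in training mode
q_explore = q_network(states, training=True)

# 2a. Bootstrap target for the reached state: deterministic pass, stable targets
next_q = target_network(next_states, training=False)
targets = rewards + gamma * tf.reduce_max(next_q, axis=1).numpy()

# 2b. Q-value of the chosen action: a normal training step, BN/dropout active
full_targets = q_network(states, training=False).numpy()
full_targets[:, 0] = targets            # pretend action 0 was taken everywhere
q_network.fit(states, full_targets, epochs=1, verbose=0)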

",10583,,,,,8/26/2021 18:34,When to activate batch normalization and dropout in deep Q-learning?,,1,0,,,,CC BY-SA 4.0 30330,1,,,8/23/2021 23:15,,0,421,"

Calculus is a branch of mathematics that primarily deals with the rate of change of outputs of a function w.r.t the inputs.

It contains several concepts including limits, first-order derivatives, higher-order derivatives, chain rule, derivatives of special and standard functions, definite integrals, indefinite integrals, derivative tests, gradients, higher-order gradients, and so on...

Calculus has been heavily used in optimization and maybe in several other aspects of artificial intelligence.

Which calculus textbook(s) would you recommend that cover all the concepts required for a researcher in artificial intelligence?

",18758,,18758,,11/14/2021 23:43,11/14/2021 23:43,What are the Calculus books recommended for beginner to advanced researchers in artificial intelligence?,,1,1,,,,CC BY-SA 4.0 30331,2,,30185,8/23/2021 23:30,,1,,"

So let's first draw the network in each case, to better visualize the problem:

Initial:
┌─┐
│A│
└┬┘
┌▽┐
│B│
└┬┘
┌▽┐
│C│
└─┘

1st Generation:

1st Gen
Mutation #1   Mutation #2
┌────┐        ┌────┐
│A   │        │A   │
└┬──┬┘        └┬───┘
┌▽┐ ┆         ┌▽───┐
│D│ ┆DIS      │B   │
└┬┘ ┆         └┬──┬┘
┌▽───┐        ┌▽┐ ┆
│ B  │        │E│ ┆DIS
└┬───┘        └┬┘ ┆
┌▽───┐        ┌▽───┐
│ C  │        │ C  │
└────┘        └────┘

2nd Generation

1st Hypothesis (Same Result)
┌────┐
│A   │
└┬──┬┘
 ┆ ┌▽┐
DIS│D│
 ┆ └┬┘
┌───▽┐
│B   │
└┬──┬┘
 ┆ ┌▽┐
DIS│E│
 ┆ └┬┘
┌───▽┐
│C   │
└────┘

In order to make the 2 mutations converge like this, we would need to:

  • Somehow make topology accessible globally.
  • Make one mutation search through the population, looking for an occurrence of a similar split.

Both items go against the principle of evolution, as only 2 mating individuals should trade genetic information.

2nd Hypothesis (Two Mutants)
┌────┐   ┌────┐
│A   │   │A   │
└┬──┬┘   └┬──┬┘
 ┆ ┌▽┐    ┆ ┌▽┐
DIS│D│   DIS│G│
 ┆ └┬┘    ┆ └┬┘
┌───▽┐   ┌───▽┐
│B   │   │B   │
└┬──┬┘   └┬──┬┘
 ┆ ┌▽┐    ┆ ┌▽┐
DIS│F│   DIS│E│
 ┆ └┬┘    ┆ └┬┘
┌───▽┐   ┌───▽┐
│C   │   │C   │
└────┘   └────┘

Yes. This 2nd approach is the most usual. Each mutation generates a completely new gene.

┌────┐
│A   │
└┬──┬┘
┌▽┐┌▽┐
│8││D│
└─┘└┬┘
┌───▽┐
│B   │
└┬──┬┘
┌▽┐┌▽┐
│u││F│
└─┘└┬┘
┌───▽┐
│C   │
└────┘


Looks like too complicated a genome for a 3rd generation, doesn't it?

And yes, a 3rd generation could theoretically end up with a genome as complicated as this.[1]

┌───────┐
│A      │
└┬──┬──┬┘
┌▽┐┌▽┐ ┆
│G││D│DIS
└┬┘└┬┘ ┆ 
┌▽──▽───┐
│B      │
└┬──┬──┬┘
┌▽┐┌▽┐ ┆ 
│F││E│DIS
└┬┘└┬┘ ┆
┌▽──▽───┐   
│C      │   
└───────┘   

But keep in mind that more complex structures are naturally harder to train and therefore will not survive if the performance does not justify the complexity.

[1] - Usually disjoint genes and excess genes (those that do not match) are chosen from the fittest parent. However, in (very specific) cases where 2 parents have the same fit, it could be chosen randomly.

Main source: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.5457&rep=rep1&type=pdf


In the end, you propose an interesting method, to use hashes instead of integers for identifying genes. This is a bold suggestion, and I'm not an expert, so this part is mostly opinionated:

I believe the original gene-numbering idea is:

  • Inspired by natural gene sequencing
  • Meant to ensure unique gene identification
  • And to track history changes, just like Git.

But it could also work using a hash.
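A rough sketch of the two identification schemes, to make the comparison concrete (the function names and the caching detail are my own illustration, not code from the NEAT paper):

import itertools

# Vanilla NEAT: a global innovation counter shared within one run/population;
# the same structural mutation appearing twice in a generation reuses the same number
_innovation_counter = itertools.count()
_innovation_cache = {}

def innovation_by_counter(in_node, out_node):
    key = (in_node, out_node)
    if key not in _innovation_cache:
        _innovation_cache[key] = next(_innovation_counter)
    return _innovation_cache[key]

# Hash-based variant suggested in the question: the id is derived purely from the
# connection itself, so unrelated populations agree on it "for free"
def innovation_by_hash(in_node, out_node):
    return hash(("connection", in_node, out_node))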

On one hand, that could be a great feature, allowing distant relatives to breed with consistency. On the other hand, distant relatives tend to have very different structures, and the gene might not be so relevant to them.

I encourage you to test it. And if it works better than the vanilla, you should release a paper!

",49188,,,,,8/23/2021 23:30,,,,2,,,,CC BY-SA 4.0 30333,1,,,8/24/2021 1:21,,0,46,"

Consider the following paragraph from Numerical Computation of deep learning book that says derivative as a slope of the function curve at a point

Suppose we have a function $y= f(x)$, where both $x$ and $y$ are real numbers. The derivative of this function is denoted as $f'(x)$ or as $\dfrac{dy}{dx}$ . The derivative $f'(x)$ gives the slope of $f(x)$ at the point $x$. In other words, it specifies how to scale a small change in the input to obtain the corresponding change in the output: $f(x+ \epsilon) \approx f(x)+\epsilon f'(x)$.

The slope of a function $f(x)$ at a point $a$ is generally defined as the $\tan$ of the angle made by the tangent line at the point $a$ on the curve of $f(x)$ with the positive x-axis, measured in the anticlockwise direction. That is, if $\theta$ is the angle made by the tangent of the curve $f(x)$ at a point $(a, f(a))$ with the positive x-axis in the anticlockwise direction, then the slope of $f(x)$ at the point $a$ is $\tan \theta$.

In theory, the tangent line should touch the curve of $f(x)$ at a single point only. Most textbooks draw nice convex curves and then show the slope as $\tan \theta$. But I think that, for many functions, it is not possible to draw a tangent line at a point that touches the curve at that single point only; the line may touch or cross the curve at other points as well.

How to understand slope as $\tan \theta$ in such cases? Where am I going wrong?

",18758,,18758,,8/24/2021 8:07,1/16/2023 11:01,How to understand slope of a (non-convex) function at a point in domain?,,1,0,,,,CC BY-SA 4.0 30334,2,,30333,8/24/2021 5:22,,0,,"

Slope is not defined like this. You are confusing slope with angle. The slope definition will be more natural, as seen below.

Intuitively, the slope of a curve at a point is simply how steeply it changes as a function of the change in horizontal distance: a flat road doesn't change in height as you drive on it, but driving on a ramp constantly elevates you as you drive up it. Typically this is understood as a ratio, so a 45-degree angle has a slope of 1, and a 60-degree angle has a slope of $\sqrt3$. This will be formalized in trigonometry, but if I recall correctly, the definition can go the other direction as well (the ratio approach defines the angle).

We use ratio measurements of slopes instead of degree measurements of angles because calculating the derivative using the limit definition gives the slope as a ratio directly. This graphic shows the idea rather nicely, but this article provides a clearer exposition.

There's also the case that the slope as a ratio makes more sense since the backpropagation technique is effectively adding/subtracting $\epsilon$ to some $x$ given a known $\Delta f(x)$, the error signal, relying on the equation you posted above: $f(x+\epsilon) \approx f(x) + \epsilon f'(x)$.

We can rewrite the equation above to obtain $f(x) - f(x+\epsilon) \approx - \epsilon f'(x)$. Since $\Delta f(x)$ and $f'(x)$ are known, we can also compute $\epsilon$, but note that only the right-hand expression of $\epsilon$ is computed, since $f(x) - f(x+\epsilon)$ is obtained as $\Delta f(x)$. We can resolve this with the update rule $x \rightarrow x - \epsilon$.
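A quick numerical check of $f(x+\epsilon) \approx f(x) + \epsilon f'(x)$, taking $f(x) = x^2$ as an example:

def f(x):
    return x ** 2

def f_prime(x):
    return 2 * x

x, eps = 3.0, 0.01
exact = f(x + eps)                    # 9.0601
approx = f(x) + eps * f_prime(x)      # 9.06
print(exact, approx, exact - approx)  # the error is eps**2 = 0.0001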

",6779,,6779,,8/24/2021 5:42,8/24/2021 5:42,,,,5,,,,CC BY-SA 4.0 30335,1,,,8/24/2021 6:28,,1,101,"

I've recently been reading a lot about style transfer, its applications and implications. I understand what the Gram matrix is and does. I can program it. But one thing that has been boggling me is: how does the VGG style loss incorporate color information into the style?

In the paper "Texture Synthesis by CNNs", Gatys et al. show that minimizing the MSE between the Gram matrices of a random white noise image and a "target texture" yields new instances of that texture, with stochastic variation. I understand that this must work, as the Gram matrix measures the correlation between features detected by the VGG activations across channels, without spatial relation. So if we optimize the white noise image to have the same Gram matrix, it will exhibit the same statistics, and hence look like an instance of the original texture.

But how does this work with color? Of course, the VGG could learn something like a mean filter, with all ones, whose output would be the avg. color over that filter kernel. After all, "color" is just another statistic. But then when using that in conjunction with the Gram loss, wouldn't this information be lost, as it's all just correlation and hence "relative" to each other?

While writing this question, I'm starting to think of it like this: Maybe the feature correlation expresses these color constraints in some form like: "if one part is red, there must be a green part close to it" (for the radish), or "if there is a rounded edge, one side of it must be in shadow (=darker)" in case of the stone texture. This would tie color to the surrounding statistics (e.g., edges, other colors) and is the only reason I can think of why this works at all.

Can somebody confirm/refute this, and share their thoughts? Happy to discuss!

Image Source: Texture Synthesis by Convolutional Neural Networks, Gatys et al.

",44342,,,,,1/16/2023 15:05,How does a VGG-based Style-Loss incorporate color information?,,1,0,,,,CC BY-SA 4.0 30336,2,,28388,8/24/2021 7:11,,3,,"

First, allow me to draw it for better visualization:

1. (α=-∞,β=∞) from B ➡ D

             B (α=-∞,β=∞)
          ↙ / \
(α=-∞,β=∞) D   E
           |   |
        -7=J   K=0
--------------------------
2. (v=-7) J ➡ D α=max(-7,-∞)=-7

             B (α=-∞,β=∞)
α=max(-7,-∞)/ \
(α=-7,β=∞) D   E
      ↖    |   |
        -7=J   K=0
--------------------------
3. (α=-7) D ➡ B β=min(∞,-7)=-7

               β=min(∞,-7)
             B (α=-∞,β=-7)
         ↗  / \
(α=-7,β=∞) D   E
           |   |
        -7=J   K=0
--------------------------
4. (α=-∞,β=-7) from B ➡ E

             B (α=-∞,β=-7)
            / \       ↓
(α=-7,β=∞) D   E (α=-∞,β=-7)
           |   |
        -7=J   K=0
--------------------------
5. (v=0) K ➡ E β=min(0,-7)=-7

             B
            / \ β=min(0,-7)
(α=-7,β=∞) D   E (α=-∞,β=-7)
           |   |    ⬈?
        -7=J   K=0
--------------------------


Update: I found this simulator. It behaves exactly like you describe:

http://homepage.ufp.pt/jtorres/ensino/ia/alfabeta.html

  • Enter Tree Structure: 2 3 1 1 1 1 1
  • Enter Values -7 0 -4 -10

I noticed that it tries to update β on node K->E. As β=min(0,-7), it won't change.

It's possible to check their internal code by inspecting the page, to debug even further.
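For completeness, here is a minimal generic alpha-beta implementation over an explicit tree (my own sketch, not the simulator's internal code), using the B/D/E/J/K values above, where B is a MIN node and D, E are MAX nodes:

def alphabeta(node, alpha, beta, maximizing):
    """node is either a leaf value (a number) or a list of child nodes."""
    if not isinstance(node, list):        # leaf
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:             # beta cut-off: prune remaining children
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:             # alpha cut-off
                break
        return value

tree = [[-7], [0]]                        # B -> D -> J=-7 and B -> E -> K=0
print(alphabeta(tree, float("-inf"), float("inf"), False))   # prints -7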

",49188,,49188,,8/29/2021 13:22,8/29/2021 13:22,,,,3,,,,CC BY-SA 4.0 30338,1,,,8/24/2021 8:21,,1,79,"

In AIMA, performance measure is defined as something evaluating the behavior of the agent in an environment.

Rational agents are defined as agents acting so as to maximize the expected value of the performance measure, given the percept sequence they have seen so far.

Goal-based agents are those acting to achieve their goals. Utility-based agents are those trying to maximize their own expected "happiness".

Now, can we say these two design approaches induce performance measures?

What I suggest is that in goal-based agent design we want to find a point satisfying some conditions, so it's an optimization problem with a zero objective function and either this function is the performance measure or performance measure is optimized if and only if we find a solution to this optimization problem with zero objective function. In the utility-based agent design, we have an objective function (as a performance measure) that we want to optimize, and the agent has its own utility function, which it wants to optimize, and this utility function is optimized if and only if our objective function is optimized.

",48908,,48908,,8/29/2021 9:17,8/29/2021 9:17,Are goal-reaching and optimizing the utility function special cases of performance measure?,,0,1,,,,CC BY-SA 4.0 30339,2,,30239,8/24/2021 8:28,,1,,"

An Image is Worth 16X16 Words:

TRANSFORMERS FOR IMAGE RECOGNITION AT SCALE

A Transformer consists of alternating layers of multi-headed self-attention and MLP blocks.

This paper adapts an NLP architecture to image classification. For that, it first needs to tokenize the image (like a piece of text). The tokenization is done by splitting the image into fixed-size patches and then embedding them.

In other words, they treat a picture as a set of sub-images, just like a phrase is a set of words.
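A rough sketch of that tokenization step in plain NumPy (the sizes are just the common 224x224 image / 16x16 patch example; the learned linear embedding that follows each patch is omitted):

import numpy as np

image = np.random.rand(224, 224, 3)       # H x W x C
patch = 16                                # fixed patch size, as in the paper title

# split into non-overlapping 16x16 patches and flatten each one into a "word"
patches = image.reshape(224 // patch, patch, 224 // patch, patch, 3)
patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * 3)
print(patches.shape)                      # (196, 768): 196 tokens of dimension 768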

Relational Deep Reinforcement Learning

The Reinforcement Learning paper uses self-attention to iteratively reason about the relations between entities in a scene and to guide a model-free policy.

Focused on game playing, they decide to model a non-local concept of entities (players, objects) and the relationship between them.

So instead of tokenizing the image, they first extract entities from it using a more standard approach, namely convolution. The multi-head attention layer then receives a set of entities, and its output is normalized so that it is once again formatted as an entity set, so the process can be applied iteratively.


TL;DR: Same base (very promising) technique, but applied to 2 drastically different models.

",49188,,49188,,8/24/2021 17:15,8/24/2021 17:15,,,,0,,,,CC BY-SA 4.0 30340,2,,30330,8/24/2021 8:49,,2,,"

Answer: Calculus by James Stewart is the best for a beginner.

I started learning calculus while studying engineering, with James Stewart's Calculus (maybe the best for beginners, and really didactic), Demidovich's Problems in Mathematical Analysis (best for me because of its simplicity and speed, but with little multivariable focus and difficult to learn from), Nikolai Piskunov's Differential and Integral Calculus (again difficult to learn from, but teachers used it to prepare their tests), Swokowski's Calculus with Analytic Geometry, Louis Leithold's Calculus and Purcell's Calculus. These are the popular base books for an engineering degree in most universities.

However, the best way to approach calculus for artificial intelligence is to focus on the chapters that are directly related to AI, which are:

  1. Multivariable Calculus (this also helps you understand linear algebra faster: eigenvalues & eigenvectors, $R^n$ spaces, etc.)
  2. Directional Derivatives (for gradient descent)
  3. Infinite Sequences and Series
  4. Partial Derivatives (you need to know single-variable derivatives before moving on to partial derivatives)
  5. Vector Calculus
  6. Jacobian
  7. Of course, all of this requires a deep understanding of integrals and derivatives; don't forget the core calculus.
  8. None of this can be learned without knowing algebra, matrices, geometry, trigonometry and mathematical logic (elementary subjects).

I can share my experience learning from James Stewart's Calculus (7th edition) and give a summary of where the topics are:

  • Multivariable Calculus 6 Chapters about that.
  • Directional Derivatives Section 14.6
  • Infinite Sequences and Series Chapter 11
  • Vector Calculus Chapter 16
  • Jacobian Transformation ( 15.10 Change of Variables in Multiple Integrals)
  • Integrals and derivatives (needed throughout): 6 chapters about them.

As a reference this is the index:

I am looking for more books about advanced calculus, especially with a focus on multidimensional calculus or applied math for artificial intelligence; I have found more books with a statistics approach than with a calculus/math approach.

",30751,,30751,,8/24/2021 9:04,8/24/2021 9:04,,,,1,,,,CC BY-SA 4.0 30341,1,30392,,8/24/2021 9:55,,3,2136,"

I was hoping someone could explain to me why, in the transformer model from the "Attention is all you need" paper, there is no activation applied after the multi-head attention layer or after the residual connections. It seems to me that there are multiple linear layers in a row, and I have always been under the impression that you should have an activation between linear layers.

For instance when I look at the different flavors of resnet they always apply some sort of non linearity following a linear layer. For instance a residual block might look something like...

Input -> Conv -> BN -> Relu -> Conv -> (+ Input) -> BN -> Relu

or in the case of pre-activation...

Input -> BN -> Relu -> Conv -> BN -> Relu -> Conv -> (+ Input)

In all the resnet flavors I have seen, they never allow two linear layers to be connected without a relu in-between.

However in the the transformer...

Input -> Multihead-Attn -> Add/Norm -> Feed Forward(Dense Layer -> Relu -> Dense Layer) -> Add/Norm

In the multi-head attention layer it performs the attention mechanism and then applies a fully connected layer to project back to the dimension of its input. However, there is no nonlinearity between that and the feed-forward network (except for maybe the softmax used as part of the attention). A model like this would make more sense to me...

Input -> Multihead-Attn -> Add/Norm -> Relu -> Feed Forward(Dense Layer -> Relu -> Dense Layer) -> Add/Norm -> Relu

or something like the pre-activated resnet...

Input -> Relu -> Multihead-Attn -> Add/Norm -> Input2 -> Relu -> Feed Forward(Dense Layer -> Relu -> Dense Layer) -> Add/Norm(Input2)

Can anyone explain why the transformer is the way it is?

I have asked a similar question when I was looking at the architecture of wavenet on another forum but I never really got a clear answer. In that case it did not make sense to me again why there was no activation applied to the residual connections. (https://www.reddit.com/r/MachineLearning/comments/njbjfb/d_is_there_a_point_to_having_layers_with_just_a/)

",49400,,,,,5/9/2022 11:11,Why does a transformer not use an activation function following the multi-head attention layer?,,1,1,,,,CC BY-SA 4.0 30342,1,30343,,8/24/2021 10:59,,-1,37,"

Unsupervised learning using neural networks is clearly machine learning since it is utilising neural nets.

However, some algorithms, k-means clustering, for example, are considered unsupervised learning, while they look like just regular (non-ML) algorithms.

What should be the borderline (criteria) to differentiate between unsupervised learning and a non-ML algorithm?

",2844,,2444,,8/24/2021 11:36,8/24/2021 11:36,What is the borderline between unsupervised learning and regular algorithms?,,1,0,,,,CC BY-SA 4.0 30343,2,,30342,8/24/2021 11:32,,2,,"

Any algorithm that uses data (in some form) to improve some performance measure (aka objective function), or to find some function, can be considered a machine learning algorithm. See this answer for more complete definitions of ML.

k-means does that. It uses the data to find some division of the data itself into groups, in order to maximize some objective function. So, k-means is a machine learning algorithm.
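To make that concrete, here is a minimal scikit-learn example: the centroids are parameters fit to the data, and the inertia is the objective being minimized, which is exactly the "learning from data" part:

import numpy as np
from sklearn.cluster import KMeans

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])  # two blobs
kmeans = KMeans(n_clusters=2, n_init=10).fit(X)   # centroids are learned from X
print(kmeans.cluster_centers_)                    # parameters found by the algorithm
print(kmeans.inertia_)                            # the objective it minimizes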

The use of neural networks is not a requirement for something to be called a machine learning approach. In fact, there are many other machine learning approaches/algorithms that do not use them, such as tabular Q-learning, support vector machines or hidden Markov models.

",2444,,,,,8/24/2021 11:32,,,,3,,,,CC BY-SA 4.0 30344,2,,30335,8/24/2021 13:03,,0,,"

My two cents on this topic:

"After all, "color" is just another statistic", I think this is the simple (and correct) answer to the question. To go a bit deeper, you can check this paper, which shows how minimizing a loss based on the Gram matrix is mathematically equivalent to minimizing the Maximum Mean Discrepancy between the inputs and targets distribution. The two distributions inevitably contains information about colors, so while disentangle spatial features is rather simple (you could simply show one pixel at the time instead of an image), disentangling colors is much more tricky, cause it's an intrinsic characteristic of each point.

A last remark from my side is that the main problem when working with style transfer is precisely that "style" means everything. This is not a problem for papers that simply try to achieve it per se, i.e. without a real use case in mind, but it becomes fundamental in real applications. A concrete example of this is super-resolution. Many papers try to achieve it with style transfer, coupling low-resolution and high-resolution images. Ideally the features you would like to transfer are enhanced sharpness and maybe texture injection for detail generation. The problem is that along with them there are always side features that hinder the quality of the resulting image, among which noise specific to the target domain, and also colors.

",34098,,,,,8/24/2021 13:03,,,,0,,,,CC BY-SA 4.0 30346,1,,,8/24/2021 14:43,,1,123,"

I am working with a dataset with about 400 features, all binary (1 or 0). What approach would you recommend? Data set is about 500k records.

",49410,,49188,,8/24/2021 23:10,1/22/2022 1:01,Best algorithms/approaches for data sets of binary (1/0) features,,1,6,,,,CC BY-SA 4.0 30347,1,,,8/24/2021 16:33,,2,66,"

I am working on modeling a transportation problem as an MDP. Multiple trucks move material from one node to various other nodes in a network. However, the time it takes a truck to travel between any 2 nodes is different based on distance, and decisions are made when a truck arrives at a node. There lies the problem. Is it possible to have an MDP where the length of time between decision epochs is not uniform?

The most similar MDP formulation I could find was the Semi-Markov Decision process, but that uses a random length epoch.

",49413,,2444,,8/26/2021 11:05,1/23/2022 14:05,Markov Decision Processes with variable epoch lengths,,1,1,0,,,CC BY-SA 4.0 30348,1,,,8/24/2021 16:54,,2,100,"

I have some data with soft labels and I am trying to figure out the best approach to solve the problem with machine learning (since regular classification with hard labels is off the table). However, whenever I look up "soft label" materials, I keep getting pointed to label smoothing. Is this the main/only technique to deal with soft labels?

",49395,,2444,,8/24/2021 22:59,8/24/2021 22:59,Is soft labeling the same thing as label smoothing?,,0,1,,,,CC BY-SA 4.0 30350,1,30358,,8/24/2021 17:27,,1,310,"

Suppose we have a vacuum cleaner operating in a $1 \times 2$ rectangle consisting of locations $A$ and $B$. The cleaner's actions are Suck, Left, and Right; it can't go out of the rectangle, and the squares are either empty or dirty. I know this is an amateur question, but how does randomization (for instance, flipping a fair coin) avoid entering the infinite loop? Aren't we entering such a loop if the result of the toss is heads on odd tosses and tails on even tosses?

This is the text from the book "Artificial Intelligence: A Modern Approach" by Russell and Norvig

We can see a similar problem arising in the vacuum world. Suppose that a simple reflex vacuum agent is deprived of its location sensor and has only a dirt sensor. Such an agent has just two possible percepts: [Dirty] and [Clean]. It can Suck in response to [Dirty]; what should it do in response to [Clean]? Moving Left fails (forever) if it happens to start in square A, and moving Right fails (forever) if it happens to start in square B. Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments. Escape from infinite loops is possible if the agent can randomize its actions. For example, if the vacuum agent perceives [Clean], it might flip a coin to choose between Right and Left. It is easy to show that the agent will reach the other square in an average of two steps. Then, if that square is dirty, the agent will clean it and the task will be complete. Hence, a randomized simple reflex agent might outperform a deterministic simple reflex agent.

And this is the agent program from the same source:

function REFLEX-VACUUM-AGENT([location,status]) returns an action
 if status = Dirty then return Suck
 else if location = A then return Right
 else if location = B then return Left
",48908,,2444,,8/25/2021 13:56,8/25/2021 13:56,How does randomization avoid entering infinite loops in the vacuum cleaner problem?,,1,0,,,,CC BY-SA 4.0 30351,2,,30346,8/24/2021 17:55,,1,,"

Most standard algorithms will work well on binary data, like:

  • Decision trees (and random forest)
  • Nearest Neighbors
  • Neural Networks
  • etc

But your choice depends on many other things, like:

  • What is the expected output?
    • Are you doing classification?
    • Regression?
    • Is it deterministic (same features should always give the same outcome) or stochastic (random factor).
  • What is the nature of the database and relationship between the features?
    1. A 20x20 black-white image.
    2. A phrase embedded as 20 sequences of 20-size-token.
    3. A 400 questions true/false exam.
    • They can all have the same shape, but are very different in nature and would perform better with different algorithms.
    • How disperse / smooth is your data?
    • Do all 400 features have the same importance?
    • How independent are they?
  • How complex is the problem, really?
  • How much performance do you really need?
  • How much work and tuning are you willing to put on this?
",49188,,,,,8/24/2021 17:55,,,,5,,,,CC BY-SA 4.0 30352,1,30457,,8/24/2021 18:44,,1,169,"

Many machine translation metrics such as BLEU or ROUGE are used to evaluate sequence to sequence models where, usually, the sequences are pieces of natural language.

Is it possible to use these metrics when the dataset is not constituted of natural language sequences? For instance, if the sequences are source code (in some programming language), does it still make sense to use BLEU or ROUGE? How "good" are these metrics in general?
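For concreteness, here is the kind of thing I have in mind — computing BLEU over tokenized source code with NLTK (the whitespace tokenization is deliberately simplistic):

from nltk.translate.bleu_score import sentence_bleu

reference = "for i in range ( 10 ) : print ( i )".split()
candidate = "for j in range ( 10 ) : print ( j )".split()
score = sentence_bleu([reference], candidate)
print(score)  # high n-gram overlap, even though the variable name differs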

",49366,,2444,,8/24/2021 23:15,8/31/2021 6:48,Does it make sense to use BLEU or ROUGE for any machine translation task?,,1,0,,,,CC BY-SA 4.0 30353,1,30401,,8/24/2021 18:58,,1,98,"

I'm trying to build a DQN agent that outputs a set of the 10 best actions simultaneously (integer values from 1 to 100) per episode. The input is a float. The goal is to find the optimal combination of (10) actions per episode.

Currently, the setup has a single NN output the 10 actions with the highest Q-values per episode. But in the memory-replay process, each individual set (of 10 fixed actions obtained from the exploration phase) is being treated as a single action, because the target network also takes the 10-action output of the main NN. Hence I can see the agent repeatedly trying a certain set (with 10 fixed actions) in the replay/retrain part, whereas our goal is to find the optimal combination of 10 actions per episode, not the optimal set of fixed combinations. So, in essence, I would like the agent to pick out and mix up the actions from the sets with higher Q-values (known from the exploration phase) to form new optimal "sets" in the replay process.

I was thinking maybe instead of using a single NN with 10 outputs I could do 10 NNs with single outputs for each episode so that each action is treated separately. And I suppose I will have 10 q-networks and target networks as well, then I could combine the results by the end of each episodes. But, I am not sure if that is necessarily the best way to fix the problem of having repetitive sets of fixed action in the replay process.

Alternatively, I think the problem could be treated as a multi-armed bandit problem, except each arm here has "sub-arms" too so to speak, but that could require some changes to the custom environment I am working with and I don't want to touch that unless necessary.

Maybe there is a clever manipulation within the retrain process given my current setup that I am not seeing. Here is a snippet of the code for some more clarity.

# imports assumed by this snippet
import random
from collections import deque

import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, InputLayer

class DQNAgent():

    def __init__(self,optimizer):
        # Initialize atributes
        self._state_size = 1
        self._action_size = 76
        self._optimizer = optimizer
        
        self.experience_replay = deque(maxlen=2000)
        
        # # Initialize discount and exploration rate
        # self.gamma = 0.6
        # self.epsilon = 0.5
        self.gamma = 0.95
        self.epsilon = 1.0
        self.epsilon_min = 0.01
        self.epsilon_decay = 0.95
        self.learning_rate = 0.01
        
        # Build networks
        self.q_network = self._build_compile_model()
        self.target_network = self._build_compile_model()
        
    
    def store(self, state, action, reward, next_state, terminated):
        self.experience_replay.append((state, action, reward, next_state, terminated))
        
    def _build_compile_model(self):
        
        model = Sequential()
        model.add(InputLayer(input_shape=(self._state_size,)))
        model.add(Dense(100, activation='relu'))
        model.add(Dense(100, activation='relu'))
        model.add(Dense(self._action_size, activation='linear'))
        model.compile(loss='mse', optimizer=self._optimizer)
        return model
    
    def alighn_target_model(self):
        self.target_network.set_weights(self.q_network.get_weights())

    def retrain(self, batch_size):
        if len(self.experience_replay) < batch_size:
            return
        minibatch = random.sample(self.experience_replay, batch_size)
    
        for state, action, reward, next_state, terminated in minibatch:
            
            target = self.q_network.predict(np.reshape(np.array(state), (-1,1)))
            print('target size :', np.shape(target))
            
            
            if terminated:
                target[0][action] = reward
            else:
                t = self.target_network.predict(np.reshape(np.array(next_state), (-1,1)))
                target[0][action] = reward + self.gamma * np.amax(t)
    
            self.q_network.fit(np.reshape(np.array(state), (-1,1)), target, epochs=1, verbose=0)
        
    
    def act(self,state):
        self.epsilon *= self.epsilon_decay
        self.epsilon = max(self.epsilon_min, self.epsilon)
        action_space = [ 1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17,
       18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
       35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,
       52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68,
       69, 70, 71, 72, 73, 74, 75, 76] #all 76 available nodes
        if np.random.rand() <= self.epsilon:
            return np.array(random.sample(action_space,10))-1 #-1 to match control's index

        q_values = self.q_network.predict(np.reshape(np.array(state), (-1,1)))
        print("q_vals shape",np.shape(q_values))
        print('q_vals type',type(q_values))
  
        top_actions_idx = q_values[0].argsort()[-10:][::-1]
        return top_actions_idx
",49418,,2444,,8/26/2021 22:06,8/27/2021 10:27,Implementing Multiple NNs in one DQN model?,,1,0,,,,CC BY-SA 4.0 30355,1,,,8/25/2021 0:28,,0,91,"

Any parametric model may have parameters as well as hyperparameters. The learning algorithm deals with the parameters, while hyperparameters must be dealt with outside the learning algorithm. Consider the following paragraph from Chapter 5: Machine Learning Basics of the book Deep Learning (by Aaron Courville et al.):

Most machine learning algorithms have settings called hyperparameters, which must be determined outside the learning algorithm itself; we discuss how to set these using additional data.

My doubt is about the usage of the word 'additional' in the paragraph. AFAIK, a small part of the dataset under consideration is used for validation and hence for determining the hyperparameters; this is called validation data. It is a part of the same dataset as the training and testing data. You can check section 5.3 for more details.

If yes, why is the word 'additional' used? Is it true that the data for setting hyperparameters is taken from outside the underlying dataset?

",18758,,18758,,8/26/2021 9:45,1/20/2022 20:43,Why data required for hyperparameter tuning is considered as an additional data?,,2,0,,,,CC BY-SA 4.0 30356,1,,,8/25/2021 0:51,,0,73,"

Consider the following excerpt from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)

Machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased emphasis on proving confidence intervals around these functions;

This excerpt says that machine learning focuses on estimating complicated functions, but not on proving confidence intervals around those functions. What is meant by or definition for a confidence interval around a complicated function mentioned here?

",18758,,2444,,8/25/2021 13:04,12/13/2021 9:24,"What is the definition of ""confidence interval"" around a (complicated) function?",,2,0,,,,CC BY-SA 4.0 30357,2,,30356,8/25/2021 6:09,,0,,"

The confidence interval is a way of quantifying expected error in predictions.

For example, let's say you are trying to model your dart-throwing scoring metrics. If you play many games and assume the score obtained follows a normal distribution, you can do a statistical calculation known as maximum likelihood estimation, which provides you with the parameters of the normal distribution that best fit the data. This allows you to quantify the error more precisely, so you can say things like: "95% of my games will be at least 200 points", or "having a score this good/bad is a 1 in 1000 chance."
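A quick sketch of that dart example with NumPy (the numbers are made up; the interval below describes the spread of scores, in the spirit of the "95% of my games" statement):

import numpy as np

scores = np.random.normal(loc=250, scale=30, size=200)   # 200 games

mu = scores.mean()            # maximum likelihood estimate of the mean
sigma = scores.std()          # MLE of the standard deviation

# a point prediction just reports mu; a 95% interval also quantifies the spread
low, high = mu - 1.96 * sigma, mu + 1.96 * sigma
print(f"best guess: {mu:.1f}, 95% of games in roughly [{low:.1f}, {high:.1f}]")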

Much of machine learning is about outputting best predictions given some input, and not about quantifying error. So in the above example, we might just output a single value that best captures the underlying statistic, which in the above case, would be the mean of all scores, which is confirmed by the maximum likelihood estimate approach in statistics.

A quick online search returns the definition: "the probability that a population parameter will fall between a set of values for a certain proportion of times.", which seems to agree with the above.

Bar graphs with confidence intervals would be a very clear visual intuition of the idea.

",6779,,6779,,8/25/2021 14:45,8/25/2021 14:45,,,,0,,,,CC BY-SA 4.0 30358,2,,30350,8/25/2021 7:06,,1,,"

how does randomization (for instance flipping a fair coin) avoid entering the infinite loop?

The coin is flipped on each occasion that a decision is required (as opposed to once in order to define the actions that will be taken from that point onwards). Which means the agent can make a different decision each time it encounters the same (faulty detected) state.

This prevents tight loops in a single location that the quote is discussing. The "infinite loop" that is being broken is repeatedly and deterministically making a movement decision that results in no effective movement, thus leaving the agent in exactly the same state as before for all remaining time steps.

Aren't we entering such a loop If the result of the toss is heads in odd tosses and tails in even tosses?

It is not important that the agent might repeat ineffective movements multiple times. Eventually the agent will make a correct decision - this is very likely after only a few failed attempts at most. It is guaranteed in the limit of infinite time steps, unlike the situation where no randomness is added.
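A tiny simulation of the coin-flipping agent makes this concrete (the environment is reduced to "did the coin pick the useful direction this step"; the other choice just bumps into the wall):

import random

def steps_until_other_square(trials=100_000):
    """Average number of moves before a coin-flipping agent reaches the other square."""
    total = 0
    for _ in range(trials):
        steps = 0
        while True:
            steps += 1
            if random.random() < 0.5:   # the coin picked the useful direction
                break
        total += steps
    return total / trials

print(steps_until_other_square())  # ~2.0, matching "an average of two steps"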

",1847,,,,,8/25/2021 7:06,,,,0,,,,CC BY-SA 4.0 30359,2,,30355,8/25/2021 7:10,,1,,"

I think it just tries to emphasize that you need three, non-intersecting, chunks of datasets: training, validation, and test. So, you need some data in addition to the training data to tune hyperparameters. You can simply create the train/test/validation splits by sampling without replacement from an initial dataset. You don't need anything additional than this initial dataset.
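A minimal sketch of such a three-way split (two calls to scikit-learn's train_test_split; the 60/20/20 proportions and the random data are just an example):

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.rand(1000, 5), np.random.randint(0, 2, 1000)

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
# train: fit parameters; validation: tune hyperparameters; test: final evaluation only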

",32621,,32621,,1/20/2022 20:43,1/20/2022 20:43,,,,1,,,,CC BY-SA 4.0 30360,2,,30315,8/25/2021 7:32,,2,,"

Working with $v_\pi(s)$ is sometimes simpler because you don't need to worry about which action to take. When you use $q_\pi(s, a)$, the action $a$ need not come from the policy. This applies in general, but to your comment of comparing $q_\pi(s, \pi'(s))$ with $q_\pi(s, \pi(s))$, if you look at the second and remember the definition of the value functions as expectations:

$$v_\pi(s_t) = \mathbb{E}\big[ G_t | S_t = s_t \big]$$

and

$$q_\pi(s_t, a_t) = \mathbb{E}\big[ G_t | S_t = s_t , A_t = a_t \big]$$

, then you can see that $q_\pi(s_t, \pi(s_t)) \equiv v_\pi(s_t)$. The expectation is over all trajectories. $q_\pi$ allows you to deviate from the trajectories prescribed by a (deterministic) policy, but only for the next timestep; after that, you follow the policy.

We need to find a way to evaluate a change to the policy to decide whether that change improved it or not. We could change the policy so that in a certain state $s$ it takes a different action $a'$, but as it is a deterministic policy, it will always choose that action in the future. Now, you have to consider all possible trajectories that go through a state, including those that go through that state multiple times, choosing the action $a'$ every time. It's quite messy.

Instead, you can say that you don't want to change your policy. If you happen to revisit your state in the future as part of your trajectory, you will follow the policy, but just this once you take a different action in the state. If you have an optimal policy, deviating from it will give you worse performance or another optimal policy. However, you cannot expect to get better than the optimal policy. If you do find that deviating from the optimal policy gives you better performance, then the policy wasn't optimal in the first place. You can incorporate that deviation from the policy to get an improved policy.

A similar scenario in driving: Let's pretend that there is no one else on the roads, so no traffic (deterministic policy). You know the best route from home to work, and you take that route every day. This is your policy. You believe that this is the best route to take, so there's no point in changing. One morning, you decide to take a turn, not in your policy. You don't expect to change your preferred route in general, but you do it just this once. You find that taking that turn gets you to work sooner, so your previous route wasn't indeed the best. You just found a better one. Again, this is assuming there is no traffic, i.e. deterministic policy. Even with a stochastic policy, it isn't too difficult to extend this idea. Instead of immediately updating your best route, you collect more observations (more trips to work) to see if it is better in general. Then you can also consider changes to the policy that are not deterministic, e.g. taking this turn only 30% of the times.

",,user42664,,,,8/25/2021 7:32,,,,0,,,,CC BY-SA 4.0 30362,2,,30328,8/25/2021 8:50,,1,,"

There are two policies in the derivation, $\pi$ and $\pi'$. When you have an expectation in the form

$$\mathbb{E}_{\pi_A}[R_{t+1} + \gamma v_{\pi_B}(S_{t+1}) | S_t = s]$$

, it means that from your current state $s$, you take an action according to $\pi_A$. Let's say this action is $a$; then the environment will transition to the next state, represented by the random variable $S_{t+1}$, and also gives you a reward $R_{t+1}$, which can depend on the current state $s$, the action taken $a$, and the next state $S_{t+1}$. That's where your reliance on $\pi_A$ ends, because $v_{\pi_B}$ is an expectation itself:

$$v_{\pi_B}(s) = \mathbb{E}_{\pi_B}[G_{t+1}|S_{t} = s]$$

So going from line 2 to line 3 in your derivation just means that you are taking the next action according to $\pi'$, but nothing inside the expectation changes; it still uses the same policy $\pi$. The next state, $S_{t+1}$, comes from the environment and depends on the action chosen by $\pi'$. The same is true for the observed reward $R_{t+1}$, but nothing inside $v_\pi$ needs the policy $\pi'$.

The proof then replaces $v_\pi(S_{t+1})$ with $q_\pi(S_{t+1}, \pi'(S_{t+1}))$ but the equality changes to a less than or equal to sign because the whole discussion in the book starts with the condition $q_\pi(s, \pi'(s)) \ge v_\pi(s)$.

It does not matter in how many states the policies differ. If the two policies are the same in states $s_i$, then $q_\pi(s_i, \pi'(s_i)) = v_\pi(s_i)$. If policy $\pi'$ is better than policy $\pi$ in states $s_j$, then $q_\pi(s_j, \pi'(s_j)) > v_\pi(s_j)$. The initial condition $q_\pi(s, \pi'(s)) \ge v_\pi(s)$ combines these two statements. If you know in which states the policy $\pi'$ is better, then you could think of replacing the $\le$ signs with $<$ signs in the derivation, and the rest of the states with $=$. You can't really do that because these are random variables, so even if you knew in which states your new policy performs better, you often don't know the environment dynamics that determine the next state, $S_{t+1}$.

",,user42664,,user42664,8/25/2021 9:09,8/25/2021 9:09,,,,0,,,,CC BY-SA 4.0 30364,1,,,8/25/2021 11:53,,1,134,"

While training a standard VAE, we assume that the prior on the latent variable Z is the standard gaussian and we use KL divergence to push the posterior as close as possible to the standard gaussian. Why not assume any other gaussian as the prior? What are the intuitive reasons for this?

",35576,,,,,10/4/2021 11:50,Why is the prior on the latent variable standard gaussian in VAE?,,1,1,,,,CC BY-SA 4.0 30365,1,,,8/25/2021 12:55,,0,30,"

In this paper, the authors suggest using the following loss instead of the traditional ELBO in order to train what basically is a Variational Autoencoder with a Gaussian Mixture Model instead of a single, normal distribution: $$ \mathcal{L}_{SIWAE}^T(\phi)=\mathbb{E}_{\{z_{kt}\sim q_{k,\phi}(z|x)\}_{k=1,t=1}^{K,T}}\left[\log\frac{1}{T}\sum_{t=1}^T\sum_{k=1}^K\alpha_{k,\phi}(x)\frac{p(x|z_{k,t})r(z_{kt})}{q_\phi(z_{kt}|x)}\right] $$ They also provide the following code which is supposed to be a tensorflow probability implementation:

def siwae(prior, likelihood, posterior, x, T):
  q = posterior(x)
  z = q.components_dist.sample(T)
  z = tf.transpose (z, perm=[2, 0, 1, 3])
  loss_n = tf.math.reduce_logsumexp(
  (−tf.math.log(T) + tf.math.log_softmax(mixture_dist.logits)[:, None, :]
  + prior.log_prior(z) + likelihood(z).log_prob(x) − q.log_prob(z)), axis=[0, 1])
  return tf.math.reduce_mean(loss_n, axis=0)

However, it seems like this doesn't work at all so as someone with nearly no tensorflow knowledge I came up with the following:

def siwae(prior, likelihood, posterior, x, T):
  q = posterior(x) # distribution over variables of shape (batch_size, 2)
  z = q.components_distribution.sample(T)
  z = tf.transpose(z, perm=[2, 0, 1, 3]) # shape (K, T, batch_size, encoded_size)
  l1 = -tf.math.log(float(T)) # shape: (), log (1/T)
  l2 = tf.math.log_softmax(tf.transpose(q.mixture_distribution.logits))[:, None , :] # shape (K, 1, batch_size), alpha
  l3 = prior.log_prob(z) # shape (K, T, batch_size), r(z)
  l4 = likelihood(tf.reshape(z, (K*T*x.shape[0], encoded_size)))
  l4 = l4.log_prob(tf.repeat(x, repeats=K*T, axis=0)) # shape (K*T*batch_size, )
  l4 = tf.reshape(l4, (K, T, x.shape[0])) # shape (K, T, batch_size), p(x|z)
  l5 = -q.log_prob(z) # shape (K, T, batch_size), q(z|x)
  loss_n = tf.math.reduce_logsumexp(l1 + l2 + l3 + l4 + l5, axis=[0, 1])
  return tf.math.reduce_mean(loss_n, axis=0)

There are no errors when I try to use this as

siwae(prior, decoder, encoder, x_test[:100, ...], T)

but after a few training steps I get only NaNs. I really don't know if this is due to a wrong implementation or wrong usage of the loss, especially as I don't have much experience with TensorFlow. So any help would be greatly appreciated. For a full, minimal example I created this colab.

",49432,,49432,,8/25/2021 14:37,8/25/2021 14:37,Tensorflow Probability Implementation of Automatic Differentiation Variational Inference with Mixtures,,0,3,,,,CC BY-SA 4.0 30366,2,,30356,8/25/2021 13:40,,1,,"

Off the top of my head, I don't know the very specific definition of confidence interval (or whether it's only defined for the parameters of a model), as I am not a statistician. In any case, intuitively, a confidence interval is an interval (or range) of values where some true value of something (e.g. your parameter) lies. (Confidence intervals are also very related to hypothesis testing, but I will not dwell on this topic here). You can find the specific definition of a confidence interval in any statistics book (e.g. this one).

Having said that, I interpret that statement as saying that most machine learning approaches do not take into account any type of uncertainty about either the true value of the parameters (e.g. of the neural networks) or the predictions or do not deal with hypothesis testing (recall above that I said that hypothesis testing is closely related to confidence intervals). Typically, in machine learning, you will find a lot of approaches that just provide you with a point estimate for the parameters (i.e. you estimate a single number for each parameter). Consequently, your final neural network (or model) just represents a single function, but what if that function is not really correct (which is probably the case given the typically limited amount of data)? In this case, you cannot say anything about your uncertainty of the true target function that you were trying to approximate or about the prediction for a new input.

Here's where probabilistic/Bayesian machine learning (PML) comes into play. Probabilistic machine learning is a relatively new subfield of machine learning that deals with uncertainty quantification/estimation or that uses tools from Bayesian statistics, like the Bayes theorem. If (deep) neural networks are involved, it is also called Bayesian deep learning (BDL).

For a gentle overview of PML, you can read this paper. If you want to know more about Bayesian neural networks (i.e. neural networks that learn a probability distribution over functions that are consistent with the data), you can read this paper.

So, nowadays, I wouldn't say that that statement is "true", although it was probably true when that book was published. More and more, people in the machine learning community do research in PML and, in particular, BDL, because, if you want to adopt neural networks in areas like healthcare, you need to provide the doctors with some kind of uncertainty quantification of the predictions. Let's say that that a doctor needs to take an action, such as giving some kind of medicine to a patient based on the condition of the patient (e.g. temperature). The doctor doesn't just want to know "yes, give the medicine to the patient". The doctor wants to have an idea of how confident or uncertain the model is about its prediction. This is also where another subfield of AI comes into play, i.e. explainable AI, but I will not dwell more on this topic here.

",2444,,2444,,12/13/2021 9:24,12/13/2021 9:24,,,,0,,,,CC BY-SA 4.0 30367,1,,,8/25/2021 14:42,,3,2317,"

The book Artificial Intelligence: A Modern Approach by Russell and Norvig has two editions: global and US. It looks like these two are generally the same, but have some differences in the order of the chapters and in the content. Is this correct?

",48908,,2444,,8/30/2021 11:19,6/21/2022 23:11,What is the difference between the US and global edition of the AIMA book by Russell and Norvig?,,2,1,,,,CC BY-SA 4.0 30368,2,,11226,8/25/2021 15:33,,1,,"

It's hard to say because Euclidean space is defined with respect to some kind of metric, so without any clearer exposition on the nature of the data/problem, the phrase itself may or may not be clear.

A metric $d: A \times A \rightarrow \mathbb{R}$ is a function that defines distance between any two points in the space with respect to axioms that 1. two points have zero distance iff they are the same: $d(a,b) = 0 \Leftrightarrow a = b$. 2. symmetric: $d(a,b) = d(b,a)$. 3. triangle inequality: $d(a,b) + d(b,c) ≥ d(a,c)$.

A Euclidean metric is a metric that also obeys the Pythagorean theorem, or at least: the distance between some point $(x,y) \in \mathbb{R}^2$ and the origin is equal to $\sqrt{x^2 + y^2}$. You will find that all Euclidean spaces are isomorphic to $\mathbb{R}^n$, meaning that the two notions are in some sense identical.

Any graph/data whose underlying data does not "naturally come" from $\mathbb{R}^n$, or a graph that does not admit a natural embedding in $\mathbb{R}^n$, might not be Euclidean, since $\mathbb{R}^n$ is isomorphic to any Euclidean space.

",6779,,,,,8/25/2021 15:33,,,,0,,,,CC BY-SA 4.0 30369,2,,11166,8/25/2021 15:45,,0,,"

I didn't read the paper in depth, but one example of where assumptions of Euclidean space are made in the design of the networks are with ConvNets in image processing.

Specifically, Euclidean spaces are translationally invariant, meaning that $d(a,b) = d(a+c,b+c)$. Each convolution layer iterates over the image with a certain amount of stride, which guarantees a certain amount of translational invariance in exchange for a smaller number of network parameters.
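A quick numerical illustration of that property:

import numpy as np

a, b = np.array([1.0, 2.0]), np.array([4.0, 6.0])
c = np.array([10.0, -3.0])                      # an arbitrary translation

def d(u, v):
    return np.linalg.norm(u - v)                # Euclidean metric

print(d(a, b), d(a + c, b + c))                 # both 5.0: the distance is unchanged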

ConvNets would not naively work for a dataset which comes from, say, a graph, because its not clear how to stride over it, as a possible example.

",6779,,,,,8/25/2021 15:45,,,,1,,,,CC BY-SA 4.0 30370,2,,30355,8/25/2021 16:04,,0,,"

Surely you understand the concept of overfitting: if you are optimizing over hyperparameters under the same underlying data, then technically you are optimizing a new dataset $(\theta, D)$, where $\theta$ are hyperparameters, and $D$ is the original dataset.

This is not good because, in theory, $\theta$ should be independent of the specific underlying $D$ in question, so anything learned is likely to be overfitted to that specific $D$. We mitigate this by requiring additional data for training hyperparameters, such that anything learned about $\theta$ is about the underlying data-generating statistic and not just any specific $D$.

",6779,,,,,8/25/2021 16:04,,,,0,,,,CC BY-SA 4.0 30374,1,35279,,8/25/2021 22:46,,2,63,"

In order to train a face recognition system you need to have access to a large database with thousands of photos containing different faces. Companies like facebook and amazon have these databases but most average people do not.

If you don't have access to a sufficiently large dataset with faces, could you use computer-generated random faces instead? I'm asking this because computers are becoming better and better at rendering hyper-realistic faces. An example is the meetmike digital human showcase video. Another example is the unreal engine project spotlight video. Lastly, you also have websites like https://thispersondoesnotexist.com/ that can generate random faces.

What if you generate a couple of photos of the same computer-generated face and make sure that each photo shows the face in a different setting or from a different angle? Could you then use such photos to train a facial recognition system that can accurately recognize real people?

",49442,,,,,4/20/2022 12:33,Can a face recognition system be trained using only computer generated hyper realistic faces?,,1,1,,,,CC BY-SA 4.0 30375,1,,,8/26/2021 2:16,,1,837,"

I've learnt that idea that the residual block was invented to solve the vanishing gradient problem due to the deep layer to layer multiplication.

I understand that, for example, if I have 10 layers and I add another 5 layers, the output of the 10th layer will 'skip' the 5 layers. However, the output of the 10th layer will also pass through the 5 layers. Just before the 15th layer's ReLU, the output from the 10th layer is element-wise summed with the output of the 15th layer, just prior to the final ReLU. I have some confusion with this.

  1. Identity mapping/function. I keep reading that it creates an identity function or learns an identity function. What exactly is this? Isn't it just $F(x)$ = the 5 added layers and $x$ = the output of the 10th layer, and thus it is just $F(x) + x$? (See the sketch after this list.)

  2. By summing the output of the 10th layer with the output of the 15th layer, will this not affect what was learnt in the 5 layers, i.e. from the 11th to the 15th layer?

  3. I believe it also helps with backpropagation, so that it doesn't have to update all the weights layer by layer and can skip back to shallow layers. Therefore, are the weights inside the residual block, i.e. layers 11-15, not updated? If not, then what is the point of the 11th-15th layers if they are not designed to "do anything"?
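To make point 1 concrete, here is a minimal residual block sketch in Keras (my understanding of the general pattern, not a specific ResNet variant): the block only has to learn the residual F(x), and if the added layers learn to output zero, the block reduces to the identity y = x.

from tensorflow import keras
from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = x                                          # the skip connection
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])                       # F(x) + x
    return layers.ReLU()(y)                               # if F(x) is ~0, the output is ~x

inputs = keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, 16)
model = keras.Model(inputs, outputs)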

",49443,,,,,4/20/2022 3:46,Residual Blocks - why do they work?,,2,6,,,,CC BY-SA 4.0 30376,1,30380,,8/26/2021 4:38,,3,266,"

I understand that a stochastic environment is one that does not always lead you to the desired state when you take a particular action $a$ (but the probability of changing to an undesired state is fixed, right?).

For example, the frozen lake environment is a stochastic environment. Sometimes you want to move in one direction and the agent slips and moves in another direction. Unlike an environment with multiple agents that the probability of the actions of the other agents is changing because they keep learning (a non-stationary environment).

Why is it difficult to learn in a stochastic environment, if, for example, Q-learning can solve the frozen lake environment? In what cases would it be difficult to learn in a stochastic environment?

I have found some articles that address that issue, but I don't understand why it would be difficult if Q-learning can solve it (for discrete states/actions).

",49444,,2444,,1/7/2022 18:07,1/7/2022 18:07,Is it really hard to learn in a stochastic environment?,,1,0,,,,CC BY-SA 4.0 30378,1,30381,,8/26/2021 9:55,,1,185,"

Consider the following statements from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)

Machine learning tasks are usually described in terms of how the machine learning system should process an example. An example is a collection of features that have been quantitatively measured from some object or event that we want the machine learning system to process. We typically represent an example as a vector $\mathbf{x} \in \mathbb{R}^n$ where each entry $x_i$ of the vector is another feature. For example, the features of an image are usually the values of the pixels in the image.

Here, an example is described as a collection of features, which are real numbers. In probability theory, a random variable is also a real-valued function.

Can I always interpret features in machine learning as random variables or are there any exceptions for this interpretation?

",18758,,2444,,8/26/2021 10:49,1/14/2022 23:56,Can I always interpret features as random variables in machine learning safely?,,1,0,,,,CC BY-SA 4.0 30379,1,30404,,8/26/2021 10:02,,0,1242,"

I created my first transformer model, after having worked so far with LSTMs. I created it for multivariate time series predictions - I have 10 different meteorological features (temperature, humidity, wind speed, pollution concentration, among others) and with them I am trying to predict time sequences (24 consecutive values/hours) of air pollution. So my input has the shape X.shape = (75575, 168, 10) - 75575 time sequences, where each sequence contains 168 hourly entries/vectors and each vector contains 10 meteo features. My output has the shape y.shape = (75575, 24) - 75575 sequences, each containing 24 consecutive hourly values of the air pollution concentration.

I took as a model an example from the official Keras site. It is created for classification problems; I only took out the softmax activation, and in the last dense layer I set the number of neurons to 24 and hoped it would work. It runs and trains, but it makes worse predictions than the LSTMs I have used on the same problem and, more importantly, it is very slow - 4 min/epoch. Below I attach the model and I would like to know:

I) Have I done something wrong in the model? can the accuracy or speed be improved? Are there maybe some other parts of the code I need to change for it to work on regression, not classification problems?

II) Also, can a transformer at all work on multivariate problems of my kind (10 features input, 1 feature output) or do transformers only work on univariate problems? Tnx

from tensorflow import keras
from tensorflow.keras import layers

def build_transformer_model(input_shape, head_size, num_heads, ff_dim, num_transformer_blocks, mlp_units, dropout=0, mlp_dropout=0):

    inputs = keras.Input(shape=input_shape)
    x = inputs
    for _ in range(num_transformer_blocks):

        # Normalization and Attention
        x = layers.LayerNormalization(epsilon=1e-6)(x)
        x = layers.MultiHeadAttention(
            key_dim=head_size, num_heads=num_heads, dropout=dropout
        )(x, x)
        x = layers.Dropout(dropout)(x)
        res = x + inputs

        # Feed Forward Part
        x = layers.LayerNormalization(epsilon=1e-6)(res)
        x = layers.Conv1D(filters=ff_dim, kernel_size=1, activation="relu")(x)
        x = layers.Dropout(dropout)(x)
        x = layers.Conv1D(filters=inputs.shape[-1], kernel_size=1)(x)
        x = x + res

    x = layers.GlobalAveragePooling1D(data_format="channels_first")(x)
    for dim in mlp_units:
        x = layers.Dense(dim, activation="relu")(x)
        x = layers.Dropout(mlp_dropout)(x)
    x = layers.Dense(24)(x)
    return keras.Model(inputs, x)

model_tr = build_transformer_model(input_shape=(window_size, X_train.shape[2]), head_size=256, num_heads=4, ff_dim=4, num_transformer_blocks=4, mlp_units=[128], mlp_dropout=0.4, dropout=0.25)
model_tr.compile(loss="mse",optimizer='adam') 
m_tr_history = model_tr.fit(x=X_train, y=y_train, validation_split=0.25, batch_size=64, epochs=10, callbacks=[modelsave_cb])
",49450,,,,,12/26/2022 19:09,Transformer model is very slow and doesn't predict well,,1,3,,,,CC BY-SA 4.0 30380,2,,30376,8/26/2021 10:09,,2,,"

Being stochastic does not tell you whether the environment's reward distribution is stationary or not. It can be stationary, as in the case of FrozenLake. The paper you linked also mentions that other algorithms have already addressed the non-stationary case.

If you have a simple stationary stochastic environment, then you just need more sample trajectories to determine which action is better. If the environment is fully observable, then based on the estimated action values you can build a deterministic optimal policy.
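As a concrete illustration (my own addition, not part of the original answer), here is a minimal tabular Q-learning sketch; it assumes the classic Gym-style env.reset()/env.step() interface, and the stochastic transitions are simply averaged out over many sampled episodes:

import numpy as np

def q_learning(env, n_states, n_actions, episodes=10000, alpha=0.1, gamma=0.99, eps=0.1):
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy action selection
            action = np.random.randint(n_actions) if np.random.rand() < eps else Q[state].argmax()
            next_state, reward, done, _ = env.step(action)
            # stochastic outcomes are averaged out over many sampled transitions
            target = reward + gamma * Q[next_state].max() * (not done)
            Q[state, action] += alpha * (target - Q[state, action])
            state = next_state
    return Q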

",,user42664,,user42664,8/26/2021 15:17,8/26/2021 15:17,,,,1,,,,CC BY-SA 4.0 30381,2,,30378,8/26/2021 10:19,,2,,"

In general terms yes. Because what the ML algorithms do in general is to learn the hidden probability density function of the target examples (cats, dogs..). And that is done by learning the conditional probability function between inputs, $X$, and target outputs, $y$, for discriminative models or by learning the joint probability function for generative models.

More on discriminative VS generative models and their relation with probabilities in this nice article from where I took the picture.

It also helps to think that given a set of different examples (say the image of cats). What the ML algorithm really learns is the invariant features between all the examples, i.e. the cats. So the ML algorithm learns to discriminate all the variability (backgrounds, occluding objects) and learn what is invariant: the representation of a cat.

This representation, since images are pixels, can be described as a probability density function of values that would represent a cat. It would be like the ML algorithm said: "If those pixels values behave close to the probability density function I learned, then those pixels must represent a cat!"

",26882,,18758,,1/14/2022 23:56,1/14/2022 23:56,,,,2,,,,CC BY-SA 4.0 30383,2,,30347,8/26/2021 13:08,,1,,"

Timesteps in an MDP do not need to be evenly spaced, and they have no units, just an index. It is OK for the real-world clock time between time steps to vary as needed.

The MDP formulation assumes a "turn-based" process where the agent picks an action, the environment processes that action plus its own inherent rules, then the agent is contacted again when it is time to make a new decision.

This most common scenario assumes you always have adequate time to make a decision, and whenever a decision is needed, the timestep will be incremented, the agent presented with the reward and next state by the environment (or code that interfaces with the environment).

It looks like this would match your case, and that there is nothing to be concerned about.

There are real time systems where actions may be required reactively, and as fast and accurately as possible given sensor data collected at high frequency compared to the time it would take to analyse state and make a decision. There are some different approaches to this, and it is an area of active research, since the MDP formulation does not capture this. For instance the paper Real-Time Reinforcement Learning attempts to use a variation of MDP with time steps based on a real time metric, and allowances for decisions taking longer than one time step.

In practice, even video-game-playing systems such as classic Atari games are often treated as turn-based instead of real time. During training the emulator may be paused whilst the agent processes state information and learns.

",1847,,1847,,8/26/2021 13:14,8/26/2021 13:14,,,,0,,,,CC BY-SA 4.0 30384,2,,30189,8/26/2021 13:08,,0,,"

The conclusion that I finally arrived at was that I don't have enough data. The reason I believe this is true is that I slowly lowered model complexity and watched as overfitting decreased and underfitting increased.

If the model was not overfitting, i.e. if the validation loss decreased along with the training loss, then the model underperformed. It was little better than a flip of a coin on binary classification.

Increasing model complexity at all from this point led to immediate overfitting.

I believe that the solution is to find more data, or use a model that generalizes better with less data.

",37691,,,,,8/26/2021 13:08,,,,0,,,,CC BY-SA 4.0 30386,1,30411,,8/26/2021 13:15,,1,73,"

I've been playing with PyTorch's nn.EmbeddingBag for sentence classification for about a month. I've been doing some feature engineering, playing with different tokenizers, etc. I'm just trying to get the best performance out of this simple model as I can. I'm new to NLP, so I figured I should start small.

Today, by chance, I stumbled on this paper Bag of Tricks for Efficient Text Classification, which very well may be the inspiration for nn.EmbeddingBag. Regardless, I read the paper and saw that they increased performance through using "n-grams as additional features to capture some partial information about the local word order"

So by the wording of this sentence, specifically "additional features", I take it to mean that they made n-grams as part of their vocabulary. For example "abc news" is treated as a single word in the vocabulary, and then appended to the training data that is being embedded like so:

dataset = TextFromPandas(tweet_df)
label, sentence, ngrams = dataset[0]
label, sentence, ngrams

# out:

(1,
 'quake our deeds are the reason of this # earthquake may allah forgive us all',
 ['quake our',
  'our deeds',
  'deeds are',
  'are the',
  'the reason',
  'reason of',
  'of this',
  'this #',
  '# earthquake',
  'earthquake may',
  'may allah',
  'allah forgive',
  'forgive us',
  'us all'])

I just wanted to check my assumption, because the paper is not very explicit. I already tried to string n-grams together as a new sentence in place of the old, but performance dropped significantly.

I will continue to experiment, but I was wondering if anyone knows the specific mechanism?

",37691,,2444,,8/26/2021 21:47,8/28/2021 3:46,Bag of Tricks: n-grams as additional features?,,1,0,,,,CC BY-SA 4.0 30388,1,30393,,8/26/2021 14:02,,0,88,"

I have a control problem for a heating device of a building, with the goal of minimizing the electricity costs for one day under an electricity price that varies every hour (more details can be seen here: Reinforcement learning applicable to a scheduling problem?). Although the problem is basically a scheduling problem, I want to implement it as a control problem for every time step.

Now, I have 2 questions:

  1. Is it possible to somehow consider future values (e.g. of the electricity price) while deciding a control action for every time slot? E.g. when the agent knows that in 2 hours the price will fall significantly, then it should tend to consume electricity in 2 hours to get closer to the optimal solution.

  2. Related to 1: Is it possible to get the reward just at the end of the day instead of every hour (although the control actions are every hour)? If you get the reward at every hour, this might lead to greedy behaviour, which often yields poor results.

",48758,,2444,,12/15/2021 15:43,12/15/2021 15:43,Can future information be included in a control problem with Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 30390,2,,7202,8/26/2021 15:34,,0,,"

The non-linear kernel SVMs can be slow if you have too many training samples. This is due to the fact that the algorithm creates an NxN matrix as @John Doucette answered.

Now there are a few ways to speed up the non-linear kernel SVMs:

  • Use the SGDClassifier instead and provide proper parameters for loss, penalty etc. to make it behave like an SVM. The optimisation process is different than libsvm though.
  • Use a kernel approximator like Nystroem
  • Since you are yourself trying out feature combinations, a Linear SVM can also be good and fast :)
",1942,,,,,8/26/2021 15:34,,,,0,,,,CC BY-SA 4.0 30391,2,,7202,8/26/2021 16:00,,0,,"

SVM scales rather badly with the number of training samples - from $O(n^2)$ to $O(n^3)$ as told in this answer https://stackoverflow.com/questions/16585465/training-complexity-of-linear-svm.

The vanilla approach requires inversion of $n \times n$ matrix, which is $O(n^3)$ operations in general.

As suggested in the other answers, the most apparent way to reduce the computational and storage complexity is the reduction of number of training samples.

I am even surprised that all this data fits into the memory.

",38846,,,,,8/26/2021 16:00,,,,0,,,,CC BY-SA 4.0 30392,2,,30341,8/26/2021 18:05,,3,,"

This goes back to the purpose of self-attention.

Similarity between word vectors is generally computed through cosine similarity because, in the high-dimensional spaces that word tokens live in, it's highly unlikely for two words to be collinear, even if they are trained to be closer in value when they are similar. However, two trained tokens will have a higher cosine similarity if they are semantically closer to each other than two completely unrelated words.

This fact is exploited by the self-attention mechanism; After several of these matrix multiplications, the dissimilar words will zero out or become negative due to the dot product between them, and the similar words will stand out in the resulting matrix.

So, as Tom points out in the comments below, self attention can be viewed as a weighted average, where less similar words become averaged out faster (toward the zero vector, on average), thereby achieving groupings of important and unimportant words (i.e. attention). The weighting happens through the dot product. If input vectors were normalized, the weights would be exactly the cosine similarities.
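To make this concrete, here is a minimal NumPy sketch (my own illustration, not from the original explanation) of parameter-free scaled dot-product self-attention, where each output vector is a softmax-weighted average of the input vectors:

import numpy as np

def self_attention(X):
    # X: (seq_len, d) matrix of token vectors
    scores = X @ X.T / np.sqrt(X.shape[1])          # dot-product similarities, scaled
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ X                              # each output is a weighted average of the inputs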

The important thing to take into consideration is that within the self-attention mechanism, there are no parameters; Those linear operations are just there to capture the relationship between the different vectors by using the properties of the vectors used to represent them.

Read this blog post by Peter Bloem for a more in-depth explanation of self-attention.


Edit

I should add that this explanation is less satisfactory considering how Transformers also seem to work for tasks without learned embeddings, like time-series forecasting. I have no idea why that is. However, the model was originally used for NLP, and they did use learned embeddings. So, I bet that's why that particular architecture looks the way it does.

Bloem, in the blog post above, does discuss the mathematical properties of self-attention without bringing up the fact that the original architecture does have learned embeddings.

All this shows is that having learned embeddings does not matter that much; The layers following the multi-headed attention will learn the relationships between the vectors. The general point about the properties of the dot-product being exploited does stand.

",31879,,31879,,5/9/2022 11:11,5/9/2022 11:11,,,,0,,,,CC BY-SA 4.0 30393,2,,30388,8/26/2021 18:14,,2,,"

Typically in a control problem, it is OK to include data about a future event, if it can be reliably predicted at the time that a decision is required.

This would include known rules such as a pricing schedule. You could even use some feature engineering to help the agent by making a state feature that counted down to changes, or presented it as a relative change to current price. E.g. it would be fine to have a couple of features that pre-calculated "in 10 minutes, prices should change by -0.003 per kWH" - whether or not that helps your agent in practice I could not say.
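For instance, a minimal sketch of such feature engineering (the schedule values and horizons below are made up for illustration) could look like this:

import numpy as np

def future_price_features(prices, t, horizons=(1, 2)):
    # relative price change at each future horizon, computed from a *known* schedule
    return np.array([prices[min(t + h, len(prices) - 1)] - prices[t] for h in horizons])

schedule = [0.30, 0.30, 0.25, 0.10, 0.10]   # hypothetical hourly prices
future_price_features(schedule, t=1)        # -> array([-0.05, -0.2 ])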

What you should not include is any resolution of random variables in advance. It would be OK to predict these using a model, or give the agent details about the distribution, but it is not OK to work with all data from all time steps already resolved, when the agent will in reality be required to make a decision without that data. An example of this kind of data is the measured internal or external temperature on future time steps. (Predicted temperature from a weather forecast would be OK though)

This constraint, of not showing the controller the whole future when it is making a decision, should also be applied for other controller optimisations as well as reinforcement learning - no real world controller has perfect knowledge of how stochastic results will turn out, or the ability to rewind time and make the correct decision in the past based on observations that happened after it made that decision. However, to repeat the point made at the start, it is OK to make those decisions based on predictions of what will happen, where it is reasonable to have the knowledge in advance, and those predictions could be very accurate in some environments. A pricing schedule that is agreed in advance with your power provider should be one of those types of knowledge that it is fine to use.

",1847,,1847,,8/27/2021 16:38,8/27/2021 16:38,,,,1,,,,CC BY-SA 4.0 30394,2,,30329,8/26/2021 18:34,,1,,"

Batch Normalization should be applied between all layers and their activation functions, excluding the output layer. This squishes the numbers into a better range for the neural network to build appropriately sized gradients.
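For example, a minimal PyTorch sketch of that placement (the layer sizes are arbitrary) would be:

import torch.nn as nn

model = nn.Sequential(
    nn.Linear(8, 64), nn.BatchNorm1d(64), nn.ReLU(),   # normalization between the layer and its activation
    nn.Linear(64, 64), nn.BatchNorm1d(64), nn.ReLU(),
    nn.Linear(64, 2),                                  # output layer: no normalization
)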

I've not seen much use of Dropout in Deep RL because the networks are usually small and overfitting isn't as much of a problem as in supervised learning.

",49455,,,,,8/26/2021 18:34,,,,13,,,,CC BY-SA 4.0 30395,2,,30252,8/26/2021 18:37,,1,,"

Because the value of the terminal state is 0 by definition. There is no further reward to be obtained once you reach the terminal state.
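In terms of the return: once the episode has ended there are no more rewards, so for a terminal state $s_T$

$v(s_T) = \mathbb{E}[G_T] = \mathbb{E}\left[\sum_{k=0}^{\infty} \gamma^k R_{T+k+1}\right] = 0$

because every term in that sum is zero.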

",49455,,,,,8/26/2021 18:37,,,,0,,,,CC BY-SA 4.0 30396,2,,26184,8/26/2021 22:05,,1,,"

Alignment:

We all know that a good translation cannot be done just by splitting words, converting them, and concatenating them back. Otherwise, a dictionary would be just enough. One translation problem is on the alignment of the words. For example:

Uma maçã grande e vermelha
(1)  (2)  (3)  (4)   (5)
 |     \ /   _______/
 |      X   /
 |     / \ /
 |    /   X
 |    /  / \
(1) (3) (5)  (2)
 A  big red apple

RNN

This article starts by showing how an RNN translator works and its underlying difficulties. Alignment is a huge pain for RNNs, because you'd need another method* to solve it so that the RNN could focus on smaller tasks each time. *And this method usually requires a labeled dataset (like the example above), which is quite tedious to create.

What if, instead of hacking an external element to guess the alignment, we could just send the whole text and train a single neural network to both:

  • Somehow solve the alignment problem.
  • Use that to predict the next word.

Wouldn't that be awesome? Introducing:

Transformers

It has a built-in self-attention component that scores all previous words according to their relevance for the next translated word!

TL;DR:

Transformers will automatically solve alignment while translating.

",49188,,,,,8/26/2021 22:05,,,,0,,,,CC BY-SA 4.0 30398,2,,17538,8/27/2021 8:25,,0,,"

If the loss is zero, all gradients should be zero as well, so you should take a look at the computed gradients. There might be a problem with momentum or the scheduled learning rate, which might still apply very small updates that eventually lead to this policy collapse.

On a side note I would also call reduce_mean on the actor loss since you're optimizing the expected value.

",49455,,,,,8/27/2021 8:25,,,,6,,,,CC BY-SA 4.0 30401,2,,30353,8/27/2021 10:27,,0,,"

But, I am not sure if that is necessarily the best way to fix the problem of having repetitive sets of fixed action in the replay process.

The best way to handle this is using a Multi Discrete Action Space. You don't need a neural network for every action on its own.

",49455,,,,,8/27/2021 10:27,,,,2,,,,CC BY-SA 4.0 30404,2,,30379,8/27/2021 12:13,,2,,"

Edit

I just noticed that the model you are referring to is built very differently from the transformer in Attention is All You Need, since it only uses one half of the architecture. Thus my answer below is not complete, and I have to add the following (the final two paragraphs still apply as they are, though):

The Keras model is quite weird, and while it is a time-series model, it's not a regression one. Thus my general description regarding how transformers work still applies (that is, you only get one prediction at a time), since their model is a many-to-one classification model. So, you have to adjust your model accordingly.

The main thing you can ignore from my original answer is my emphasis on decoders; Keras only uses encoders for their model.


Original answer

Transformers only output one prediction at a time, not twenty-four. Let's step through a transformer at runtime, step by step.

In all of the steps, the encoder input will be the same since your sequences are not that long. The decoder gets the output as its input, which is done in the following manner:

Step one: The decoder's input is only one token which stands for "start of sequence", plus padding. $(start,0,0,..,0,0)$

Step two: The decoder input is the start-token and the prediction from step one + padding $(start,y_1,0,0,..,0,0)$

Step three: The decoder input is the decoder's input from step two plus the prediction from step two + padding $(start,y_1,y_2,0,0,..,0,0)$

And, so on.. If you want 24 outputs, the transformer will have to run 24 times. The only reason the original transformer model has multiple outputs is that it needs to make one prediction for each word token (and, pick the most likely one). Training will thus have to reflect this.
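As a rough sketch (not a full implementation; model below is assumed to be a hypothetical callable that returns only the next value given the encoder input and the padded decoder input so far), the inference loop would look like:

import numpy as np

def autoregressive_decode(model, encoder_input, start_token=0.0, n_steps=24, pad_value=0.0):
    decoder_input = [start_token]
    predictions = []
    for _ in range(n_steps):
        padded = np.array(decoder_input + [pad_value] * (n_steps + 1 - len(decoder_input)))
        y_next = model(encoder_input, padded)   # predict only the next value
        predictions.append(y_next)
        decoder_input.append(y_next)            # feed the prediction back in
    return predictions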

However, I am quite skeptical that you will have any significant benefit from using transformers over LSTMs. The main benefit of transformers, or any other attention based model, is that they allow for longer-term dependencies than LSTMs. According to a Stanford study, the point at which LSTMs are no longer efficient is a sequence of around 200 tokens. Of course, that study is specific to language models, but I bet it won't be that much different in your case; Your sequences are quite short.

Anyhow, the output layer for a regression task generally use linear activation functions. If all your outputs are positive, that won't make much of a difference.

",31879,,31879,,8/28/2021 10:29,8/28/2021 10:29,,,,3,,,,CC BY-SA 4.0 30405,2,,30375,8/27/2021 13:32,,2,,"

First a little intro, skip to the end for the straight answers: residual networks were proposed after observing that deeper models tend to perform worse than their shallow counterpart if we just keep adding hidden layers without applying any other change to the architecture, as we can see in the very first picture of the original paper.

The reason for this phenomenon is indeed vanishing gradients. The more hidden layers there are, the more the information from the original input gets lost, because each hidden layer receives information only from the previous hidden layer. How to solve this? Using residual connections. A residual connection is just an identity function that maps an input or hidden state forward in the network, not only to the immediately next layer; that's why these connections are also called skip connections. The only purpose they serve is to force deep layers to retain information learned in the early layers of the network.

From a numerical perspective, you can think about information getting lost as weights becoming smaller and smaller. By directly summing in the hidden states of previous layers, you make sure to avoid this problem, giving the weights a broader range of adjustment even in very deep layers.

In conclusion, to answer your questions:

1 Residual connections don't create or learn an identity function; they simply use it. The formulation of such connections in the paper is:

$y = F(x, W_{i}) + x$

where x could be rewritten as $I(x)$, $I$ being the identity function.
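A minimal PyTorch sketch of that formulation (the inner layers are arbitrary, purely for illustration) makes the point explicit:

import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        # F(x, W_i): the learned part of the block
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x):
        return F.relu(self.f(x) + x)   # y = F(x, W_i) + x, i.e. the block output plus I(x)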

2 No, we don't lose any information by summing the residuals; on the contrary, they are designed to retain information even in very deep layers, for the above-mentioned reasons.

3 All layers are updated; there are no frozen layers in a residual network. The "skip" term refers to the fact that a hidden layer's output is copied forward to later layers, which is a legitimate operation, but it does not refer at all to skipping training or weight updates.

",34098,,34098,,8/27/2021 20:15,8/27/2021 20:15,,,,2,,,,CC BY-SA 4.0 30406,1,,,8/27/2021 18:25,,2,282,"

I'm new to neural networks and trying to figure out its fundamentals but I cannot fully understand the back propagation algorithm.

In backpropagation, I understand we want to go backwards from the last neurons to adjust the weights and biases that produced the final predictions. To calculate the error and derivative, it needs the last layer's inputs, the predicted output based on the layer's weights, and the actual value (target).

In the final neuron layers we have all this information. But how do we calculate the inputs of the middle and hidden layers?

Suppose we have the final output (0.73); we calculate the error and the derivatives of W31, W32 and W33, and adjust them to match the final output. Then we go one layer back in our network.

Now we need the N11, N12, N13 and N14 values and the target values of N21, N22 and N23 to calculate errors and derivatives, but we don't have them

Should we feed forward the whole network and map all the labels and values of each neuron in memory to be able to access it later? Because it would be very, very memory and resource intensive on large networks.

",49466,,18758,,8/27/2021 22:57,8/28/2021 12:09,How does back propagation adjust the hidden layers' weights and biases?,,0,2,,,,CC BY-SA 4.0 30408,2,,28842,8/27/2021 18:35,,0,,"

A simpler model seems to be the best option

import numpy as np
import tensorflow as tf
from tensorflow import keras

model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0,  0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys2 = np.array([4/3*r*np.pi**3 for r in xs])
model.fit(xs, ys2, epochs=500, validation_split=0.2)

def masa_circulo(x):
    return 4/3*x*np.pi**3

Testing it graphically

x = [x for x in range(1,int(1e6),int(1e3))]
y_masa_circulo = [ masa_circulo(m) for m in x]
y_masa_predicha= [model.predict([m])[0] for m in x]

import matplotlib.pyplot as plt
fig,axes = plt.subplots(1,2)
axes[0].plot(y_masa_circulo);
axes[0].set_title("y_masa_circulo")
axes[0].set_ylabel("y_masa_circulo")
axes[0].set_xlabel("Radio")

axes[1].plot(y_masa_predicha);
axes[1].set_title("y_masa_predicha");
axes[1].set_ylabel("y_masa_predicha");
axes[1].set_xlabel("Radio");

No need to scale the data.

",33566,,33566,,8/27/2021 19:53,8/27/2021 19:53,,,,0,,,,CC BY-SA 4.0 30409,1,,,8/27/2021 23:31,,2,122,"

Among the list of tasks in machine learning, synthesis and sampling is one of the key tasks. Consider the following explanation regarding the synthesis and sampling task from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)

In this type of task, the machine learning algorithm is asked to generate new examples that are similar to those in the training data. Synthesis and sampling via machine learning can be useful for media applications when generating large volumes of content by hand would be expensive, boring, or require too much time. For example, video games can automatically generate textures for large objects or landscapes, rather than requiring an artist to manually label each pixel (Luo et al., 2013). In some cases, we want the sampling or synthesis procedure to generate a specific kind of output given the input. For example, in a speech synthesis task, we provide a written sentence and ask the program to emit an audio waveform containing a spoken version of that sentence. This is a kind of structured output task, but with the added qualification that there is no single correct output for each input, and we explicitly desire a large amount of variation in the output, in order for the output to seem more natural and realistic.

The explanation does not mention any difference between the two tasks. Apart from the linguistic differences between "sampling" and "synthesis", I don't know of any discriminating criteria, qualities, or properties that separate the two tasks in machine learning.

What is the fundamental difference between sampling task and synthesis task in machine learning?

",18758,,18758,,8/28/2021 0:18,8/31/2021 16:55,What is the fundamental difference between the synthesis task and sampling task?,,1,0,,,,CC BY-SA 4.0 30411,2,,30386,8/28/2021 3:46,,2,,"

Yes, N-grams is about joining $n$ words as one single token.
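A minimal sketch of building such features from a tokenized sentence (my own illustration) could be:

def ngrams(tokens, n=2):
    # join every n consecutive tokens into a single feature
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

ngrams("quake our deeds are the reason".split())
# ['quake our', 'our deeds', 'deeds are', 'are the', 'the reason']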

Keep in mind it will greatly increase your feature size:

If you originally have 1000 unique words, notice you could get up to 1000² 2-grams (usually you don't get ALL the combinations, but notice the number of features can potentially grow huge!)

If your dataset contains thousands to millions of samples, that could be enough to train a simple bag-of-words model. But when you use bi-grams, you'll probably need at least a million samples and a lot more training steps. Besides, you'll probably have to tweak the hyperparameters.

That's a common Machine Learning trade-off. A simple model is not very accurate. But a more complex model requires more data, more training and can overfit more easily.

",49188,,,,,8/28/2021 3:46,,,,2,,,,CC BY-SA 4.0 30413,1,,,8/28/2021 12:46,,0,34,"

In deep learning, models may learn the probability distribution that generated the dataset. Observe the following paragraph from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)

Unsupervised learning algorithms experience a dataset containing many features, then learn useful properties of the structure of this dataset. In the context of deep learning, we usually want to learn the entire probability distribution that generated a dataset, whether explicitly, as in density estimation, or implicitly, for tasks like synthesis or denoising. Some other unsupervised learning algorithms perform other roles, like clustering, which consists of dividing the dataset into clusters of similar examples.

I read about density estimation in the same chapter, as given below

In the density estimation problem, the machine learning algorithm is asked to learn a function $p_{model} : R^n \rightarrow R$, where $p_{model}$(x) can be interpreted as a probability density function (if $x$ is continuous) or a probability mass function (if $x$ is discrete) on the space that the examples were drawn from.

This question is focused on explicit probability density estimation in continuous case i.e., learning density function $p_{model}$ directly.

Suppose I have a dataset $D$ with $n$ continuous random variables (features) $X_1, X_2, X_3,\cdots, X_n$, and I don't know anything about the probability density functions of the individual random variables. That is, I don't have any information about any $X_i$, such as whether $X_i$ follows a normal distribution or any other distribution. Then, is it possible to learn the density function explicitly? Or do I need to provide some necessary information, such as the class of probability distribution function to be learned?

I am thinking as follows:

If I have some information about $X_i$, such as that $X_i$ follows a well-known distribution, then I can learn the parameters of the underlying density function from $D$. So, is it mandatory to know some information about the underlying probability density function?

",18758,,,,,1/20/2023 18:00,Is knowing the class of probability density function mandatory for explicit density estimation?,,1,0,,,,CC BY-SA 4.0 30414,1,,,8/28/2021 14:11,,0,51,"

I'm trying to train a model that would generate stories. I have a dataset of 2000 stories prepared. They are tokenized and one-hot encoded. I can't load them all at once as one big dataset, because of memory limits.

What would be the best way to fit my network so that I can reset the states after each story?

I tried doing it in a nested for loop (for epoch / for story: model.fit), but it's working really slowly, because it takes 3 seconds to fit a single story but almost 10 to load the next file and set up model.fit again.

",49481,,18758,,8/28/2021 14:39,8/28/2021 14:39,What's the best way to feed stories to a neural network?,,0,4,,,,CC BY-SA 4.0 30416,2,,30413,8/28/2021 15:14,,0,,"

Neural Networks can approximate any function. Quoting the essence in case the article is removed in the future.

The key to neural networks’ ability to approximate any function is that they incorporate non-linearity into their architecture. Each layer is associated with an activation function that applies a non-linear transformation to the output of that layer.

So no, knowing the class of the probability density function is not required to approximate it via Deep Learning. With a large enough number of samples you could construct an approximation.

",49455,,,,,8/28/2021 15:14,,,,0,,,,CC BY-SA 4.0 30418,1,,,8/28/2021 20:13,,1,61,"

I implemented a parallel backpropagation algorithm that uses $n$ threads. Every thread gets $\frac{1}{n}$ of the training examples and updates its instance of the net with them. After every epoch the different threads share their updated weights. For that, I simply add the weights of the threads and then divide each weight by the number of threads. The problem now is that the more threads I use, the worse the result gets. To me this means that my way of synchronizing the threads is not as good as it should be.

Is there a better way to do it?

",49485,,18758,,8/28/2021 22:13,9/27/2021 23:03,Parallelize Backpropagation - How to synchronize the weights of each thread?,,1,0,,,,CC BY-SA 4.0 30419,2,,30418,8/28/2021 20:32,,1,,"

After a whole epoch, with multiple update steps, the neural networks in each thread will have diverged in a way where it may not make sense to take means of the weights. Ideally you should be combining data for each update step. In turn that means you will want to avoid making updates on every example, because the overhead of starting, stopping and combining the threads may lose most of the benefits.

It is common in neural networks to use mini-batches (larger than 1, smaller than the whole dataset), to get more accurate gradients, and for parallelisation. There is often a sweet spot in terms of learning speed (or sample efficiency) with some size of mini-batch. Each mini-batch calculates gradients for all examples, combines them into a mean gradient, then performs a single weight update step.

Use your threads to calculate the gradients for a mini-batch, divided up between the threads, and average the gradients across all threads in order to make a single shared weight update. Using larger mini-batches will make more efficient use of multiple threads, but smaller mini-batches can be beneficial because you get to make more weight updates per epoch.
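A minimal sketch of that synchronization scheme (plain NumPy, one update step, my own illustration) would be:

import numpy as np

def synchronized_update(weights, per_thread_gradients, lr=0.01):
    # each thread computed gradients for its share of the same mini-batch;
    # average them and apply one shared weight update
    mean_gradient = np.mean(per_thread_gradients, axis=0)
    return weights - lr * mean_gradient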

",1847,,1847,,8/28/2021 20:53,8/28/2021 20:53,,,,2,,,,CC BY-SA 4.0 30420,1,,,8/28/2021 22:40,,0,13,"

Let's say I have a video that contains 3 grayscale sequential frames having a combined shape of (3, 24, 24). After inputting these frames together into a CNN, multiple feature maps will be generated from each of the images. Would it be possible for me to separate the temporal aspect by identifying which frame generated which feature maps?

",31755,,,,,8/28/2021 22:40,Is it possible to identify which feature maps were generated from a particular image after convolutional operation,,0,2,,,,CC BY-SA 4.0 30421,1,30430,,8/28/2021 22:46,,1,49,"

While going over PyTorch image augmentations, https://pytorch.org/vision/stable/transforms.html, I see that some augmentations can be applied with a certain probability. What is the purpose of applying stochastic augmentations rather than consistently applying a certain augmentation?

",31755,,2444,,8/30/2021 18:38,8/30/2021 18:38,Why do some techniques use random augmentations during convolution processes,,1,0,,,,CC BY-SA 4.0 30422,1,,,8/29/2021 2:14,,2,282,"

GAN has two components: generator and discriminator.

The discriminator in the original GAN is a regressor and always gives a value in $[0, 1]$. You can read it in the original paper:

$D(x)$ represents the probability that $x$ came from the data rather than $p_g$

Is this true for most of the (advanced or) contemporary GANs? Or does the nature of the discriminator, either as a regressor or as a classifier, entirely depend on the context?

",18758,,,,,8/29/2021 7:31,Is discriminator a regressor or classifier in implementations?,,1,0,,,,CC BY-SA 4.0 30423,1,,,8/29/2021 6:15,,0,357,"

I have a bunch of bank transaction records from which I want to extract merchants' names. In a few subsets of these records, the structure of the string is the same within the subset with only the merchant name changing. For example

subset 1

  • XXXXX_ID_TIME_STAMP MERCHANT1 CREDIT
  • XXXXX_ID_TIME_STAMP MERCHANT2 CREDIT

subset 2

  • BILL PAYMENT BANK_NAME MERCHANT NAME 3
  • BILL PAYMENT BANK_NAME MERCHANT NAME 4

In the above two subsets, the structure of the string is the same; only the merchant names change,

and so on ...

Using NLP, I want to extract merchant names in such cases. How should I approach this?

Using regex is not feasible because I'd have to manually go through the complete data, identify all such patterns and create regex strings that'll extract the name. I would also have to do this for every new pattern.

Is there a way where I can train a model that can identify/extract merchants in such cases?

",31449,,,,,8/29/2021 12:35,Get the name of a merchant from records,,1,0,,,,CC BY-SA 4.0 30424,1,,,8/29/2021 6:44,,0,80,"

The following derivation is taken from Chapter 5: Machine Learning Basics from the book titled Deep Learning (by Aaron Courville et al.)

I am facing difficulty in understanding the derivation that minimizes the mean squared error by setting its gradient to zero, $\nabla_w \text{MSE}_{train} = 0$

$\nabla_w \text{MSE}_{train} = 0$

$\implies \nabla_w(Xw - y)^T(Xw-y) = 0$

$\implies \nabla_w(w^T X^T- y^T)(Xw-y) = 0$


$\implies \nabla_w(w^T X^TXw - w^T X^Ty -y^TXw+y^Ty) = 0$

$\implies \nabla_w(w^T X^TXw - 2 w^T X^Ty +y^Ty) = 0$

$\implies 2 X^TXw - 2 X^Ty = 0$

$\implies w = (X^TX)^{-1} X^Ty$

I have had difficulty in understanding the flow of the following two lines

$\implies \nabla_w(w^T X^TXw - w^T X^Ty -y^TXw+y^Ty) = 0$

$\implies \nabla_w(w^T X^TXw - 2 w^T X^Ty +y^Ty) = 0$

My first doubt is about the step between the two lines above: it is possible only if the following holds (which I feel is untrue):

$w^T X^Ty = y^TXw$

Note: $X,y$ here refers to input and outputs of train data

",18758,,18758,,8/30/2021 22:19,8/30/2021 22:19,Isssue in understanding the derivation regarding mean squared error,,1,0,,,,CC BY-SA 4.0 30425,1,30426,,8/29/2021 6:48,,0,520,"

I am new to deep learning so feel free to correct me where I am wrong.

Imagine this scenario where we have a 7 * 7 input. We want to slide a 3 * 3 filter with a stride of 3 and padding of zero over this input. As you know, it is not possible to do this.

Also, CNNs have a fixed input shape (correct me if I am wrong), or at least the input should be a multiple of the CNN's intended input shape (e.g., 112 * 112, 224 * 224, etc.), although the situation where this works is rare.

According to this PyTorch page, ResNet (for example) accepts images of any size as long as they are bigger than 224.

So my question is, how does it handle images of different sizes? Does it dynamically tweak parts of the structure (e.g., kernels, strides, paddings) based on the input? If yes, wouldn't that change the network architecture? Or does it change the input sizes to the intended size automatically?

Also, this does not answer my question.

",37414,,2444,,8/29/2021 23:43,8/29/2021 23:43,How do CNNs handle inputs of different sizes and shapes?,,1,0,,,,CC BY-SA 4.0 30426,2,,30425,8/29/2021 7:03,,1,,"

There is a transformation applied to the image before it is fed through the neural network, described in the 3rd code block from that page:

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

This will first resize every image (regardless of size) and then crop the centre of the image, so the input to the NN is always the same size.

Also, you mentioned that an input of shape 7x7 cannot be convolved with a 3x3 filter with padding zero and stride 3, but that is possible. Let's say this is the original image (grayscale, so no channels):

$$ \begin{matrix} . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ \end{matrix} $$

The size 3x3 kernel with stride 3 and padding 0 moves through this as follows:

$$ \begin{matrix} K & K & K & . & . & . & . \\ K & K & K & . & . & . & . \\ K & K & K & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ \end{matrix} $$

Then

$$ \begin{matrix} . & . & . & K & K & K & . \\ . & . & . & K & K & K & . \\ . & . & . & K & K & K & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ \end{matrix} $$

Then

$$ \begin{matrix} . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ K & K & K & . & . & . & . \\ K & K & K & . & . & . & . \\ K & K & K & . & . & . & . \\ . & . & . & . & . & . & . \\ \end{matrix} $$

Finally,

$$ \begin{matrix} . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & . & . & . & . \\ . & . & . & K & K & K & . \\ . & . & . & K & K & K & . \\ . & . & . & K & K & K & . \\ . & . & . & . & . & . & . \\ \end{matrix} $$

Your output has a shape of 2x2.
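This matches the usual output-size formula; a quick check in Python:

def conv_output_size(n, kernel, stride=1, padding=0):
    # standard formula for the spatial output size of a convolution
    return (n + 2 * padding - kernel) // stride + 1

conv_output_size(7, kernel=3, stride=3, padding=0)   # -> 2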

Check out the formula for calculating the next layer sizes from https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html (scroll down to the Shape section) and the images on this repository to see how the kernel moves through the image: https://github.com/vdumoulin/conv_arithmetic.

",,user42664,,,,8/29/2021 7:03,,,,3,,,,CC BY-SA 4.0 30427,2,,30424,8/29/2021 7:20,,1,,"

The topic you're looking for is called Linear Algebra, and the equation you highlighted at the end is true. This is the reason:

  1. If you have a scalar $x \in \mathbb{R}$, then you can take its transpose without changing the value, its transpose is exactly the same number.
  2. When you have two vectors $w \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$ and a matrix $X \in \mathbb{R}^{n \times n}$, then $w^TX^Ty \in \mathbb{R}$, i.e. it is a scalar.
  3. From Linear Algebra, when you have a product of matrices ($A$, $B$, etc.), you can take the transpose as follows: $(AB)^T = B^TA^T$. You can extend that to three matrices: $(ABC)^T = ((AB)C)^T = C^T(AB)^T = C^TB^TA^T$. You can look at vectors and scalars as special kinds of matrices.
  4. When you have $w^TX^Ty$, you can take its transpose without changing the value because it is a scalar, so $w^TX^Ty = (w^TX^Ty)^T = y^TXw$ (a quick numerical check is sketched below).
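Here is the quick numerical check mentioned in point 4 (random data, purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
w, y = rng.standard_normal(5), rng.standard_normal(5)
X = rng.standard_normal((5, 5))

print(w.T @ X.T @ y, y.T @ X @ w)   # both print the same scalar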

Most books and free online resources on Linear Algebra are enough to understand these concepts. Everyone has their favourite, and it really doesn't matter much which one you pick but I like this book: https://link.springer.com/book/10.1007/978-3-319-24346-7.

",,user42664,,user42664,8/29/2021 7:30,8/29/2021 7:30,,,,0,,,,CC BY-SA 4.0 30428,2,,30422,8/29/2021 7:31,,2,,"

Discriminator in the original GAN is a regressor

No, it is a classifier. It classifies an image as "real" or "fake", with the output usually being the probability that the image is "real" (you could reverse this and use generated images as the target class, provided you change the generator training to match).

Is it true with most of the (advanced or) contemporary GANs?

In WGANs the W stands for Wasserstein, and these GANs use Wassersten loss, which measures the distance in "realness" between real and fake images. This measure of realness is a regression problem, with the caveat that there is no true measure of realness for any images that you can train with separately. The architecture of the critic, which plays the same role as discriminator for classic GANs, is the same as a neural network used for regression.

In general, if you see the term "discriminator", you can assume a classifier is being used. If you see the term "critic", you can assume a regressor. This may not be true for everything published about GANs, as some authors may use the terms loosely, but it is reasonable to expect if you are reading original papers or learning from a course.

As far as I can tell, StyleGAN2, which produces state-of-the-art results, uses a standard discriminator/classifier setup. There are plenty of other architectural details in the discriminator that contribute to the performance of the GAN. There is a link to the paper describing these from the linked Github implementation.

",1847,,,,,8/29/2021 7:31,,,,0,,,,CC BY-SA 4.0 30430,2,,30421,8/29/2021 7:48,,1,,"

Stochastic augmentations are used to sample from all possible augmentations, when the dataset size of all possible augmentations would be too large.

This is usually done on-demand, and as a result each training epoch should result in slightly different inputs. This approach can be beneficial in preventing overfitting when the variability in possible inputs is far larger than you could ever properly sample from (e.g. natural images).

",1847,,,,,8/29/2021 7:48,,,,2,,,,CC BY-SA 4.0 30432,2,,30423,8/29/2021 12:35,,1,,"

The problem:

You are facing a Natural Language Processing problem called Named Entity Recognition (that's the keyword you are looking for). But before you dive deep into it, keep in mind it's best suited for user-input data (where users are absolutely chaotic), and it looks like you have system-generated data.

The right way:

You should have some kind of tabular (structured) data, instead of a string. So:

  1. Triple check if you have some white-space separator (like tabs).
  2. Review your content retrieval source and method.
  3. If you have influence over the data source, ask them to generate it the right way.

If none of them help...

The programmatic way:

I can't see a specific Machine Learning model to solve that, but you can combine several techniques (some of them NLP) to help. It's an analytical and explorative process. Here are some insights:

the structure of the string is the same within the subset with only the merchant name changing

  • If you have some absolutely identical records, except for the data you are looking for, you could simply look for the diff between them (a tiny sketch of this idea is shown after this list).

  • Dictionary retrieval: If you have the list of Merchants, it's as simple as checking if a record contains any of the listed words. If you don't, you can build it as you run other methods. So maybe one subset can help you solving the other, like a Sudoku puzzle.

  • Track special characters:

    • If some columns (XXXXX_ID_TIME_STAMP CREDIT BILL PAYMENT) contain numbers (or any special character), you can eliminate them right away.
  • Tokenization: you can convert every word to a unique token. If the Merchant Name is composed of 2 or more words, you can use n-grams.

  • Frequency Analysis: The names (tokens/n-grams) will probably have characteristic frequencies within and across subsets. For example:

    • The bank name might be much more frequent than a merchant name.
    • A merchant might be frequent in one subset but rare in others.
  • Divide each record into smaller substrings.

    • If you can eliminate some n-grams inside a record (using methods like special characters or frequency analysis) you'll have smaller (and more structured) problems to handle. For example: [_][_]MERCHANT[_]BANK[_]
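To illustrate the "diff" idea from the first bullet, here is a tiny sketch (the example records below are made up):

def variable_tokens(record_a, record_b):
    # tokens that two same-template records do not share are candidate merchant names
    a, b = record_a.split(), record_b.split()
    common = set(a) & set(b)
    return [t for t in a if t not in common], [t for t in b if t not in common]

variable_tokens("BILL PAYMENT BANKX MERCHANT1 CREDIT",
                "BILL PAYMENT BANKX MERCHANT2 CREDIT")
# -> (['MERCHANT1'], ['MERCHANT2'])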
",49188,,,,,8/29/2021 12:35,,,,0,,,,CC BY-SA 4.0 30435,1,30438,,8/29/2021 15:05,,2,37,"

Let's say that you want to detect whether a man is running, walking, or dancing, instead of just detecting a man standing still. What type of neural network would you use for this purpose?

",49498,,2444,,9/1/2021 1:37,9/1/2021 1:37,What type of neural network do you need if you want to detect an action or dynamic pattern instead of a static pattern?,,1,0,,,,CC BY-SA 4.0 30437,2,,17538,8/29/2021 16:56,,1,,"

Disclaimer: Without the full code, we can only speculate. I encourage you to post the full code on Google Colab or something like this. In the meanwhile, here is my point of view:


The Problem

Looks like your model has found some "master action" that always leads to zero loss, no matter what the state is. So it's not necessarily bad, it's just unexpected according to your point of view.

An example of that would be pausing the game - so you never lose.

You might not like it, but from the model's point of view, it's absolutely nailing it!

The Solution

So how to convince the actor not to pause the game?

Not by changing the model, or tuning hyper-parameters, but by reformulating the problem. In this example, instead of just penalizing the model for failing, you should reward it for winning, so pausing is no longer the best option.

Conclusion

It might not be a problem in the Machine Learning model, but in your environment and reward models. As we don't have access to that, it's hard to provide an answer.


Edit:

You are using the CartPole-v0 environment:

A reward of +1 is provided for every timestep that the pole remains upright. The episode ends when the pole is more than 15 degrees from vertical, or the cart moves more than 2.4 units from the center.

Source: https://gym.openai.com/envs/CartPole-v0/

It is a solvable problem. Probably your model has just learned how to solve it after a few hundred generations. (The link shows a table with "Episodes before solve" for each algorithm, with numbers consistent with yours.)

TL;DR: It's not a bug, it's a feature!

",49188,,49188,,8/29/2021 21:05,8/29/2021 21:05,,,,2,,,,CC BY-SA 4.0 30438,2,,30435,8/29/2021 20:49,,0,,"

It all depends on the kind of data you have and the type of output you expect.

Input:

For example, if you have a series of numbers (heart rate at every minute, or accelerometer data at every second), you can use some time-series approach, like LSTM.

Output / Label

If you want to classify a given segment, you should look for classifiers.

What Bob was doing from 1PM to 2PM?

If you want to find and extract a sub-sample inside a longer period, that's a very different approach.

Find the start and finish time for all running sessions Bob took last week.

In any case, keep in mind the expected prediction format should match the training data format. And some data are more easy to find.

",49188,,,,,8/29/2021 20:49,,,,0,,,,CC BY-SA 4.0 30440,1,,,8/30/2021 1:59,,0,33,"

Text representation, in simple words, is representing text in sensible numeric form. You can read in detail from the following paragraph

Text representation is one of the fundamental problems in text mining and Information Retrieval (IR). It aims to numerically represent the unstructured text documents to make them mathematically computable. For a given set of text documents $D = \{d_i, i=1, 2,...,n\}$, where each $d_i$ stands for a document, the problem of text representation is to represent each $d_i$ of $D$ as a point $s_i$ in a numerical space $S$, where the distance/similarity between each pair of points in space $S$ is well defined.

But, I came across the phrase "text feature representation" in research papers. Features, in general, are present in a dataset. But, I think, in the case of text, features can be characters, words, documents, or the complete text (as a single feature?). I am not sure what we call features in text.

So, I am not sure about what is meant by text feature representation. Is it the same as text representation?

",18758,,18758,,9/2/2021 2:02,9/2/2021 8:16,"Is there any difference between the phrases ""text representation"" and ""text feature representation""?",,1,0,,,,CC BY-SA 4.0 30442,1,,,8/30/2021 7:50,,1,228,"

In this tutorial https://www.tensorflow.org/tutorials/structured_data/time_series#feature_engineering (scroll down a bit to "Time" heading), they take the sin/cos of the time index, and give this as an input so that the model can see the periodicity.

Why use sin and cos (which map to a circle)? Why not map to a square, or a diamond?

What about if you just mapped time to 1d instead of 2d, so e.g. 23:59 would be 1 and 00:00 (1 minute later) would be 0. Would that "jump" actually cause problems? Any actual research or experiments which look at this issue?

",47080,,,,,8/30/2021 12:56,Why use sin/cos to give periodicity in time series prediction,,1,1,,,,CC BY-SA 4.0 30443,1,,,8/30/2021 8:16,,1,62,"

In Pix2Pix by Isola et al., they translate images from different pairs of image categories to one another. While most other example applications for the algorithm make sense to me, I'm having difficulties understanding why one would translate facade labels to facade images. As the title says, I already don't see how labeling a facade would help solve any real-world problem.

I skimmed the related work of the paper and found little about what "facade parsing" could be used for, except maybe reconstruction from images. Where are facades reconstructed from facade labels? Can anyone tell me other example applications for facade labels and translating them to images?

",49508,,,,,1/7/2023 6:00,Why labeling facades?,,1,4,,,,CC BY-SA 4.0 30444,1,,,8/30/2021 9:00,,2,210,"

I know it's a simple question, but the book Artificial Intelligence by Russell says that the number of reachable states from any initial state in the 8-puzzle problem is $\frac{9!}{2}$. However, I think it should be $9!$. Note that we can't say that flipping the grid horizontally gives the same state, so that would not justify dividing the total number of states by $2$. So why do we divide the total number of (initial) states by $2$? What extra states are we counting?

",48908,,,,,8/30/2021 9:00,Total number of states reachable from the initial state in 8-puzzle problem,<8-puzzle-problem>,0,6,,,,CC BY-SA 4.0 30445,2,,30442,8/30/2021 12:29,,1,,"

Why use sin and cos (which map to a circle)?

Specifically from the tutorial's point of view, this is the most usual piece of feature engineering applied to periodic input data.
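A minimal sketch of that transform for a time-of-day feature (my own illustration) is:

import numpy as np

def encode_time_of_day(minute_of_day):
    # map minutes since midnight onto the unit circle
    angle = 2 * np.pi * minute_of_day / (24 * 60)
    return np.sin(angle), np.cos(angle)

encode_time_of_day(0), encode_time_of_day(24 * 60 - 1)   # 00:00 and 23:59 end up next to each other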

It offers the following advantages - roughly in order of importance (in my opinion):

  • Removes discontinuities (e.g. as you suggest the "jump" from 1 to 0 when time of day wraps around). Non-linear learning algorithms - such as neural networks - can learn to approximate discontinuities from inputs or outputs, but why force the learning algorithm to learn about them, when you can make the necessary transform for it? After all, that is a core purpose of feature engineering, to inject knowledge about the problem that assists the learning algorithm.

  • Creates a consistent distance measure. The vector distance between two points on the circle is always the same if they are the same distance away linearly on the pre-transformed feature (including the periodicity). This is not the case with other mappings such as squares or diamonds.

  • Sin and cos are standard library functions in most programming languages. Mapping to a square or diamond involves slightly more custom coding.

Why not map to a square, or a diamond?

Squares or diamonds are close enough to circles, that in a lot of cases you would not notice the difference in practice. There may even be cases where some kind of closed polygon is a more natural mapping for a given problem. So feel free to use them and experiment when performing feature engineering.

Any actual research or experiments which look at this issue?

This kind of low-level feature engineering is simple enough that you probably won't find many papers that pull out just solutions to mapping periodic variables. You will find using sin and cos to transform a periodic variable used as a standard transform in many situations though.

If you are interested to compare different options for mapping periodic variables, you will likely need to perform the experiments yourself. For any specific problem where you are not sure, this is good practice if you have the time. There is a chance that you will find a useful generic mapping for periodic data that is not based on circles, but my prediction is that you will find other shapes perform similarly to circles a lot of the time, but are sometimes worse and rarely if ever better.

",1847,,1847,,8/30/2021 12:56,8/30/2021 12:56,,,,0,,,,CC BY-SA 4.0 30450,1,,,8/30/2021 15:36,,2,210,"

Is there a way to make a certain output dimension of a neural network independent of a particular feature dimension? For example, I have a function $f_{\theta} : \mathcal{R}^{10} \rightarrow \mathcal{R}^2$, I want to make $f_{\theta}(\mathbf{x})_2$ independent of $\mathbf{x}_6$. How can this condition be imposed on a neural network?

I am thinking of penalizing the gradient of $f_{\theta}(\mathbf{x})$ w.r.t. $\mathbf{x}_6$ for a considerable range of $\mathbf{x}_6 \in [-1, 1]$. Will this give me a similar effect? If so, how can this be coded in PyTorch or any other deep learning framework?

",21509,,21509,,8/30/2021 15:59,2/26/2022 20:03,How to make an output independent of input feature in neural networks?,,2,0,,,,CC BY-SA 4.0 30451,2,,30367,8/30/2021 17:03,,4,,"

I checked the subchapters of Artificial Intelligence: A Modern Approach, 4th edition (Global and US), against the PDF subchapter references of the Global Edition and the US Edition from this website. I can confirm that the difference between the Global and US editions is this chapter:

20 Knowledge in Learning 739
    20.1 A Logical Formulation of Learning 739
    20.2 Knowledge in Learning 747
    20.3 Explanation-Based Learning 750
    20.4 Learning Using Relevance Information 754
    20.5 Inductive Logic Programming 758
Summary 767
Bibliographical and Historical Notes 768

So Global Edition has more content than US Edition.

",30751,,30751,,8/30/2021 17:36,8/30/2021 17:36,,,,0,,,,CC BY-SA 4.0 30452,1,,,8/30/2021 18:31,,0,42,"

Suppose, I have two NN models:

  1. CNN model
  2. Sequential NN model

They are solving the same problem. The data points have the same number of features.

In the case of #1, we used 0.6 million data points, 35k epochs, and the model achieved 80% accuracy in the training.

In the case of #2, we used 1.4 million data points, 1k epochs, and the model achieved 90% accuracy in the training.

Which model is better/more efficient and why?

",20721,,,,,8/30/2021 18:31,Which model is more efficient and why?,,0,2,,,,CC BY-SA 4.0 30453,2,,30450,8/30/2021 19:02,,0,,"
    def call(self, x):
        # drop feature x_6 (index 5) for the branch that must not depend on it
        x_no_x6 = tf.concat([x[:, :5], x[:, 6:]], axis=1)
        f2 = ...  # sub-network for output 2, a function of x_no_x6 only
        f1 = ...  # sub-network for output 1, a function of the full x
        return tf.concat([f1, f2], axis=1)

You could have two models, one of which uses $x$ (including $x_6$) and the other which removes $x_6$ from the input. The overall model is just the concatenation of the two outputs. (Pseudo-code is assuming model is implemented by subclassing tensorflow.keras.Model link)
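
To make the idea concrete, here is a minimal self-contained sketch (the class name, layer sizes and activations are arbitrary choices of mine, not anything prescribed):

    import tensorflow as tf

    class IndependentOutputModel(tf.keras.Model):
        """f1 may depend on all 10 inputs; f2 never sees x_6 (index 5)."""
        def __init__(self):
            super().__init__()
            self.branch_full = tf.keras.Sequential(
                [tf.keras.layers.Dense(32, activation="relu"),
                 tf.keras.layers.Dense(1)])
            self.branch_no_x6 = tf.keras.Sequential(
                [tf.keras.layers.Dense(32, activation="relu"),
                 tf.keras.layers.Dense(1)])

        def call(self, x):
            x_no_x6 = tf.concat([x[:, :5], x[:, 6:]], axis=1)
            f1 = self.branch_full(x)         # output 1, uses every feature
            f2 = self.branch_no_x6(x_no_x6)  # output 2, structurally independent of x_6
            return tf.concat([f1, f2], axis=1)

    model = IndependentOutputModel()
    y = model(tf.random.normal((4, 10)))  # shape (4, 2)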

Not sure about the gradient penalization idea. Seems nontrivial to implement and it's not clear to me if it would work.

",47080,,,,,8/30/2021 19:02,,,,3,,,,CC BY-SA 4.0 30455,1,,,8/31/2021 3:53,,0,115,"

I am trying to get my head around a problem where the action by the agent cannot change the environment. Without going into details, my problem is about error correction in a stochastic environment.

So, here the action by the agent cannot change the environment that causes these errors, and all we can do is correct them smartly as they happen. I am currently thinking about using reinforcement learning for this agent who could correct the errors.

Now my questions are:

  1. Would reinforcement learning be overkill since the agent cannot influence the environment?
  2. How do RL, LSTM, and even random forest compare in such scenarios?

Thank you.

",33488,,49455,,8/31/2021 22:38,8/31/2021 22:38,Reinforcing Learning when action has no effect on the environment,,1,7,,,,CC BY-SA 4.0 30457,2,,30352,8/31/2021 6:48,,1,,"

Yes - and no. The important distinction is whether your data contains proper word boundaries and rigorous translation references.

BLEU and ROUGE both work by comparing a candidate (i.e., model output) to reference text (i.e., training data). In a translation task (what these metrics are typically used for) this works quite well, as you can normally assume the translation will use a small set of words common among all references, so naturally the candidate should be using these words in some order. The more references you have, the more allowance the model has to interpret the translation its own way and still achieve a relatively high score.
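
To give a feel for how these scores are typically computed in practice, here is a minimal sketch using NLTK's sentence_bleu (the sentences are made up, and the weights restrict the score to unigrams and bigrams so the toy example stays meaningful):

    from nltk.translate.bleu_score import sentence_bleu

    # references and candidate must already be split into tokens (word boundaries!)
    references = [["the", "cat", "sat", "on", "the", "mat"],
                  ["a", "cat", "was", "sitting", "on", "the", "mat"]]
    candidate = ["the", "cat", "was", "on", "the", "mat"]

    # score in [0, 1]; more references give the candidate more freedom
    score = sentence_bleu(references, candidate, weights=(0.5, 0.5))
    print(score)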

However, this does not always work. Both algorithms require word boundaries for them to work properly. You can modify your dataset so that words are properly separated for the algorithms to use, but using your example of source code, this becomes very tricky with indentation and more. Additionally, it's very hard to capture the possible outputs. Not only can a variable be reasonably named hundreds of different ways (making it unlikely an exhaustive reference list exists that would capture your models output), the amount of potentially correct code grows extremely quickly the larger the task becomes, meaning the references become increasingly less useful. If you keep the tasks very short, you could potentially find some use from these algorithms, but better metrics exist.

It's probably best to use these algorithms for what they're meant for - translation. However, there is absolutely no harm in using them in a task anyway, they're very simple algorithms and could still potentially give you some insight.

(For source code evaluation, you would probably be better actually just running the code and seeing if it completes the task, maybe making your own scoring system. Or you could draw inspiration from something such as OpenAI's Codex HumanEval)

",26726,,,,,8/31/2021 6:48,,,,0,,,,CC BY-SA 4.0 30458,1,,,8/31/2021 8:10,,0,51,"

I would like to know the standard approaches to construct a model to predict the value of a time series $y_t$ that depends on other time series $\bar{X}_t$. I often see that for this kind of task there is a large number of models, but they are all autoregressive in some way. I'm thinking about, for example, VAR, SARIMAX, RNN, LSTM. I'm looking for a model, or at least an approach, where my lagged target variable is not among the predictors. Does anyone have some references?

",47904,,30751,,9/1/2021 20:31,1/25/2023 23:04,How to construct a model to predict the value of a time series $y_t$ that depends from other time series $\bar{X}_t$?,,1,0,,,,CC BY-SA 4.0 30459,1,30463,,8/31/2021 8:46,,0,42,"

Is there a method/algorithm to generate instances of objects from image that was segmented by the use of any image segmentation models?

For example, I have an image with one class and it was segmented in a given way, where 1s are objects of the same class and empty fields are of no class:

How can I now generate a list of the two objects, where the list's elements would be, for example, the positions of all the pixels inside each object (a list of lists)?

",22659,,,,,8/31/2021 11:01,How to divide a segmented image into classes instances?,,1,0,,,,CC BY-SA 4.0 30460,1,30466,,8/31/2021 10:45,,1,73,"

Frechet Inception Distance is a metric that calculates the distance between feature vectors calculated for real and generated images. It is used to evaluate how good the generated images are.

Consider the following citation of the research paper I want to study in detail, which I think is the first paper on Frechet distance

Fréchet, Maurice. "Sur la distance de deux lois de probabilité." Comptes Rendus Hebdomadaires des Seances de L Academie des Sciences 244.6 (1957): 689-692.

I have no clue on where to access the paper.

In general, I can get PDFs of almost any research paper thanks to my institute's subscriptions with various publishers. But I cannot find the PDF or contents of this research paper anywhere.

What can I do for accessing this paper?

",18758,,18758,,9/9/2021 13:39,9/9/2021 13:39,Where can I access this research paper on Frechet distance score?,,1,1,,,,CC BY-SA 4.0 30463,2,,30459,8/31/2021 11:01,,0,,"

Those segmented instances can be seen as connected components of the 1s. You can use the flood fill or BFS algorithm to detect these components or try SimpleBlobDetector by OpenCV.
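
A minimal sketch of the connected-component approach (assuming the segmentation output is a 2D array of 0s and 1s):

    import numpy as np
    from scipy import ndimage

    # hypothetical binary mask: 1 = pixel of the class, 0 = no class
    mask = np.array([[1, 1, 0, 0],
                     [1, 0, 0, 1],
                     [0, 0, 1, 1]])

    # assign a distinct integer label to every connected component
    labeled, num_objects = ndimage.label(mask)

    # list of objects, each object being the list of its pixel coordinates
    objects = [np.argwhere(labeled == i).tolist() for i in range(1, num_objects + 1)]
    print(num_objects, objects)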

",49458,,,,,8/31/2021 11:01,,,,1,,,,CC BY-SA 4.0 30464,1,,,8/31/2021 11:09,,1,121,"

In natural language processing, I came across the concept of "relevant document" several times. And several analytical formulas, such as precision and recall, are based on the relevant documents.

Precision = $\dfrac{\text{Number of documents that are relevant and retrieved to the query Q}}{\text{Number of retrieved documents to the query Q}}$

Recall = $\dfrac{\text{Number of documents that are relevant and retrieved to the query Q}}{\text{Number of relevant documents to the query Q}}$

What is meant by "relevance" in such cases? Is it a universally objective term, or a subjective term decided by the designer based on that particular context?

",18758,,2444,,9/9/2021 12:58,9/9/2021 13:02,"What is meant by a ""relevant document"" in NLP?",,2,1,,,,CC BY-SA 4.0 30465,1,,,8/31/2021 11:59,,0,107,"

I'm new to RL. Recently, I took a course on Coursera. In the Off-policy MC method, I learned the concept of Importance Sampling as follows:

where the importance sampling ratio is the ratio of the target policy over the behavior policy.

But in Sutton's book, the expectation under the target policy is estimated like this:

Both sources use the same importance sampling ratio. However, I ended up getting $E[G_{t}|s] = \sum{G_{t} b \frac{\pi}{\pi}} = \sum{G_{t} \frac{1}{\rho}\pi} = E[\frac{G_{t}}{\rho_{t:T-1}}|s]$ instead.

Did I do something wrong?

",49532,,2444,,9/5/2021 17:47,9/5/2021 17:47,Derive Importance Sampling as Expected Value Notation,,1,0,,,,CC BY-SA 4.0 30466,2,,30460,8/31/2021 12:10,,2,,"

The first result in a Google search was a link to that paper: https://www2.sonycsl.co.jp/person/nielsen/infogeo/Seminar/Frechet-Fondamental-Distance-Wasserstein.pdf

",38846,,38846,,8/31/2021 13:20,8/31/2021 13:20,,,,4,,,,CC BY-SA 4.0 30467,2,,30464,8/31/2021 12:29,,1,,"

It means precisely the same as true/false positive and true/false negative in the classic formulation of precision, recall, F-score for classification tasks.

  • relevant and retrieved: true positive
  • relevant and not retrieved: false negative
  • not relevant and retrieved: false positive
  • not relevant and not retrieved: true negative

And yes, the relevance depends on the task you're performing. To make an example, you might be interested in classifying, through sentiment analysis, offensive vs nonoffensive tweets. In this case, you'll have:

  • relevant and retrieved: offensive tweets labeled as such
  • relevant and not retrieved: offensive tweets labeled as nonoffensive
  • not relevant and retrieved: nonoffensive tweets labeled as offensive
  • not relevant and not retrieved: nonoffensive tweets labeled as such
",34098,,2444,,9/9/2021 13:02,9/9/2021 13:02,,,,2,,,,CC BY-SA 4.0 30468,2,,30440,8/31/2021 12:41,,1,,"

I think that literature is simply inconsistent in this regard. But here's a distinction that I think helps to shed a bit of light on this question:

text representation: as you said we have to convert text into numerical variables. This term refers to general strategies to convert text into numbers, like embedding, bag of words, and so on.

text feature representation: this is what we feed to the actual model. The difference is that it can be a combination of different text representations. For example, I could apply a TF-IDF vectorizer to a corpus and use it to encode each word of a sentence, and then also use pre-trained embeddings to encode the same sentence/document, concatenating both vectors for each word. Another commonly used strategy is to use multiple n-grams, which also consists of applying multiple text representations to the same sentence/document.
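
As a small sketch of combining representations (the corpus and settings are placeholders; here a word-level TF-IDF representation and a character n-gram representation of the same documents are simply concatenated feature-wise):

    from sklearn.feature_extraction.text import TfidfVectorizer
    from scipy.sparse import hstack

    corpus = ["the quick brown fox", "jumps over the lazy dog"]

    word_tfidf = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))
    char_ngrams = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))

    # two text representations of the same documents, stacked side by side
    X = hstack([word_tfidf.fit_transform(corpus),
                char_ngrams.fit_transform(corpus)])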

",34098,,34098,,9/2/2021 8:16,9/2/2021 8:16,,,,2,,,,CC BY-SA 4.0 30469,1,31828,,8/31/2021 13:04,,1,182,"

I have a control problem for a heating device of a building with the goal to minimize the electricity costs for one day under a varying price for electricity in every hour. (more details can be seen here as well: Reinforcement learning applicable to a scheduling problem?).

I also want to test two further goals (minimize peak load and maximize PV self-consumption rate).

My problem also has about 10 constraints that should not be violated. I have two main questions about how to integrate the constraints into the Reinforcement Learning agent:

Here are my two main questions (with following minor questions):

(1) Basically I have three goals with normalized rewards between 0 and 1 for every time-slot and I have 10 constraints.

  Should the constraint rewards also be normalized for all 10 constraints? And should I then choose a higher weight for the most important constraint than for all three goals combined, such that a constraint violation is more costly than getting a better objective value for all three goals?

(2) Is it also possible to tell the Reinforcement Learning agent some rules directly without any constraints?

  E.g. I have two storage systems, and the agent is only allowed to heat up one of them in every time-slot. Further, the agent should not start and stop heating frequently (around four starts of the device per day is desirable).

  Can I explicitly tell these rules to the agent? Or do I have to do it indirectly by calculating a reward for each of these constraints and incorporating the weighted reward into the overall reward function of the agent?

I'll appreciate any suggestion and comment.

",48758,,27229,,9/20/2021 23:27,9/24/2021 18:56,How to Weigt Constraints in A Control Problem with Reinforcement Learning,,1,0,,,,CC BY-SA 4.0 30470,1,30475,,8/31/2021 14:00,,1,56,"

In Semi-supervised classification with Graph Convolutional Networks, I am unable to understand a few things.

Given an undirected graph having

  • adjacency matrix $A$,
  • degree matrix $D_{ii} = \sum_j A_{ij}$,
  • normalized graph Laplacian $L = I_N - D^{-\frac{1}{2}}AD^{-\frac{1}{2}} = U \Lambda U^T$, where $\lambda_{max} \approx 2$ (see page 3, 2nd paragraph, not sure which matrix they are talking about)

Then, $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ has eigenvalues in the range [0, 2]. How?

",49338,,2444,,10/23/2021 17:48,10/23/2021 17:48,"Why does $I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ have eigenvalues in the range [0, 2]?",,1,0,,,,CC BY-SA 4.0 30471,1,30484,,8/31/2021 14:37,,0,51,"

I am trying to implement a toy VAE project. My goal is to use a VAE to model the moon dataset from scikit-learn, with an extra constant (but noisy) z-dimension.

To this end I use an approximate posterior with the form of a beta distribution and a uniform prior in a 1D latent space, because essentially the data is 1D. The decoder is a NN-parameterized gaussian.

I cannot get it to work using the simple ELBO.

I tried so far :

  • Increasing the number of monte carlo samples in the SGVB
  • Various deterministic pretrainings which tend to raise nans
  • Increasing the width or depth of the networks
  • Gradient clipping
  • learning rate annealing
  • Remove the noise in the data and perform Batch gradient descent instead of mini-batch
  • ...

I use layers of residual blocks with Tanh nonlinearities, whose outputs are $\log \alpha$ and $\log \beta$ for the encoder, $\mu$ and $\log \sigma$ for the decoder.

I am starting to wonder whether the distribution is actually hard to model, because I ran out of bugs to fix and strategies to improve training.

Are some low dimensional distributions known to be hard to model this way ?

Additionally, what obvious or non obvious mistakes could I have made ?

ADDENDA

Code to generate the data:

# Adapted from sklearn.datasets.make_moons
import numpy as np
from numpy.random import default_rng

def make_moons(n_samples=100, noise=None):
    generator = default_rng()

    n_samples_out = n_samples // 2
    n_samples_in = n_samples - n_samples_out

    outer_circ_x = np.cos(np.linspace(0, np.pi, n_samples_out))
    outer_circ_y = np.sin(np.linspace(0, np.pi, n_samples_out))
    inner_circ_x = 1 - np.cos(np.linspace(0, np.pi, n_samples_in))
    inner_circ_y = 1 - np.sin(np.linspace(0, np.pi, n_samples_in)) - .5

    X = np.vstack([np.append(outer_circ_x, inner_circ_x),
                   np.append(outer_circ_y, inner_circ_y),
                   np.zeros(n_samples)]).T
    y = np.hstack([np.zeros(n_samples_out, dtype=np.intp),
                   np.ones(n_samples_in, dtype=np.intp)])

    if noise is not None:
        X += generator.multivariate_normal(np.zeros(3), np.diag([noise, noise, noise])**2, size=n_samples)

    return X, y

# create dataset
moon_coordinates, moon_labels = make_moons(n_samples=500, noise=.01)
moon_coordinates = moon_coordinates.astype(np.float32)
moon_labels = moon_labels.astype(np.float32)

# normalize dataset
moon_coordinates = (moon_coordinates-moon_coordinates.mean(axis=0))/np.std(moon_coordinates, axis=0)

UPDATE

I have found a mistake that can explain poor performance.

In my post I said that the data is basically 1D, yet when I create the dataset I normalize the standard deviation in every dimension. This increases the magnitude of the z noise, and all of a sudden the third dimension accounts for a lot of variance and my model tries to fit to this noise.

Removing the normalization dramatically increases the performance.

",36040,,36040,,9/1/2021 6:30,9/1/2021 6:31,Are some low dimensional distributions known to be hard to model with VAEs?,,1,2,,,,CC BY-SA 4.0 30473,2,,30455,8/31/2021 15:29,,2,,"

Short Intro

It's very common for people to think that Deep Learning is a "superior form" of Neural Network, a "smarter model". And then they try to use DL for solving simple tasks and they'll find more problems than solutions.

We might think of a plane as superior to a bike. But when we need to buy some bread for breakfast, taking a plane is not even overkill. It's just nonsense.

Your problem

In a similar way, I've seen people thinking of Reinforcement Learning as somehow superior to Supervised or Unsupervised Learning. So let's establish a baseline:

  • The choice between Reinforcement Learning or Supervised Learning is not about superiority, but rather the nature of the task and your available dataset.

Reinforcement Learning

Great when your problem can be modeled as an agent interacting with an environment. The agent will sense (input), process (policy) and act (output). The policy is learned based on the reward of each action.

In most RL tasks (like strategy games), you can't objectively rate each individual action. Instead, it takes multiple actions before the outcome is obtained. But you can't train your model when you can't measure its performance. So how do you train an RL model?

Policy:

A policy is like a function that maps states into actions. When you can't measure the performance of each individual action, you create a policy.

You let the agent interact with the environment (play the game) until you have a score. If the environment is stochastic (the game depends on luck), you might want several rollouts (play lots of times) before evaluating the policy.

So, to acquire a single data sample, you might need to let an agent play a game several times.

But if you can directly measure the agent's performance after each action, congratulations! You can probably save lots of computational power by modeling the problem as a:

Supervised Learning

The model is learned based on the input-output pairs. A training dataset.

LSTM

Great when your input is sequential and your output is a function of the previous state.

Random Forest

Great when your input is not sequential, but you have a lot of features.

Unsupervised Learning

Great when your input is not sequential and you want to discover hidden structure in your data.

I hope this helps you with both questions. :)

",49188,,49188,,8/31/2021 18:32,8/31/2021 18:32,,,,3,,,,CC BY-SA 4.0 30474,1,,,8/31/2021 15:36,,1,15,"

Everything is in the title.

Metric learning seems to be closer to our way of thinking than the best performing models (supervised learning CNNs-based models like resnet or efficientnet). I was looking for research papers that would have try a metric learning-based model for classic classification task on the Imagenet dataset benchmark but I could not find it. That is why I am asking the question here.

",41801,,18758,,9/1/2021 0:54,9/1/2021 0:54,is there any proof that metric learning cannot achieve better on image classification task than accepted models (resnet etc)?,,0,0,,,,CC BY-SA 4.0 30475,2,,30470,8/31/2021 16:44,,0,,"

The proof of the first statement can be found in these Lecture Notes.

Have a look at the proof of Claim 1.
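
A short sketch of the remaining step: Claim 1 in those notes states that the eigenvalues $\lambda$ of the symmetric normalized Laplacian $L_{sym} = I_N - D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$ lie in $[0, 2]$. Since

$$I_N + D^{-\frac{1}{2}}AD^{-\frac{1}{2}} = 2I_N - L_{sym},$$

its eigenvalues are exactly $2 - \lambda$, and therefore also lie in $[0, 2]$.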

Concerning the renormalization trick, I do not see an easy way to justify this statement. The paper claims:

i.e. adding self-loops to the graph, improves accuracy, and we demonstrate that this method effectively shrinks the graph spectral domain, resulting in a low-pass-type filter when applied to SGC.

",38846,,,,,8/31/2021 16:44,,,,0,,,,CC BY-SA 4.0 30476,2,,30409,8/31/2021 16:55,,2,,"

The terminologies can be confusing because of the different ways authors use them. The bottom line is this

The Synthesis task basically refers to creating or synthesizing new data. Creation of data can be purely deterministic, e.g.

.. we provide a written sentence .. to emit an audio waveform containing a spoken version of that sentence

But, a vast majority of the time, such creation has an ..

.. added qualification that there is no single correct output for each input, and we explicitly desire a large amount of variation

which is done by a Statistical Sampling procedure. This procedure can range from something as simple as IID sampling from $\mathcal{N}(\mu, \sigma)$ to more complex MCMC methods.

So basically, "Synthesis" is a broader task that may contain a "Sampling" procedure.

",11030,,,,,8/31/2021 16:55,,,,0,,,,CC BY-SA 4.0 30477,2,,30284,8/31/2021 18:05,,1,,"

Sutton and Barto explain it themselves in section 5.9. I post it with a bit of context. The equation you're looking for is 5.13.

",49455,,,,,8/31/2021 18:05,,,,0,,,,CC BY-SA 4.0 30478,2,,30465,8/31/2021 20:34,,1,,"

The importance sampling ratio is changing the measure of the expectation. We have the behaviour policy that generates trajectories in the environment, so we can calculate $\mathbb E_b[G_t|S_t = s_t] = v_b(s_t)$, but that's not what we want to calculate, we want to calculate $\mathbb E_\pi[G_t|S_t = s_t] = v_\pi(s_t)$. So, we need to find a scaling factor, $\rho$, that we can use in the first expectation to get to the second:

$$ \mathbb E_b[\rho G_t|S_t = s_t] = \mathbb E_\pi[G_t|S_t = s_t] = v_\pi(s_t) $$

That scaling factor is the importance sampling ratio. In your example, the first equation, $E[G_t|s]$, is something we aren't interested in because we don't want to improve the behaviour policy.
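
To see the change of measure explicitly, here is a sketch written as a sum over trajectories $\tau$ starting from $s_t$, using the fact that $\rho_{t:T-1}$ (the product of per-step probability ratios) equals the ratio of trajectory probabilities because the transition dynamics cancel:

$$\mathbb E_b[\rho_{t:T-1} G_t \mid S_t = s_t] = \sum_\tau \Pr(\tau \mid b)\, \frac{\Pr(\tau \mid \pi)}{\Pr(\tau \mid b)}\, G(\tau) = \sum_\tau \Pr(\tau \mid \pi)\, G(\tau) = \mathbb E_\pi[G_t \mid S_t = s_t]$$

So it is multiplying by $\rho$ (not dividing) that converts the expectation under $b$ into the expectation under $\pi$.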

",,user42664,,,,8/31/2021 20:34,,,,0,,,,CC BY-SA 4.0 30480,1,,,8/31/2021 23:18,,0,206,"

I am trying to solve a classification problem by implementing the Least Squares algorithm in Python. To solve this problem, I am implementing the linear algebra formula to train the classifier, which is $w = (X^TX)^{-1}X^Ty$, where $w$ is the final weight vector of the classification function, $X$ is an input matrix of training data and $y$ a matrix of training labels. The classifier must be able to classify into three classes. As seen in the following snippet, during the preparation of the data, I gather my training data in matrix $X$, adding the number one at the end of each sample. I also gather my training labels in matrix $y$, coding each class as a sequence of -1 and 1.

    X = np.matrix(np.zeros((len(train_set),4)))
    y = np.matrix(np.zeros((len(train_set),3)))

    for i, row in train_set.iterrows():
        X[i] = [row[1], row[2], row[3], 1]
        if row[0] == 'H':
            y[i] = [1, -1, -1]
        elif row[0] == 'D':
            y[i] = [-1, 1, -1]
        else:
            y[i] = [-1, -1, 1]

What we have in matrix $y$ in the end, is a matrix that each column is a representation of each class and can tell us which samples belong to the class of the corresponding column. Having explained what each matrix in my program contains, my question is this, which of the following two implementations of the training process is correct and why?

At first, I went with the implementation seen in the snippet below.

    Xtranspose = X.T
    dotProduct = Xtranspose.dot(X)
    inverse = np.linalg.pinv(dotProduct)
    A = inverse.dot(Xtranspose)
    w = A.dot(y)
    
    for i, row in test_set.iterrows():
        r = np.matrix([row[1], row[2], row[3], 1]).dot(w)

As you can see, considering that $A = (X^TX)^{-1}X^T$, I multiply $A$ with the label matrix $y$ and then I use the weight vector $w$ in the loop to test the classifier on some test data. Later, though, after some research on the internet, I found this second implementation, which actually has a higher success rate.

    for i, row in test_set.iterrows():
        r = np.zeros([3])
        j = 0
        for column in y.T:
            w = A.dot(column.T)
            r[j] = np.matrix([row[1], row[2], row[3], 1]).dot(w)
            j += 1

Having calculated $A$ beforehand, I now calculate the weights for each column of $y$, for each class, separately. This second method, has a 10% greater success rate than the first one. So, why does the second training method have a better success rate? Is the second method the right training method?

",49544,,2444,,9/3/2021 16:13,1/26/2023 19:03,Which of the following two implementations of a Least Squares classifier in Python is correct?,,1,0,,,,CC BY-SA 4.0 30481,1,,,9/1/2021 1:38,,1,175,"

Consider the following paragraph from section 2: General Design Principles of the research paper titled Rethinking the Inception Architecture for Computer Vision

Avoid representational bottlenecks, especially early in the network. Feed-forward networks can be represented by an acyclic graph from the input layer(s) to the classifier or regressor. This defines a clear direction for the information flow. For any cut separating the inputs from the outputs, one can access the amount of information passing though the cut. One should avoid bottlenecks with extreme compression. In general the representation size should gently decrease from the inputs to the outputs before reaching the final representation used for the task at hand. Theoretically, information content cannot be assessed merely by the dimensionality of the representation as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.

This paragraph warned us to avoid bottlenecks and also representational bottlenecks. What does it mean by a bottleneck of/in a neural network and representational bottleneck?

",18758,,18758,,11/4/2021 10:29,11/4/2021 10:29,What does it mean by bottleneck and representational bottleneck in feedforward neural networks?,,1,3,,,,CC BY-SA 4.0 30482,2,,30481,9/1/2021 4:21,,1,,"

A bottleneck layer is a layer that contains few nodes compared to the previous layers. It can be used to obtain a representation of the input with reduced dimensionality.

If you create bottlenecks in the initial layers it can cause loss of information.

Why it is done?

Take the example of an image dataset that contains high-resolution images. High resolution means more pixels, which means more nodes in the input layer. Having more nodes requires more computational power to train the network. Hence, in such cases, we can use fewer nodes in the next layer. As the images are high resolution, we might not lose much important information.
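
As a minimal illustration of what a bottleneck layer looks like (the layer sizes here are arbitrary), a layer with far fewer units than its neighbours forces the representation through a narrow pinch point:

    import tensorflow as tf

    # 1024 input features squeezed through a 16-unit bottleneck
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(1024,)),
        tf.keras.layers.Dense(16, activation="relu"),   # bottleneck layer
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])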

",21642,,21642,,9/1/2021 7:46,9/1/2021 7:46,,,,1,,,,CC BY-SA 4.0 30483,2,,21411,9/1/2021 6:17,,2,,"

Some suggestions:

  1. You have a loop in which illegal moves by the RL agent are ignored. In other words, when the agent makes illegal moves, it is not punished, nor are there any +/- rewards for it whatsoever. In my program I treat illegal moves the same as losing the game.

  2. Try to play a few "pre-moves" to make the game easier. For example I start with this (now is for Player 'X' to move):

    Then you may see convergence sooner. Always a good idea to start from a dead-easy case.

  3. Notice that the optimal value is not the "perfect score". If you take win=2, lose=-2, draw=1, then the optimal score for the above game for Player 'X' is 1.5, not 2.0. (You can check the math).

    I wrote a small Python program to calculate the optimal score for an RL agent playing against a random opponent (both players assumed to never make illegal moves). See this answer.

  4. For me it took much more than 50K games to see convergence. It's more like 1-2M games. (Correction: With more trials, I found that policy gradient seems unable to converge to the optimal value, but it did improve over time to reach a sub-optimal score.)

  5. Your neural network may be a bit too small.

Remark: This is a good question and also very significant to artificial intelligence, as Tic Tac Toe is a game suitable for logic-based agents, and it is important to see how deep learning performs in such "logical" domains. My own research is focused on combining logic structure with deep learning.

My code is here: https://github.com/Cybernetic1/policy-gradient

",17302,,17302,,9/11/2021 4:12,9/11/2021 4:12,,,,1,,,,CC BY-SA 4.0 30484,2,,30471,9/1/2021 6:31,,0,,"

I have found a mistake that can explain poor performance.

In my post I said that the data is basically 1D, yet when I create the dataset I normalize the standard deviation in every dimension. This increases the magnitude of the z noise, and all of a sudden the third dimension accounts for a lot of variance and my model tries to fit to this noise.

Removing the normalization dramatically increases the performance.

",36040,,,,,9/1/2021 6:31,,,,0,,,,CC BY-SA 4.0 30486,2,,30480,9/1/2021 7:21,,0,,"

The first implementation is better. In the second one, you calculate the weights based on information about a single class only; the $w$ inside the loop will be primed to find only that class. This is equivalent to training three models, each trying to predict confidence in one class only. The first one would be able to do it for all three classes at once.

A couple of extra points:

  • It's probably better to use np.array instead of np.matrix (Checkout the note here https://numpy.org/doc/stable/reference/generated/numpy.matrix.html)
  • For classification, it's more common to use cross entropy as the loss function, instead of the L2 loss. The normal equations work with the L2 loss.
  • I can't see that from the example, but it's worth normalising the features.
",,user42664,,,,9/1/2021 7:21,,,,1,,,,CC BY-SA 4.0 30487,1,,,9/1/2021 7:27,,5,64,"

In Salimans et al, 2016, the authors argue that ES should be considered a competitive alternative to MDP-based RL algorithms like Q-Learning, TRPO.

However, in practice, I notice that more often than not ES takes far more episodes to converge than MDP-based algorithms. So what would still be a reason to consider those, apart from pure academic interest?

The authors mention that ES will show less variance in long-horizon tasks, but didn't give an example. Is this aspect crucial?

",34075,,,,,9/4/2021 16:40,When would you use Evolutionary Strategies over Step-Based Reinforcement Learning,,1,0,,,,CC BY-SA 4.0 30488,1,,,9/1/2021 7:31,,1,103,"

I know only about the Pearson's correlation coefficient in literature.

Covariance between two random variables $X$ and $Y$ is defined as

$$Cov[X, Y] = \mathbb{E}[(X - \mathbb{E}[X])(Y-\mathbb{E}[Y])]$$

(Linear) Correlation between two random variables $X$ and $Y$ is defined as

$$Corr[X, Y] = \dfrac{Cov[X, Y]}{\sigma(X)\sigma(Y)}$$

Covariance is a measure of association between two random variables, whereas correlation measures how dependent they are on each other.

Consider the following excerpt mentioning "correlation structure" from section 2: General Design Principles of the research paper titled Rethinking the Inception Architecture for Computer Vision

Theoretically, information content cannot be assessed merely by the dimensionality of the representation as it discards important factors like correlation structure; the dimensionality merely provides a rough estimate of information content.

What is meant by the "correlation structure" mentioned here? Is it a graph on input random variables? Is it in any way related to the aforementioned correlation?

",18758,,,,,9/1/2021 7:51,What is meant by correlation structure?,,0,0,,,,CC BY-SA 4.0 30491,1,,,9/1/2021 8:41,,5,900,"

Self-attention layers have 4 learnable tensors (in the vanilla formulation):

  • Query matrix $W_Q$
  • Key matrix $W_K$
  • Value matrix $W_V$
  • Output matrix $W_O$

Nice illustration from https://jalammar.github.io/illustrated-transformer/

However, I do not know how should one choose the default initialization for these parameters.

In works devoted to MLPs and CNNs, one chooses Xavier/Glorot or He initialization by default, as they can be shown to approximately preserve the magnitude in the forward and backward pass, as shown in these notes.

However, I wonder, whether there is some study of good initialization for Transformers. The default implementation in Tensorflow and PyTorch use xavier/glorot.

Probably, any reasonable choice will work fine.

",38846,,2444,,11/30/2021 15:02,11/30/2021 15:02,Is there a proper initialization technique for the weight matrices in multi-head attention?,,0,0,,,,CC BY-SA 4.0 30493,1,30501,,9/1/2021 11:34,,0,119,"

I have a basic understanding of how a cell and a layer of an LSTM work. However, I get confused by what "number of units" (as termed in TensorFlow) exactly means. A unit is, as far as I understand, one "instance" of an LSTM, consisting of $t$ cells for a sequence of length $t$. When I have more than one unit, do these work in parallel (i.e. 10 units not interacting with each other) or sequentially (the output of unit 1 is the input of unit 2, and so on)?

",49559,,18758,,9/1/2021 12:17,9/1/2021 19:15,Do LSTM in tensorflow work sequentially or in parallel,,1,1,,,,CC BY-SA 4.0 30494,2,,30464,9/1/2021 12:09,,1,,"

Precision and Recall are concepts that have been introduced in the field of information retrieval. Imagine you have a large set of documents, and you want to find the ones that are relevant to a particular issue.

You can be sure to find all relevant documents if you simply return the whole lot -- you won't miss a single relevant document. So your recall is 1.0, the maximum: you got all the relevant ones. That you also got a large number of irrelevant ones is not important for recall, but means your query wasn't very efficient (and actually quite pointless!).

If you do get all documents, your precision will be low: the number of retrieved documents that are relevant is the same, but the denominator is now much bigger, and depending on how many relevant documents there are, your precision value is small.

The opposite extreme is to not return any documents: now your precision is technically undefined (zero divided by zero), but your recall is zero (as you don't get any relevant documents).

So in an ideal world, you want to optimise both precision and recall (which is why usually a third metric is used which combines the two, the F-score).
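
In code, starting from the counts of a binary retrieval/classification task, a minimal sketch looks like this:

    def precision_recall_f1(true_pos, false_pos, false_neg):
        # precision: fraction of retrieved documents that are relevant
        precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
        # recall: fraction of relevant documents that were retrieved
        recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
        # F-score: harmonic mean of precision and recall
        f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
        return precision, recall, f1

    # "return the whole lot": perfect recall, poor precision
    print(precision_recall_f1(true_pos=10, false_pos=990, false_neg=0))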

Now, you can only calculate precision and recall if you know what the correct values are already, so you need to know how many documents are relevant. Obviously you will need to have inspected all documents to decide whether they are relevant to your issue or not. This is usually a subjective value judgment, as relevance tends to be non-binary. If I'm looking for articles on the efficiency of petrol engines, will those about Diesel engines be relevant? Probably, if you're looking at a general dataset. But if the documents are all about the efficiency of different types of engines, then I would most likely not be interested in those about Diesel engines.

From Information Retrieval, the concepts of precision and recall have been generalised to any binary classification tasks, where you can judge the quality of the classification by using these metrics. So you might come across situations where the 'relevance' criterion is more objective than whether a document is relevant to a certain topic. But that depends on the context of what you are classifying.

",2193,,,,,9/1/2021 12:09,,,,0,,,,CC BY-SA 4.0 30495,1,,,9/1/2021 13:39,,0,19,"

I have an old video and I want to keep the person's face in the video, but I want to transfer my facial expressions to that video. Is there any better alternative to the first order motion model for that task? I tried DeepFaceLab, but it has a rather steep learning curve.

",31427,,,,,9/1/2021 13:39,Expression Transfer Deep Learning Problem,,0,2,,,,CC BY-SA 4.0 30497,1,,,9/1/2021 15:08,,1,277,"

Are there any effective and robust solutions for handling scaling and rotation in image recognition with neural networks (NNs)?

I see tons of sources on the Web explaining how neural networks are used for image recognition, but all of them avoid the topic of scaled or rotated images. A network trained on patterns won't recognize a pattern if it is scaled or rotated.

Of course, there are some intuitive/naive workarounds/approaches:

  1. Brute force – you can rotate and scale the image until the NN recognizes it. Too expensive.
  2. You may train the NN on all cases of rotated and scaled images; this could be hard and may weaken the NN.
  3. You may teach the NN that some images (which are rotations and scalings of the original image) form clusters, and teach it to recognize clusters and interpolate/extrapolate them. A little bit tricky in coding and debugging.
  4. For rotation, you can move to polar coordinates; this gives a kind of invariant both for recognizing patterns and for building histograms for specific portions of the image. But for this you need to recognize the pivot point, and again this is quite expensive.

Are there any better solutions, ideas, hints, references?

(I read some answers there to the rotational problem, but what I saw doesn't cover the topic).

",49564,,,,,9/1/2021 15:08,Image recognition neural network: scaling and rotation,,0,3,,,,CC BY-SA 4.0 30498,1,30500,,9/1/2021 16:58,,1,1009,"

In many places, it is said that PPO and Actor-Critic methods in general use TD updates, but in the loss function for PPO, the value function loss component uses the difference between the output of the value function and the value target, which I can only assume is the discounted sum of rewards that can only be obtained at the END of the episode?

So this might be a moment of stupidity for me, but

  1. Is the value target in PPO set only at the end of the episode using the discounted sum of rewards? or is there a secret way of setting these value targets that I am missing?

  2. If a learning update indeed takes place every learning step (before the end of the episode), then how does this TD-learning happen - does it use some other approximate of the value target?

Thank you. Please help.

Sincerely, a frustrated student

",21513,,,,,9/2/2021 7:56,PPO when does the update happen?,,1,0,,,,CC BY-SA 4.0 30500,2,,30498,9/1/2021 18:36,,2,,"

Updated response to include more information from the discussion.

Monte Carlo vs. Temporal Difference (TD)

Let's start with the distinction between these two. When you have a sequence of rewards observed from the environment and a neural network predicting the value of each state, then you can create target values that your predictions should move closer to in a couple of ways. You can look at the full episode and use the actual observed rewards with discounting to create your target, this is called the Monte Carlo estimation. This target value is an estimation of the value function from your initial state.

$$ \widehat v_\pi(s_0) = y^{MC} = r_1 + \gamma r_2 + \gamma^2 r_3 + \cdots + \gamma^{T-1}r_T $$

Then, you update the NN parameters to get better at predicting the value of the state based on this estimate. Another way is to update your neural network sooner by only using a partial trajectory (of length $k$) and rely on your NN to estimate the rest of the trajectory.

$$ \widehat v(s_0) = y^{TD} = r_1 + \gamma r_2 + \gamma^2 r_3 + \cdots + \gamma^{k-1} r_k + \gamma^k \widehat v(s_k) $$

The last term uses the NN to estimate the remaining discounted rewards from the episode without observing it.

Proximal Policy Optimisation

You don't need to wait until the end of an episode to receive rewards. If you have access to intermediate rewards, then you can update the value network sooner. PPO uses the advantage function when calculating the objective (and the loss), which is also done similarly to the TD approach. Both the n-step approach and Generalised Advantage Estimation rely on the NN to fill in some of the unobserved values.
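
A minimal sketch of how such targets can be formed from a partial trajectory with Generalised Advantage Estimation (here rewards[t] is the reward observed after step t, and values holds the value network's predictions V(s_0)..V(s_k), with values[-1] acting as the bootstrap estimate, or 0 if the last state is terminal):

    import numpy as np

    def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
        values = np.asarray(values, dtype=float)
        advantages = np.zeros(len(rewards))
        gae = 0.0
        for t in reversed(range(len(rewards))):
            # one-step TD error, bootstrapped with the value network
            delta = rewards[t] + gamma * values[t + 1] - values[t]
            gae = delta + gamma * lam * gae
            advantages[t] = gae
        # targets for the value loss: advantage plus the current value estimate
        value_targets = advantages + values[:-1]
        return advantages, value_targets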

Original paper on PPO gives a nice description of the algorithm (the version with clipping the probability ratio is probably easier to understand): Proximal Policy Optimization Algorithms (Schulman et al., 2017)

OpenAI has a good description of general policy gradient algorithms and PPO as well, it's worth checking out.

",,user42664,,user42664,9/2/2021 7:56,9/2/2021 7:56,,,,4,,,,CC BY-SA 4.0 30501,2,,30493,9/1/2021 19:15,,0,,"

The layer contains multiple parallel LSTM units (in TensorFlow this is the size of the hidden state, i.e. the dimensionality of the output space); they are identical in structure and, working in parallel, learn different things.
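
A quick sketch to make that concrete (the shapes are made up):

    import tensorflow as tf

    # units=10 -> hidden/cell state of size 10, applied across the time dimension
    layer = tf.keras.layers.LSTM(units=10)
    x = tf.random.normal((32, 7, 3))   # (batch, timesteps, features)
    out = layer(x)
    print(out.shape)                   # (32, 10): one 10-dimensional output per sequence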

",49564,,,,,9/1/2021 19:15,,,,0,,,,CC BY-SA 4.0 30502,1,31512,,9/1/2021 20:35,,2,293,"

I am planning to use OpenAI gym for my experiment in real life. In my experiment design, by the limits of a real-life scenario, I can only receive the state information or the rewards about 2-3 timesteps behind when the action has happened (in OpenAI gym term, ~3 cycles of step(action) function has occurred). For example, by the time the state at timestep i is observed, an action at timestep i+3 would have happened.

From how I understand the function step(action), it needs to return next_state, reward, done every step, and the agent will learn from the state -> action -> next state -> reward tuple. So I was wondering: can I cache the action for future use along with the state at the correct time step in OpenAI Gym, or delay the state observation/reward instead? Could the agent still be able to learn?

I am experimenting with PPO, TD3, and SAC, which all use actor-critic networks. Would the network eventually be trained well enough that it would still perform well with the delayed state observation?

",46395,,,,,9/2/2021 13:35,Delayed state observation or caching action in OpenAI gym. Can it still learn?,,1,0,,,,CC BY-SA 4.0 30506,1,,,9/1/2021 23:44,,2,183,"

I was asked an interesting question today by a student in a cybersecurity and information assurance program related to getting spammed by chatbots on snapchat. He's tried many conventional means of blocking them, but he's still getting overwhelmed:

  • Theoretically, are there lines of code that could disrupt processing, such as commands or syntactic symbols?

My sense is no — the functions would be partitioned such that linguistic data would not execute. But who knows.

  1. Many programmers are sloppy.
  2. I've had friends in video game QA produce controller inputs that programmers claim is impossible — until demonstrated.
  • Theoretically, is it possible to "break" a chatbot in the sense of the Voight-Kampff test thought experiment?

This was, of course, popularized via one of the most famous films on AI, BladeRunner, adapted from one of the most famous books, ElectricSheep, and extended recently via WestWorld. In these contexts, it's a psychological test designed to send the automata into loops or errors.

My question here is not related to "psychology" as in those popular media treatments, but linguistics:

  • Are there theoretically linguistic inputs that could send an NLP algorithm into infinite loops or produce errors that halt computation?

My guess is no, all the way around, but still a question potentially worth asking.

",1671,,2444,,12/12/2021 21:25,12/12/2021 21:25,Are there theoretically linguistic inputs that could send an NLP algorithm into infinite loops or break the chatbot?,,2,3,,,,CC BY-SA 4.0 31506,1,,,9/2/2021 1:32,,1,54,"

Is attention useful only in transformer/convolution layers? Can I add it to linear layers? If yes, how (on a conceptual level, not necessarily the code to implement the layers)?

",44456,,44456,,9/3/2021 11:58,9/3/2021 11:58,Are there any benefits of adding attention to linear layers?,,0,0,,,,CC BY-SA 4.0 31507,1,31514,,9/2/2021 5:14,,0,62,"

I am currently implementing a Pointer Network to solve a simple Knapsack Problem. However, I am a bit puzzled over the correct (or common, or "best") way to give the agent the option to stop taking items (terminate the episode). Currently, I have done it in 2 ways: adding raw dummy features or adding encoded dummy features (the dummy features are all zeros). If the agent selects the dummy item, then the agent stops taking items and the episode is terminated.

I trained both methods for 500K episodes and evaluated their performance on a single predefined test case in each episode, after applying the gradient update. I found that concatenating dummy features with the encoded features yielded a higher score earlier, but also scored 0 very often. On the other hand, adding the dummy features to the raw features learned to maximize the score very slowly. Therefore, my questions are:

  1. Does adding the raw dummy features make learning slower because of the additional encoding layer that has to be learned?

  2. What is the most correct (or common or arguably best) way to give the agent the option to terminate the episode (in this case stop taking item)?

",44920,,2444,,9/5/2021 11:24,9/6/2021 6:19,"How to represent ""terminate episode"" for Knapsack problem with a Pointer Network?",,2,0,,,,CC BY-SA 4.0 31508,1,,,9/2/2021 7:26,,1,22,"

Consider the following paragraph from the section 3: Background of the research paper titled Generative Adversarial Text to Image Synthesis by Scott Reed et al.

Goodfellow et al. (2014) prove that this minimax game has a global optimium precisely when $p_g = p_{data}$, and that under mild conditions (e.g. G and D have enough capacity) $p_g$ converges to $p_{data}$. In practice, in the start of training samples from D are extremely poor and rejected by D with high confidence. It has been found to work better in practice for the generator to maximize $\log(D(G(z)))$ instead of minimizing $\log(1 −D(G(z)))$

I am guessing the bolded portion ("samples from D") should be replaced by "samples from G". Am I correct? If not, where am I going wrong?

",18758,,,,,9/2/2021 7:26,Is the following a typo or am I understanding wrongly regarding discriminator?,,0,1,,,,CC BY-SA 4.0 31509,1,,,9/2/2021 8:07,,1,55,"

Is there a crossover that also considers that every index in the vector also influences the cost function?

I have two vectors $v_1=[A_1, A_2, A_3, A_4, A_5]$ and $v_2=[A_5, A_3, A_2, A_1, A_4]$.

The fitness function considers the index where an element is located. So, basically, every vector represents a matching solution. Using a recombination method would deliver a new combination, but it won't be close to the previous solutions, nor would it consider what makes the parents better than other solutions.

In TSP, the absolute indices don't really matter, only the sequence of cities does.

",48548,,2444,,9/2/2021 16:53,9/2/2021 16:53,Is there a crossover that also considers that every index in the vector also influences the fitness function?,,1,5,,,,CC BY-SA 4.0 31510,2,,30506,9/2/2021 8:22,,1,,"

While it is certainly possible to have NLP algorithms ending up in infinite loops, chatbots will typically not be affected by this.

A first-year pitfall you learn is in the construction of grammars. If you do a top-down analysis of a sentence, the following grammar rule will send it into an infinite loop:

NP -> NP of NP | det N | N

This allows a noun phrase to be expanded to "noun phrase of noun phrase"; and the parser next tries to expand the non-terminal symbol 'NP', which handily expands to a rule which has the very same symbol at the beginning.
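
To make that concrete, a naive top-down (recursive-descent) parser for this rule calls itself before consuming a single token, so it recurses forever; a deliberately broken sketch:

    import sys
    sys.setrecursionlimit(100)  # so the demo fails fast instead of hanging

    def parse_np(tokens):
        # NP -> NP "of" NP : left-recursive, so we immediately call ourselves...
        left = parse_np(tokens)   # ...without having consumed any input
        # (the "det N" and "N" alternatives are never reached)
        return left

    try:
        parse_np(["the", "house", "of", "cards"])
    except RecursionError:
        print("left recursion sent the parser into an infinite loop")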

However, modern day chatbots don't tend to use parsers, as their input is not commonly well-formed enough to allow application of grammars. They either use pattern matching (Eliza-style), or machine learning, neither of which would be susceptible to this issue.

And commercial chatbots are typically tested with all kinds of junk input to make sure they don't break or crash (In my previous job I designed chatbots for five years).

One possibility I can think of is a poorly coded pre-processing step, where using e.g. non-ASCII characters or extremely long nonsense words might lead to problems (e.g. buffer overflows), but modern programming languages make it increasingly difficult to actually break anything this way. And as you rightly say, you would separate input from executable code, so no Bobby Tables issues should happen.

",2193,,,,,9/2/2021 8:22,,,,0,,,,CC BY-SA 4.0 31512,2,,30502,9/2/2021 13:35,,1,,"

To turn this delayed action approach into a normal, theoretically valid MDP, add a "pending action resolution" array to the state representation, and ensure that the state transition (or step code) manages this array, with each new action pushed into it and the array shifting down as it goes. The array length should be, at minimum, the number of timesteps required to fully resolve a previous action.

This allows your agent to see the actions it has recently performed and account for them in calculations without any adjustment needed to the core MDP formulation or standard agent designs.

If delays are stochastic or vary (e.g. a scenario where a warehouse is ordering stock), then add more data to this pending action resolution array as necessary, such as the expected resolution time or the time since each action was taken.
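
A rough sketch of the fixed-delay case as a Gym wrapper (this assumes scalar actions, a 1D observation vector, and the older Gym API where step returns a 4-tuple; real code would also need to extend the declared observation space):

    import numpy as np
    import gym

    class PendingActionWrapper(gym.Wrapper):
        """Append the last `delay` actions to every observation."""
        def __init__(self, env, delay=3):
            super().__init__(env)
            self.delay = delay
            self.pending = np.zeros(delay)

        def reset(self, **kwargs):
            self.pending[:] = 0
            obs = self.env.reset(**kwargs)
            return np.concatenate([obs, self.pending])

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            # shift the pending-action array down and push the newest action
            self.pending = np.roll(self.pending, -1)
            self.pending[-1] = action
            return np.concatenate([obs, self.pending]), reward, done, info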

",1847,,,,,9/2/2021 13:35,,,,5,,,,CC BY-SA 4.0 31513,2,,31509,9/2/2021 13:57,,2,,"

Really you're entering the world in which you probably want to develop genetic operators that have meaning in your domain. You mention TSP, and correctly point out that the absolute position within the chromosome doesn't matter. There are other permutation problems where this isn't true. The Quadratic Assignment Problem (QAP) is one example. Like TSP, QAP solutions are represented as permutations of integers, and there are plenty of known recombination operators that work on permutations. But you need different operators for these cases. What works well for TSP won't be very good for QAP, and vice versa.

For reference, a good place to start might be the Cycle Crossover (CX) operator. It's defined on permutations, and basically works by looking for "cycles" -- subsets of the indices where the two parents share the same values. For example, if you have parents

P1=<2 3 7 1 4 6 5 8> 
P2=<4 1 2 6 8 5 3 7>

there's a cycle at the index set {0, 2, 4, 7}. Both parents contain the same four values at those positions in the string -- 2, 7, 4, and 8. You could create two new offspring by exchanging the values at those positions, yielding

C1=<4 3 2 1 8 6 5 7> 
C2=<2 1 7 6 4 5 3 8>

This gives me two children, both of whom have the property that they inherited information from their parents related to the absolute position of each value in the string.
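
For reference, a small sketch of CX on permutations represented as Python lists (swapping the first cycle so that the output matches C1 and C2 above):

    def cycle_crossover(p1, p2):
        c1, c2 = list(p1), list(p2)
        visited = [False] * len(p1)
        cycle_id = 0
        for start in range(len(p1)):
            if visited[start]:
                continue
            i = start
            while not visited[i]:
                visited[i] = True
                # exchange values between the children on alternating cycles
                if cycle_id % 2 == 0:
                    c1[i], c2[i] = c2[i], c1[i]
                # follow the cycle: find where p2's value at this index sits in p1
                i = p1.index(p2[i])
            cycle_id += 1
        return c1, c2

    p1 = [2, 3, 7, 1, 4, 6, 5, 8]
    p2 = [4, 1, 2, 6, 8, 5, 3, 7]
    print(cycle_crossover(p1, p2))  # ([4, 3, 2, 1, 8, 6, 5, 7], [2, 1, 7, 6, 4, 5, 3, 8])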

That may or may not be exactly what you need for your problem. Maybe you don't have true permutations, or maybe something about that operator doesn't work well with your particular problem. The main point is that you need to build operators with intent. The idea has been called "respectful recombination". Understanding what's important in your representation and devising operators that try to respect that property as they pass down information is the name of the game.

",3365,,,,,9/2/2021 13:57,,,,1,,,,CC BY-SA 4.0 31514,2,,31507,9/2/2021 14:57,,1,,"

I do not think there is one standard way to do this, it will depend too much on context. Ultimately you want the agent to output a stop action that is different from a continue action.

That stop/continue choice could either be part of the existing action encoding, additional data in parallel with the action sequence, or an entirely separate action choice on time steps alternating with the main action choices.

I do not know anything about pointer networks, so not sure what your action coding options are. Your "dummy" object selection seems like a reasonable choice though, it is part of existing action encoding that the neural network can already output. In that sense it seems a lot like an <END> token that a seq2seq model might output for e.g. translation or summarisation language tasks.

An alternative might be to add a separate head to the pointer network, that output a stop classifier alongside each object choice. The combined action would be [selection, stop]. If you are feeding previous choice back into the RNN as input (it is not clear to me, but sampling seq2seq networks do this), you have free choice as to whether to use the combined action (with the stop flag as a raw confidence in $[0,1]$), or just continue with previous selection as only feedback.

Finally you could use a different NN for deciding whether to stop or continue and feed it the same sequence.

Which is better? I cannot say, but your dummy object selection appears to work already to some degree, so I would stick with experimenting with that and the simple all-zeros token. The common 0 total rewards may be an issue with your RL agent's exploration, or maybe with a difficult reward metric. E.g. can the agent score less than zero if it overfills the knapsack? If so, not trying anything at all may sometimes look good to the agent, and it will need more training so it can properly predict when this is not the case.

",1847,,,,,9/2/2021 14:57,,,,1,,,,CC BY-SA 4.0 31517,2,,30506,9/2/2021 16:26,,1,,"

It all depends on your architecture.

What is a chatbot made of?

Most current commercial AI chatbots have an architecture somewhat like this:

 ┌────┐┌─────────┐┌────────┐┌─────────┐┌────────┐┌───┐
 │User││Messenger││Back-end││NLP (NLC)││Database││API│
 └─┬──┘└────┬────┘└───┬────┘└────┬────┘└───┬────┘└─┬─┘
   │        │         │          │         │       │  
   │Message │         │          │         │       │  
   │───────>│         │          │         │       │  
   │        │         │          │         │       │  
   │        │ Message │          │         │       │  
   │        │────────>│          │         │       │  
   │        │         │          │         │       │  
   │        │         │ Message  │         │       │  
   │        │         │─────────>│         │       │  
   │        │         │          │         │       │  
   │        │         │  Intent  │         │       │  
   │        │         │<─────────│         │       │  
   │        │         │          │         │       │  
   │        │         │       Intent       │       │  
   │        │         │───────────────────>│       │  
   │        │         │          │         │       │  
   │        │         │       Answer       │       │  
   │        │         │<───────────────────│       │  
   │        │         │          │         │       │  
   │        │         │          │ Call    │       │  
   │        │         │───────────────────────────>│  
   │        │         │          │         │       │  
   │        │         │          Response  │       │  
   │        │         │<───────────────────────────│  
   │        │         │          │         │       │  
   │        │ Answer  │          │         │       │  
   │        │<────────│          │         │       │  
   │        │         │          │         │       │  
   │ Answer │         │          │         │       │  
   │<───────│         │          │         │       │  
 ┌─┴──┐┌────┴────┐┌───┴────┐┌────┴────┐┌───┴────┐┌─┴─┐
 │User││Messenger││Back-end││NLP (NLC)││Database││API│
 └────┘└─────────┘└────────┘└─────────┘└────────┘└───┘

So the question is: What are the vulnerable points here?

  1. Messenger: Theoretically, the messenger should only forward the message, but it's usual for the front-end to have some security flaws, like breaking on some special characters.
  2. Back-end: If the message is not validated / sanitized, there might be some vulnerability to SQL injection.
  3. Most of the AI behind a Chatbot consists of NLCs (Natural Language Classifiers), NER (Named Entity Recognition) and other specific APIs (like a weather forecast). I don't see how the Machine Learning models can be attacked directly.
  4. But if the Chatbot directly accepts (or uses NER to extract) a user input, it could be used to extend the attack into the Database or APIs (like: "My name is Robert'); DROP TABLE students;--" - inspired by this xkcd comic).
    • The NER extracts the name="Robert'); DROP TABLE students;--"
    • That name is used as a query parameter in the Database query that checks whether the name exists.
    • The Database trusts your Back-end.
    • The Back-end attacks the Database with the injected code.

Paradox Loop

Another (more philosophical) way to bug the AI would be trying to cause a paradox loop which is well explained on this link.

",49188,,,,,9/2/2021 16:26,,,,3,,,,CC BY-SA 4.0 31518,2,,30458,9/2/2021 18:09,,1,,"

A time series forecast model based on lagged values of that same series exploits auto-correlation. A model based on other series exploits what the time-series literature calls cross-correlation. A general forecast that uses any number of lagged time series (including lags of the target series itself) is called vector autoregression (VAR).

Any of the other ML and AI approaches can use non-linear methods for the modelling and can draw on any number of factors other than lagged time series. In finance, for example, it is common to use factor-based investing (e.g. fundamental value data). In machine learning, we are interested in finding the features (feature selection) that best model the time-series outcome.
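
As a hedged illustration (the series here are synthetic, and statsmodels is assumed to be available), a vector autoregression uses lags of several series at once:

import numpy as np
from statsmodels.tsa.api import VAR

# Two synthetic, related series: y2 partly follows y1.
rng = np.random.default_rng(0)
y1 = np.cumsum(rng.normal(size=200))
y2 = 0.5 * y1 + rng.normal(size=200)
data = np.column_stack([y1, y2])

results = VAR(data).fit(maxlags=3)                          # lags of both series are candidate regressors
forecast = results.forecast(data[-results.k_ar:], steps=5)  # 5-step-ahead forecast for both series
print(forecast.shape)  # (5, 2)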

",49578,,,,,9/2/2021 18:09,,,,0,,,,CC BY-SA 4.0 31519,1,,,9/2/2021 19:01,,0,47,"

I have a task where I need to take "data points" which consist of collections of items. Each item needs to be categorised according to predefined categories. That's the easy part - my solution is to train a deep neural network with cross entropy loss. By the way, the reason I don't classify each item separately is because they acquire their meaning when they come together as a set.

The hard part is that each of these items also have a cluster label. Each cluster can only have items of one category in it, and there can be any number of clusters. Unsupervised clustering methods (applied after the neural network does the categorisation) work fairly well, but not well-enough for my needs. I'd like to:

A. Make use of the fact that I have the ground truth labelling for these clusters

B. Leverage my deep neural network because a lot of the "reasoning" required to solve the classification task will be conducive to the clustering task.

Answers which address at least one of those are useful to me.

EDIT

I realised I might be confusing people with this concept of cluster "labels". To clarify, this is no different from the standard way a classical unsupervised clustering algorithm returns its results. If I have N data points and feed them to a clustering algo, the algo might return N labels, one for each data point, each of which is an integer in [0, C-1] where C is the number of clusters. In my example we have the labels for a training dataset and want to make use of them during training. We cannot use softmax + cross-entropy loss because the cluster labels are permutation invariant.
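
To illustrate the permutation invariance with a small, hedged example (using scikit-learn's adjusted Rand index, which is one permutation-invariant way of comparing cluster assignments):

from sklearn.metrics import adjusted_rand_score

# Two labelings of the same 6 items: the cluster ids are swapped (0 <-> 1),
# but the grouping is identical, so a permutation-invariant score calls them equal.
labels_a = [0, 0, 1, 1, 2, 2]
labels_b = [1, 1, 0, 0, 2, 2]
print(adjusted_rand_score(labels_a, labels_b))  # 1.0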

",16871,,16871,,9/3/2021 8:59,9/3/2021 8:59,What are some machine learning frameworks for supervised clustering?,,0,8,,,,CC BY-SA 4.0 31520,1,31522,,9/2/2021 21:12,,0,138,"

I am training an object detection model, and I have some very highly unbalanced data annotations. I have almost 11,000 images, all with dimensions of 1024 $\times$ 1024. Within those images I have the following number of annotations:

  • Class 1 - 40,000
  • Class 2 - 25,000
  • Class 3 - 900
  • Class 4 - 500

This goes on for a few more classes.

As this is an object detection algorithm that was annotated with the annotation tool Label-img, there are often multiple annotations on each photo. Do any of you have any recommendations as to how to handle fine-tuning an object-detection algorithm on an unbalanced dataset? Currently, collecting more imagery is not an option. I would augment the images and re-label, but since there are multiple annotations on the images, I would be increasing the number of annotations for the larger classes as well.

Note: I'm using the Tensorflow Object Detection API and have downloaded the models and .config files from the Tensorflow 2 Detection Model Zoo.

",32750,,18758,,9/2/2021 22:52,9/2/2021 22:52,How to handle an unbalanced dataset when training object detection algorithms?,,1,0,,,,CC BY-SA 4.0 31521,2,,25406,9/2/2021 21:50,,1,,"

I am answering my own question here. The only additional thing I found was that the average accuracy across a batch of data was slightly higher when the batch contained fewer samples. The validation set was only 15% of the data, so its average accuracy came out slightly higher than that of the 70% used for training. I don't know why taking more samples lowers the average accuracy, or whether this was a bug in the accuracy calculation or the expected behaviour. Either way, if you have the same problem, one suggestion is to plot average accuracy vs. number of samples and see if this is the reason why you get a lower training accuracy.

",42357,,,,,9/2/2021 21:50,,,,2,,,,CC BY-SA 4.0 31522,2,,31520,9/2/2021 22:10,,1,,"

One thing to try first is Focal Loss. This particular loss works well for classification or object detection where the dataset is unbalanced and contains many classes. In short, the loss down-weights highly confident predictions and gives the model more room to learn from the other, less confident classes.
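
For intuition, a minimal NumPy sketch of (binary) focal loss with the commonly used defaults $\gamma = 2$ and $\alpha = 0.25$ (this is only an illustration, not the exact loss implemented in the TensorFlow Object Detection API):

import numpy as np

def binary_focal_loss(probs, targets, gamma=2.0, alpha=0.25, eps=1e-7):
    """probs: predicted probability of the positive class, targets: 0/1 labels."""
    p_t = np.where(targets == 1, probs, 1.0 - probs)      # probability of the true class
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    # (1 - p_t)^gamma down-weights easy, confidently-correct examples
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)))

# A confident correct prediction contributes almost nothing; the hard example dominates.
print(binary_focal_loss(np.array([0.95, 0.30]), np.array([1, 1])))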

You can read this blog to have more intuition about focal loss.

",49458,,,,,9/2/2021 22:10,,,,0,,,,CC BY-SA 4.0 31523,1,,,9/3/2021 1:56,,1,57,"

A loss function is a measure of how bad our neural network is. We can decrease the loss by proper training.

I came across the phrase "adaptive loss function" in several research papers. For example: consider the following excerpt from the "Introduction" of the research paper titled Generative Adversarial Text to Image Synthesis by Scott Reed et al.

By conditioning both generator and discriminator on side information, we can naturally model this phenomenon since the discriminator network acts as a "smart" adaptive loss function.

When can we call a loss function "adaptive"? Is it a mathematical property, or is it based solely on the context?

",18758,,2444,,9/3/2021 15:52,9/3/2021 15:52,"When can we call a loss function ""adaptive""?",,0,5,,,,CC BY-SA 4.0 31526,1,,,9/3/2021 7:38,,0,180,"

We come across the word "manifold" in artificial intelligence, especially in the domains where learning is done based on data instances.

What is the formal definition for manifold?

",18758,,2444,,9/3/2021 15:47,9/5/2021 11:41,What is the formal definition for manifold in artificial intelligence?,,2,1,,,,CC BY-SA 4.0 31527,2,,31526,9/3/2021 9:22,,3,,"

The definition is the same as in Mathematics and, I suppose, elsewhere:

it is a topological space such that the vicinity of each point is homeomorphic to a disk in $\mathbb{R}^n$ (note that the dimension $n$ has to be the same for all points $x$). This requirement is important, since not every imaginable geometric object satisfies it:

  • A sphere is a manifold, since one can draw a tangent plane in the vicinity of any point. It is "locally flat", and even we humans see the Earth as flat, since its radius of curvature is much greater than the visible distance (may the proponents of the flat-Earth theory forgive me).

  • Two intersecting lines are not a manifold: at any point except the intersection we have a 1-dimensional space, while at the intersection point there are two non-collinear tangent directions.

Natural examples emerging in machine learning are images, videos, or arbitrary data. One usually treats, say, an image as an object in $\mathbb{R}^{H \times W \times 3}$, where $H$ is the height, $W$ the width of the image, and $3$ the number of colour channels. But in fact, only a small subset of all objects in this high-dimensional space are real images, and they belong to some manifold of lower dimension.

It is a non-trivial question to tell what exactly the true dimensionality of data is. For MNIST, it is claimed that it is $3$ (instead of $28 \times 28 = 784$).

As good material on this topic, I recommend this lecture from a recent workshop.

Scikit-learn has a nice exposition as well.
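
As a small, hedged illustration of the idea that high-dimensional data can lie near a lower-dimensional manifold, scikit-learn's digits (64-dimensional vectors) can be embedded into 2 dimensions with a manifold-learning method such as Isomap:

from sklearn.datasets import load_digits
from sklearn.manifold import Isomap

X, y = load_digits(return_X_y=True)           # 1797 images, each a 64-dimensional vector
embedding = Isomap(n_components=2).fit_transform(X)
print(embedding.shape)                         # (1797, 2): a 2-D "coordinate" per image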

",38846,,2444,,9/3/2021 15:49,9/3/2021 15:49,,,,5,,,,CC BY-SA 4.0 31529,1,,,9/3/2021 16:55,,1,391,"

I'm researching spatio-temporal forecasting utilising GCN as a side project, and I am wondering if I can extend it by using a graph with weighted edges instead of a simple adjacency matrix with 1's and 0's denoting connections between nodes.

I've simply created a similarity measure and have replaced the 1's and 0's in the adjacency with it.

For example, let's take this adjacency matrix

$$A= \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} $$

It would be replaced with the following weighted adjacency matrix

$$ A'= \begin{bmatrix} 0 & 0.8 & 0 \\ 0.8 & 0 & 0.3 \\ 0 & 0.3 & 0 \end{bmatrix} $$

As I am new to graph NN's, I am wondering whether my intuition checks out. If two nodes have similar time-series, then the weight of the edge between them should be approximately 1, right? If the convolution is performed based on my current weights, will this be incorporated into the learning?

",49593,,2444,,9/5/2021 11:51,1/29/2023 3:04,Can I extend Graph Convolutional Networks to graphs with weighted edges?,,2,5,,,,CC BY-SA 4.0 31531,1,,,9/3/2021 20:32,,0,31,"

I am trying to better understand the Least Mean Squares algorithm, in order to implement it programmatically.

If we consider its weight updating formula $$w(n+1) = w(n) + \rho * \text{error}(i)x(i),$$ where $w(n + 1)$ is the new weight of the classifier function, $w(n)$ is its current weight and $x(i)$ is the $i$th element of a training dataset, what should $\rho$ be?

From what I have found online, $\rho$ is supposed to satisfy $0 < \rho < \frac{2}{\operatorname{trace}(X^TX)}$, where $X$ is a matrix with all the training data the algorithm has processed at that point. One idea I had was to take $\rho = \frac{1}{\operatorname{trace}(X^TX)} < \frac{2}{\operatorname{trace}(X^TX)}$, but I do not know if that is correct. Also, one characteristic of this value is that it changes with each iteration of the algorithm, as more samples are added to the matrix $X$.

So, what is a good value for $\rho$? Should it change during the execution of the algorithm or should it stay the same?

",49544,,2444,,9/6/2021 11:15,9/6/2021 11:15,What should the value of $ρ$ in the $w(n+1) = w(n) + \rho*\text{error}(i)x(i)$ formula of Least Mean Squares be?,,0,2,,,,CC BY-SA 4.0 31532,1,,,9/3/2021 21:20,,2,82,"

I'm currently learning Policy-gradient Methods for RL and encountered REINFORCE algorithm. I learned from this site : https://towardsdatascience.com/policy-gradient-methods-104c783251e0 that the gradient of the objective function is calculated as follows:

From what I understand, $\sum_{t=0}^{H}\nabla_{\theta}\log{\pi_{\theta}(a_{t}|s_{t})}$ is the sum over the entire trajectory and $\pi_{\theta}(a_{t}|s_{t})$ is the policy of the agent at time step $t$. However, in Sutton's book the gradient objective is defined differently.

There is only $\nabla \ln{\pi(A_t | S_t)}$ at time step $t$ and no sum over all time steps. So does the algorithm not consider the policy for the whole trajectory when updating, only the policy at a single step?

Furthermore, there is a $\gamma^{t}$ (discount) factor in the latter and not in the former. What is the reason for that?

Hopefully, someone can help me clarify this.

",49598,,49455,,9/17/2021 2:00,9/17/2021 2:00,REINFORCE differentiation on sum or single value?,,0,3,,,,CC BY-SA 4.0 31533,2,,31526,9/3/2021 22:25,,5,,"

A manifold is basically a geometric object where every small region can be mapped to a Euclidean space (i.e. a manifold is locally Euclidean). Think of a donut: any small region of it can be mapped to a Euclidean space, as shown in this image:

In the above picture, $M$ is the manifold, $\phi_\alpha, \phi_\beta$ are the mapping functions, and $U_\alpha, U_\beta$ are two open sets (small local regions). This donut is an example of a manifold. Similarly, we can think of circles, spheres, paraboloids, $\mathbb{R}^2$, $\mathbb{R}^3$, etc. as manifolds, because they all satisfy the above criterion (they are locally Euclidean).

Now, the question is why we are interested in manifolds in machine learning. In many machine learning applications, the data we interpret lies on a manifold or non-Euclidean domain. For example, in astrophysics the observational data often lies on a spherical domain. If we want to perform convolution over this spherical manifold to extract features, we can't just apply 2D convolution, since we have to take account of parallel transport, gauges, symmetries, etc. Similarly, we may want to perform convolution over more complex shapes like those in the figures to extract features.

There are methods like gauge-equivariant mesh CNNs, geodesic CNNs, etc. to deal with such kinds of data distribution.

Graphs also lie in a non-Euclidean domain, since the distance between any two nodes is not a straight line: we have to travel through the graph and count the number of edges to measure distance. There are many applications where data lies on a graph, for example, drug-drug interaction, community detection, molecule structure, friendship networks, recommendation systems, traffic forecasting, etc.

To perform convolution over graphs we have methods like ChebNet, GraphSAGE, graph attention network, etc.

Notes:

1) Parallel transport: A basic problem occurs when we try to compare two vectors at two different points on the same manifold: the two vectors belong to different Euclidean (tangent) spaces (see the first figure), so we cannot compare them directly. Parallel transport provides a mechanism to move vectors over a manifold and compare them. Note, however, that the result of parallel transport is path-dependent.

2) Gauge: A gauge is like a measurement apparatus used to specify tangent vectors in the tangent space of a manifold.

References:

  1. Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges

  2. To learn about geometric deep learning, this is a very good website; it also contains excellent video lectures on GDL.

Note: I intentionally skipped the rigorous mathematical definition of a manifold and instead tried to convey the underlying meaning. Please let me know if you want to know more about open sets, closed sets, topological spaces, topological manifolds, charts, atlases, etc.

",28048,,2444,,9/5/2021 11:41,9/5/2021 11:41,,,,0,,,,CC BY-SA 4.0 31534,1,,,9/4/2021 0:44,,2,90,"

I've read most of the posts on here regarding this subject, however most of them deal with gameboards where there are two different categories of single pieces on a board without walls etc.

My game board has walls and multiple instances of food. There are 8 different categories: walls, enemy food, my food, enemy power-up, my power-up, attackable enemies, threatening enemies, and current teammate.

I have one-hot encoded all of this data into a tensor of size (8, 16, 32), where (16, 32) is the size of the game grid. However, I'm not sure whether this is appropriate, since many of the categories have multiple occurrences within a single channel (walls, food). Is it appropriate to use one-hot encoding to represent categories in spatial data, where multiple ones may be present in a channel?
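
For reference, this is roughly how I build the encoding (a hedged NumPy sketch; `grid` holding exactly one category index per cell is a simplification, since some categories could in principle overlap):

import numpy as np

N_CATEGORIES, HEIGHT, WIDTH = 8, 16, 32
rng = np.random.default_rng(0)
grid = rng.integers(0, N_CATEGORIES, size=(HEIGHT, WIDTH))   # category index per cell

# One boolean plane per category; a plane may contain many 1's (walls, food, ...).
one_hot = np.stack([(grid == c) for c in range(N_CATEGORIES)]).astype(np.float32)
print(one_hot.shape)  # (8, 16, 32)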

The alternative I was considering was to use a CNN; however, many posts have said it is inappropriate for one-hot data. My reasoning was that, since the data is an abstract Boolean grid standing in for RGB frames, it might be appropriate.

Does anyone have any suggestions as to the best way to represent a spatial Boolean grid representing multiple categories for input to a network?

",49602,,,,,1/27/2023 11:08,How to embed game grid state with walls as an input to neural network,,1,0,,,,CC BY-SA 4.0 31535,1,31540,,9/4/2021 2:10,,1,852,"

I'm training a Tensorflow object detection model with approx. 7,500 images of two classes, with approx. 10,000 annotations per class. I'm using Tensorflow 2.6.0, in case that is relevant. I am using a Single Shot Detector (with a ResNet-50 backbone). The image dimensions are 1024 x 1024, and the batch size is set to 2. Training is being done on Ubuntu 20.04 with a GeForce RTX 2080 Super (GPU).

After beginning training, the process is starting out at loss numbers to be expected:

INFO:tensorflow:{'Loss/classification_loss': 2.1305692,
 'Loss/localization_loss': 0.6402807,
 'Loss/regularization_loss': 1.407957,
 'Loss/total_loss': 4.178807,
 'learning_rate': 0.014666351}
I0903 16:56:21.947736 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 2.1305692,
 'Loss/localization_loss': 0.6402807,
 'Loss/regularization_loss': 1.407957,
 'Loss/total_loss': 4.178807,
 'learning_rate': 0.014666351}
INFO:tensorflow:Step 200 per-step time 0.447s
I0903 16:57:06.592366 140581900665792 model_lib_v2.py:698] Step 200 per-step time 0.447s
INFO:tensorflow:{'Loss/classification_loss': 1.2596315,
 'Loss/localization_loss': 0.6752764,
 'Loss/regularization_loss': 3.0123177,
 'Loss/total_loss': 4.9472256,
 'learning_rate': 0.0159997}
I0903 16:57:06.592768 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 1.2596315,
 'Loss/localization_loss': 0.6752764,
 'Loss/regularization_loss': 3.0123177,
 'Loss/total_loss': 4.9472256,
 'learning_rate': 0.0159997}
INFO:tensorflow:Step 300 per-step time 0.452s
I0903 16:57:51.830375 140581900665792 model_lib_v2.py:698] Step 300 per-step time 0.452s
INFO:tensorflow:{'Loss/classification_loss': 1.0455683,
 'Loss/localization_loss': 0.5895866,
 'Loss/regularization_loss': 3.0799737,
 'Loss/total_loss': 4.715129,
 'learning_rate': 0.01733305}
I0903 16:57:51.830749 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 1.0455683,
 'Loss/localization_loss': 0.5895866,
 'Loss/regularization_loss': 3.0799737,
 'Loss/total_loss': 4.715129,
 'learning_rate': 0.01733305}

Up until about step 16,800, the loss is decreasing to these numbers:

INFO:tensorflow:{'Loss/classification_loss': 0.5526215,
 'Loss/localization_loss': 0.28333753,
 'Loss/regularization_loss': 0.24686696,
 'Loss/total_loss': 1.0828259,
 'learning_rate': 0.037849143}
I0903 18:59:14.666097 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.5526215,
 'Loss/localization_loss': 0.28333753,
 'Loss/regularization_loss': 0.24686696,
 'Loss/total_loss': 1.0828259,
 'learning_rate': 0.037849143}
INFO:tensorflow:Step 16700 per-step time 0.446s
I0903 18:59:59.247199 140581900665792 model_lib_v2.py:698] Step 16700 per-step time 0.446s
INFO:tensorflow:{'Loss/classification_loss': 0.4649979,
 'Loss/localization_loss': 0.28323257,
 'Loss/regularization_loss': 0.2433301,
 'Loss/total_loss': 0.9915606,
 'learning_rate': 0.037820127}
I0903 18:59:59.247609 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.4649979,
 'Loss/localization_loss': 0.28323257,
 'Loss/regularization_loss': 0.2433301,
 'Loss/total_loss': 0.9915606,
 'learning_rate': 0.037820127}
INFO:tensorflow:Step 16800 per-step time 0.446s
I0903 19:00:43.835976 140581900665792 model_lib_v2.py:698] Step 16800 per-step time 0.446s
INFO:tensorflow:{'Loss/classification_loss': 0.43402833,
 'Loss/localization_loss': 0.1641234,
 'Loss/regularization_loss': 0.24129395,
 'Loss/total_loss': 0.8394457,
 'learning_rate': 0.03779093}
I0903 19:00:43.836373 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.43402833,
 'Loss/localization_loss': 0.1641234,
 'Loss/regularization_loss': 0.24129395,
 'Loss/total_loss': 0.8394457,
 'learning_rate': 0.03779093}

However, starting at about 16,900, the model total_loss rapidly increases, up to numbers even higher than are shown below:

INFO:tensorflow:Step 16900 per-step time 0.446s
I0903 19:01:28.390861 140581900665792 model_lib_v2.py:698] Step 16900 per-step time 0.446s
INFO:tensorflow:{'Loss/classification_loss': 0.5590624,
 'Loss/localization_loss': 0.5160909,
 'Loss/regularization_loss': 338.40286,
 'Loss/total_loss': 339.478,
 'learning_rate': 0.03776155}
I0903 19:01:28.391232 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.5590624,
 'Loss/localization_loss': 0.5160909,
 'Loss/regularization_loss': 338.40286,
 'Loss/total_loss': 339.478,
 'learning_rate': 0.03776155}
INFO:tensorflow:Step 17000 per-step time 0.445s
I0903 19:02:12.936022 140581900665792 model_lib_v2.py:698] Step 17000 per-step time 0.445s
INFO:tensorflow:{'Loss/classification_loss': 0.7908556,
 'Loss/localization_loss': 0.7274248,
 'Loss/regularization_loss': 858.3554,
 'Loss/total_loss': 859.87366,
 'learning_rate': 0.037731986}
I0903 19:02:12.936432 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.7908556,
 'Loss/localization_loss': 0.7274248,
 'Loss/regularization_loss': 858.3554,
 'Loss/total_loss': 859.87366,
 'learning_rate': 0.037731986}
INFO:tensorflow:Step 17100 per-step time 0.452s
I0903 19:02:58.127156 140581900665792 model_lib_v2.py:698] Step 17100 per-step time 0.452s
INFO:tensorflow:{'Loss/classification_loss': 0.7510178,
 'Loss/localization_loss': 0.49337074,
 'Loss/regularization_loss': 2617.2888,
 'Loss/total_loss': 2618.5332,
 'learning_rate': 0.03770224}
I0903 19:02:58.127575 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.7510178,
 'Loss/localization_loss': 0.49337074,
 'Loss/regularization_loss': 2617.2888,
 'Loss/total_loss': 2618.5332,
 'learning_rate': 0.03770224}
INFO:tensorflow:Step 17200 per-step time 0.445s
I0903 19:03:42.625258 140581900665792 model_lib_v2.py:698] Step 17200 per-step time 0.445s
INFO:tensorflow:{'Loss/classification_loss': 1.1258743,
 'Loss/localization_loss': 0.45634705,
 'Loss/regularization_loss': 394886900.0,
 'Loss/total_loss': 394886900.0,
 'learning_rate': 0.037672307}
I0903 19:03:42.625638 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 1.1258743,
 'Loss/localization_loss': 0.45634705,
 'Loss/regularization_loss': 394886900.0,
 'Loss/total_loss': 394886900.0,
 'learning_rate': 0.037672307}
INFO:tensorflow:Step 17300 per-step time 0.445s
I0903 19:04:27.112154 140581900665792 model_lib_v2.py:698] Step 17300 per-step time 0.445s
INFO:tensorflow:{'Loss/classification_loss': 0.57859087,
 'Loss/localization_loss': 0.53405523,
 'Loss/regularization_loss': 383440770.0,
 'Loss/total_loss': 383440770.0,
 'learning_rate': 0.037642203}
I0903 19:04:27.112533 140581900665792 model_lib_v2.py:701] {'Loss/classification_loss': 0.57859087,
 'Loss/localization_loss': 0.53405523,
 'Loss/regularization_loss': 383440770.0,
 'Loss/total_loss': 383440770.0,
 'learning_rate': 0.037642203}

What could be the cause of this, and what would be the best way to go about fixing it?

",32750,,,,,9/4/2021 7:28,"Tensorflow object detection model total loss starts out good, but suddenly explodes up to high loss numbers",,1,0,,,,CC BY-SA 4.0 31536,1,,,9/4/2021 4:44,,2,498,"

I guess this problem is encountered by everyone trying to solve Tic Tac Toe with various flavors of reinforcement learning.

The answer is not "always win" because the random opponent may sometimes be able to draw the game. So it is slightly less than the always-win score.

I wrote a little Python program to calculate that. Please help verify its correctness and inform me if it has bugs or errors.

",17302,,,,,9/5/2021 9:56,What is the optimal score for Tic Tac Toe for a reinforcement learning agent against a random opponent?,,2,2,,,,CC BY-SA 4.0 31537,2,,31536,9/4/2021 4:44,,1,,"

Here I assume:

  • Both players avoid illegal moves perfectly
  • Player X always chooses the move with maximum expectation value
  • Player O chooses all available moves with equal probability

Result depends on the scoring scheme:

  • (This scheme is used in one version of Gym Tic Tac Toe)

    For win=20, draw=10, lose=-20:

    Optimal expectation value =

    • X plays first: 19.94791666666666...

    • O plays first: 19.164021164021...

  • For win=20, draw=0, lose=-20:

    Optimal expectation value =

    • X plays first: 19.89583333333333...

    • O plays first: 18.497354497354....

It also helps to verify the program with some pre-played board positions, included in the code.

Here is the program:

import math     # for math.inf = infinity

print("Calculate optimal expectation value of TicTacToe")
print("from the perspective of 'X' = first player.")
print("Assume both players perfectly avoid illegal moves.")
print("Player 'X' always chooses the move with maximum expectation value.")
print("Player 'O' always plays all available moves with equal probability.")
print("You may modify the initial board position in the code.")

# Empty board
test_board = 9 * [0]

# Pre-moves, if any are desired:
# X|O|
# O|O|X
# X| |
#test_board[0] = -1
#test_board[3] = 1
#test_board[6] = -1
#test_board[4] = 1
#test_board[5] = -1
#test_board[1] = 1

def show_board(board):
    for i in [0, 3, 6]:
        for j in range(3):
            x = board[i + j]
            if x == -1:
                c = '❌'
            elif x == 1:
                c = '⭕'
            else:
                c = '  '
            print(c, end='')
        print(end='\n')

if test_board != 9 * [0]:
    print("\nInitial board position:")
    show_board(test_board)

# **** Calculate expectation value of input board position
def expectation(board, player):

    if player == -1:
        # **** Find all possible next moves for player 'X'
        moves = possible_moves(board)

        # Calculate expectation of these moves;
        # Player 'X' will only choose the one of maximum value.
        max_v = - math.inf
        for m in moves:
            new_board = board.copy()
            new_board[m] = -1       # Player 'X'

            # Is this an ending move?
            r = game_over(new_board, -1)
            if r is not None:
                if r > max_v:
                    max_v = r
            else:
                v = expectation(new_board, 1)
                if v > max_v:
                    max_v = v
        # show_board(board)
        print("X's turn.  Expectation w.r.t. Player X =", max_v, end='\r')
        return max_v

    elif player == 1:
        # **** Find all possible next moves for player 'O'
        moves = possible_moves(board)
        # These moves have equal probability
        # print(board, moves)
        p = 1.0 / len(moves)

        # Calculate expectation of these moves;
        # Player 'O' chooses one of them randomly.
        Rx = 0.0        # reward from the perspective of 'X'
        for m in moves:
            new_board = board.copy()
            new_board[m] = 1        # Player 'O'

            # Is this an ending move?
            r = game_over(new_board, 1)
            if r is not None:
                if r == 10:             # draw is +10 for either player
                    Rx += r * p
                else:
                    Rx += - r * p       # sign of reward is reversed
            else:
                v = expectation(new_board, -1)
                Rx += v * p
        # show_board(board)
        print("O's turn.  Expectation w.r.t. Player X =", Rx, end='\r')
        return Rx

def possible_moves(board):
    moves = []
    for i in range(9):
        if board[i] == 0:
            moves.append(i)
    return moves

# Check only for the given player.
# Return reward w.r.t. the specific player.
def game_over(board, player):
    # check horizontal
    for i in [0, 3, 6]:     # for each row
        if board[i + 0] == player and \
           board[i + 1] == player and \
           board[i + 2] == player:
            return 20

    # check vertical
    for j in [0, 1, 2]:     # for each column
        if board[3 * 0 + j] == player and \
           board[3 * 1 + j] == player and \
           board[3 * 2 + j] == player:
            return 20

    # check diagonal
    if board[0 + 0] == player and \
       board[3 * 1 + 1] == player and \
       board[3 * 2 + 2] == player:
        return 20

    # check backward diagonal
    if board[3 * 0 + 2] == player and \
       board[3 * 1 + 1] == player and \
       board[3 * 2 + 0] == player:
        return 20

    # return None if game still open
    for i in [0, 3, 6]:
        for j in [0, 1, 2]:
            if board[i + j] == 0:
                return None

    # For one version of gym TicTacToe, draw = 10 regardless of player;
    # Another way is to assign draw = 0.
    return 10

print("\u001b[2K\nOptimal value =", expectation(test_board, -1) )

Example output (for X's turn to play):

",17302,,17302,,9/5/2021 9:56,9/5/2021 9:56,,,,2,,,,CC BY-SA 4.0 31539,2,,31536,9/4/2021 7:20,,1,,"

This blog post suggests that when playing against a random opponent, if the agent goes first, the win rate is 97.8%, and if they go second, the win rate is 79.6% (and the rest are draws).

",47080,,,,,9/4/2021 7:20,,,,13,,,,CC BY-SA 4.0 31540,2,,31535,9/4/2021 7:28,,1,,"

This is a classic "exploding gradient" problem. You can try adding gradient clipping or increasing the amount of regularization.

Decreasing the learning rate, or increasing the batch size may also help.
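
For example (a minimal, hedged sketch with a generic Keras optimizer, not the Object Detection API's config-based setup), gradients can be clipped by norm when constructing the optimizer:

import tensorflow as tf

# clipnorm rescales each gradient so its norm never exceeds 1.0,
# which prevents a single bad batch from blowing up the weights.
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9, clipnorm=1.0)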

",47080,,,,,9/4/2021 7:28,,,,0,,,,CC BY-SA 4.0 31541,1,,,9/4/2021 7:32,,0,35,"

I'm interested in the IT side here, specifically in how to most efficiently store a tensor in a one-dimensional data structure. My assumption is that certain approaches will be more expensive than others, but I'd like to be able to validate that assumption, or show it to be wrong. Is there any work on this subject?
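
As one concrete baseline (a hedged sketch; the shape and indices here are arbitrary), the row-major (C-order) layout used by NumPy maps a multi-index to a single offset:

import numpy as np

D0, D1, D2 = 2, 3, 4
t = np.arange(D0 * D1 * D2).reshape(D0, D1, D2)

# Row-major (C-order): the last index varies fastest.
i, j, k = 1, 2, 3
flat_index = (i * D1 + j) * D2 + k
assert t.ravel(order="C")[flat_index] == t[i, j, k]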

",1671,,,,,9/4/2021 8:41,Cost functions for reducing Tensors to 1-dimensional arrays?,,0,2,,,,CC BY-SA 4.0 31542,2,,31534,9/4/2021 7:38,,0,,"

The way you describe with one hot encoding is correct.

Note that how the state is encoded is a separate question from the neural network, so I'm not sure what convolutional neural networks have to do with the question. In the famous atari game example, the input is a sequence of RGB images; a cnn is used to process the images. In your example you probably just want to use a regular Dense network, as your input is just the one hot encoding and not images.

",47080,,,,,9/4/2021 7:38,,,,1,,,,CC BY-SA 4.0 31547,1,31548,,9/4/2021 15:36,,0,63,"

Consider the following excerpt from abstract of the research paper titled Better Mixing via Deep Representations by Yoshua Bengio et al.

It has been hypothesized, and supported with experimental evidence, that deeper representations, when well trained, tend to do a better job at disentangling the underlying factors of variation.

In general, as per my current knowledge, we want to preserve the factors that contribute to variation in the final representation. But the abstract seems contrary to that. Where am I going wrong? Why is there a need to disentangle the factors of variation?

",18758,,40434,,9/4/2021 22:00,9/5/2021 11:57,Why disentangling the features of variation in representation?,,1,0,,,,CC BY-SA 4.0 31548,2,,31547,9/4/2021 16:13,,1,,"

This is a very similar question to the one you asked before, at least in terms of the answer. Again going with PCA as a simple example of building features, when you end up with the principal components, those are linearly independent. One reason that linearly independent features are useful is interpretability (which you'd get to some extent with sparse PCA). Another reason is that if the features are linearly independent, then learning from those features becomes simpler. This is what they mean by disentangling the factors of variation. The original factors (features) can be combined in several layers of hierarchy to build more complex features, where each higher-level feature independently accounts for some variation in the data.
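
A small, hedged illustration of that point with scikit-learn (synthetic correlated features; after PCA the components are uncorrelated):

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])  # correlated features
Z = PCA(n_components=2).fit_transform(X)

print(np.round(np.cov(X.T), 2))  # off-diagonal entries are clearly non-zero
print(np.round(np.cov(Z.T), 2))  # off-diagonal entries are ~0: linearly "disentangled" components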

Update based on comment

The term factors of variation means "things that contribute to the variation in the data". When you build a model with simple linear regression, the only factors of variation are your initial features. When you build a simple feedforward neural network with a hidden layer, each unit in the hidden layer accounts for some percentage of the total variation in the data. The goal is to produce hidden units which independently account for some variation in the data. This is what the authors mean by disentangling the underlying factors of variation.

",,user42664,,user42664,9/5/2021 11:57,9/5/2021 11:57,,,,3,,,,CC BY-SA 4.0 31549,2,,30487,9/4/2021 16:40,,1,,"

Great question! I did some research and found that Generally capable agents emerge from open-ended play by DeepMind uses ES in the form of population-based training:

We also explored the question, what distribution of training tasks will produce the best possible agent, especially in such a vast environment? The dynamic task generation we use allows for continual changes to the distribution of the agent’s training tasks: every task is generated to be neither too hard nor too easy, but just right for training. We then use population based training (PBT) to adjust the parameters of the dynamic task generation based on a fitness that aims to improve agents’ general capability. And finally we chain together multiple training runs so each generation of agents can bootstrap off the previous generation.

But this didn't really explain their reasoning, so I dug deeper and found a great article on lesswrong.com about the use of PBT. I will quote the essence, but I highly recommend reading the linked chapter on PBT:

What does the evolutionary selection give us that we don't already have? What problem does this let us avoid?

There are several answers to this question.

The more narrow answer is that this allows the dynamic task generation hyper parameters themselves to shift in a direction that promotes general competence. Neither of the optimization levels beneath us include any way of changing these parameters. But the ideal filtering parameters for the production of general competence might be different at the beginning, or at the middle of training. Or they might be different from agent to agent. Without something like population-based-training, they would have no way of changing and this would hurt performance.

The less narrow answer, I think, is that this ensures that agents are developing broad competence in a way the innermost loop cannot do. [...] each agent in our population of agents will learn to get better at some distribution of tasks, then, but without population-based-training they might not spread themselves broadly across the entire span of this distribution. Like a student who advances wildly at subjects she prefers, while ignoring subjects she is not good at, our agents might not approach our desired ideal of general competence. Population-based-training helps prevent the scenario by multiplying agent / teacher pairs that do well generally and non-narrowly.

",49455,,,,,9/4/2021 16:40,,,,0,,,,CC BY-SA 4.0 31550,2,,28850,9/4/2021 17:03,,2,,"

Equation 1

In normal Q-learning your target is defined as $y_t = r_t + \gamma \max_a Q(s_{t+1}, a)$. Since you're training a regularized version, you construct the estimated value of the next state by averaging your estimates over the image augmentations. To turn this into the expected value over all $k$ transformations for the given state, we divide the summed targets by the number of transformations $k$.

Equation 3

Here the Q-function is updated with respect to all the image transformations. $f(s_i, v_{i,m})$ is the transformed image, i.e. it is the same as $s_i$ but, for example, its brightness is increased by 0.5. We fit our action-value network on the mean squared error between the output of the net and the Q-target $y_i$, averaged over the number of image transformations and states.

",49455,,49455,,9/6/2021 17:51,9/6/2021 17:51,,,,0,,,,CC BY-SA 4.0 31551,1,,,9/4/2021 17:48,,1,55,"

I am a beginner in deep learning and currently training a few neural networks (Pytorch) for problems in audio and speech. For my tasks, simple feed-forward networks are working well enough. I use basic layers like Linear, ReLU and Softmax with NLL loss. I have tested a few initialization schemes provided by Pytorch and noticed that initialization has a significant (but not large) effect on the speed of training and the final accuracy of the model. I am currently using torch.nn.init.kaiming_uniform_ for initialization.

In my understanding, all these are data independent initialization schemes. I would like to try something that is data dependent. I saw a few pre-training strategies with unsupervised learning followed by supervised learning, but they seem overly complicated.

I am looking for something simple where I can use (preferably a fraction of the) training data to 'tweak' weights to better positions before the training. Are there any such strategies?

Addendum 1: Current initialization schemes (AFAIK) are mostly random values with control over the range or energy, to prevent activations from dying down or blowing up. My aim is to further improve the starting point of training by taking the training data (or at least some of it) into account. I am thinking of something like this: we pass a few batches of training data through the freshly initialized network and collect statistics on the neuron outputs. Based on these, we identify misbehaving neurons and tweak the weights and biases to reduce such issues, so as to improve training speed or accuracy. Is there anything of that kind?
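
To make the addendum concrete, here is a rough sketch of what I mean (an LSUV-style, data-dependent rescaling; the model is assumed to be a simple nn.Sequential of Linear/ReLU layers, and everything here is illustrative rather than a finished recipe):

import torch
import torch.nn as nn

def data_dependent_init(model: nn.Sequential, batch: torch.Tensor, tol=0.05, max_iters=10):
    """Rescale each Linear layer so its pre-activations have ~unit variance on real data."""
    x = batch
    with torch.no_grad():
        for layer in model:
            if isinstance(layer, nn.Linear):
                for _ in range(max_iters):
                    var = max(layer(x).var().item(), 1e-8)
                    if abs(var - 1.0) < tol:
                        break
                    layer.weight.data /= var ** 0.5  # push output variance towards 1
            x = layer(x)
    return model

model = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 10))
model = data_dependent_init(model, torch.randn(256, 40))  # a batch of (real) training inputs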

",49615,,49615,,9/5/2021 22:13,9/5/2021 22:13,What are strategies for data driven weights initialization?,,0,5,,,,CC BY-SA 4.0 31552,2,,30037,9/4/2021 18:31,,1,,"

To give some practical advice: it is important to understand parts of calculus, mainly because backpropagation is a leaky abstraction in modern libraries. In a nutshell, there is a lot that can go wrong (exploding or vanishing gradients, for example), and you will need knowledge about gradient descent to handle it.

I highly recommend Andrej Karpathy's lecture on it. He gives an easy-to-understand and intuitive explanation.

",49455,,,,,9/4/2021 18:31,,,,0,,,,CC BY-SA 4.0 31556,1,,,9/5/2021 9:44,,1,143,"

Statistics is a branch of mathematics that extracts useful information from data. The data is generally called "training data" in statistical (machine) learning.

Consider the following paragraph from the section 1.1 Reinforcement Learning of CHAPTER 1. THE REINFORCEMENT LEARNING PROBLEM of the textbook Reinforcement Learning: An Introduction by Richard S. Sutton and Andrew G. Barto.

Reinforcement learning is different from supervised learning, the kind of learning studied in most current research in field of machine learning. Supervised learning is learning from a training set of labeled examples provided by a knowledgable external supervisor. Each example is a description of a situation together with a specification—the label—of the correct action the system should take to that situation, which is often to identify a category to which the situation belongs. The object of this kind of learning is for the system to extrapolate, or generalize, its responses so that it acts correctly in situations not present in the training set. This is an important kind of learning, but alone it is not adequate for learning from interaction. In interactive problems it is often impractical to obtain examples of desired behavior that are both correct and representative of all the situations in which the agent has to act. In uncharted territory—where one would expect learning to be most beneficial—an agent must be able to learn from its own experience.

You can observe that training data in machine learning, if we model it in a proper format, can be used for reinforcement learning, but it may not be complete or practical.

I am asking this question from the view of statistical learning rather than machine learning alone. Training data in statistical learning can be understood as any data, available at any time instant, that is useful for learning.

Then, is it perfectly fine to always interpret experience in reinforcement learning as training data in statistical learning?

",18758,,,,,9/7/2021 17:49,"Can I treat ""experience"" in reinforcement learning as ""training data"" in statistical learning?",,1,6,,,,CC BY-SA 4.0 31558,1,,,9/5/2021 12:44,,0,35,"

A dataset in artificial intelligence, in general, consists of some features (say $n$). Assume that $m$ of them are output features. I want to model this function using a neural network. So the input to my neural network is $n-m$ features and the output is $m$ features. My question is about the output features.

If an output feature is a continuous random variable, then its corresponding output neuron can be trained to give continuous output. Similarly, if an output feature is a discrete random variable, then its corresponding output neuron can be trained to give discrete output.

But I have never come across features that are mixed random variables. What is the nature of the output of a neuron that is intended to give the output value for a mixed random variable, which is neither discrete nor continuous in nature?

",18758,,18758,,1/15/2022 0:21,1/15/2022 0:21,Modelling of output neuron for mixed features?,,0,3,,,,CC BY-SA 4.0 31560,2,,31529,9/5/2021 16:54,,0,,"

According to the definition of Graph Convolutional Networks taken from here, a GCN performs an operation of the form: $$ f (H^{(l)} ,A) = \sigma(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}) $$ where $H^{(l)}$ is the input to the GCN layer, $\tilde{A} = A + I$ is the adjacency matrix with self-loops added, and $\tilde{D}$ is the degree matrix corresponding to the adjacency matrix $\tilde{A}$ (on the diagonal there are sums over the columns of $\tilde{A}$).

This definition is for an adjacency matrix with entries $1$ if there is an edge between $i$ and $j$, and $0$ otherwise. For a matrix of this form the normalized graph Laplacian $\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ is guaranteed to be a positive semidefinite matrix.

One can extend the definition to arbitrary values of $a_{ij} \in A$, but there is no guarantee that the graph Laplacian will be well-defined.
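
In practice, the same normalization can simply be computed from the weighted matrix; a minimal NumPy sketch using the weighted adjacency from the question (with non-negative weights the construction stays well-behaved, which is an assumption here rather than a general guarantee):

import numpy as np

A = np.array([[0.0, 0.8, 0.0],
              [0.8, 0.0, 0.3],
              [0.0, 0.3, 0.0]])            # weighted adjacency from the question

A_tilde = A + np.eye(3)                    # add self-loops
deg = A_tilde.sum(axis=1)                  # weighted degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt  # propagation matrix used in f(H, A)
print(np.round(A_hat, 3))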

",38846,,,,,9/5/2021 16:54,,,,0,,,,CC BY-SA 4.0 31561,1,31562,,9/5/2021 17:15,,-2,70,"

MNIST images are 28x28 pixels. Perhaps a silly question: is there anything like MNIST, but whose images have fewer pixels?

",49623,,,,,9/6/2021 12:36,MNIST with fewer pixels?,,1,3,,9/6/2021 14:36,,CC BY-SA 4.0 31562,2,,31561,9/5/2021 18:52,,1,,"

Sklearn has a digits dataset with images of size $8 \times 8$:

  • Classes: 10
  • Samples per class: ~180
  • Samples total: 1797
  • Dimensionality: 64
  • Features: integers 0-16
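
It can be loaded directly (a short usage sketch):

from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
print(X.shape, y.shape)   # (1797, 64) (1797,) -- each row is an 8x8 image flattened to 64 values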
",38846,,38846,,9/6/2021 12:36,9/6/2021 12:36,,,,0,,,,CC BY-SA 4.0 31563,2,,31529,9/5/2021 20:48,,-1,,"

The graph attention network (GAT) does exactly what you are referring to. In ChebNet and GraphSAGE we have a fixed adjacency matrix that is given to us. In GAT, the authors instead learn the edge weights via a self-attention mechanism.

Graph Attention Network:

Let $K$ be the number of attention heads, $h^{l+1}_i$ the feature vector of node $i$ at layer $l+1$, and $e^{l}_{ij}$ the attention weight between two adjacent nodes $i$ and $j$ at layer $l$.

Then the update rule for the graph attention network is as follows: \begin{align} h^{l+1}_i &= \text{Concat}_{k=1}^{K}(\text{ELU}(\sum_{j \in \mathcal{N}_i} \underset{\text{scalar}}{e_{ij}^{k,l}} \underset{d \times d}{W_1^{k,l}} \underset{d \times 1}{h_j^l})) \end{align} where the $k$-th head attention weight is defined as:

\begin{align} \underset{\text{scalar}}{e^{k,l}_{ij}} &= \text{Softmax}_{\mathcal{N}_i}(\hat{e}^{k,l}_{ij}) \\ \hat{e}^{k,l}_{ij} &= \text{LeakyReLU}(W_{2}^{k,l} \text{Concat}(W_1^{k,l}h_i^l, W_1^{k,l}h_j^l)) \end{align}

Notice that in GAT we are learning anisotropic filters (which treat each direction differently, since the attention weight differs per edge direction), which are more powerful than isotropic filters (which treat all directions equally). For this reason, GATs are more powerful than isotropic graph convolutional networks (GCNs).

Spatio-Temporal GCN (ST-GCN)

In ST-GCN, we first perform graph convolution (vanilla GCN or GAT) in the spatial domain and then apply temporal convolution along the time axis. Here is an example of ST-GCN for human activity recognition; the blurred skeletons indicate the time axis.

In the aforementioned figure, the colour coding indicates the attention weights.

",28048,,,,,9/5/2021 20:48,,,,0,,,,CC BY-SA 4.0 31564,1,,,9/5/2021 21:41,,1,51,"

I am making a labeled dataset of images from web streams for a CNN classification. Pictures from the same stream are quite similar as far as background, but slightly different as far as the main object. The focus of what should be learned is in the main object.

My concern is that feeding similar images with the same features in the background will result in making those background features more relevant and hence lower the weights of the features that matter.

So, should I be worried about removing similar images from a dataset, so that unrelated features are not learned?

An ideal answer would discuss the trade-offs.

I am aware of the practice of augmenting the training images by scaling/skewing/flipping them around. So it looks like people do it intentionally, but why?

I should also say that it's not about learning from a single stream; there are tons of them. So most images are far apart (e.g. in a squared-error sense) from one another, since they come from different streams, except the ones snapped from the same stream.

",23198,,23198,,9/6/2021 23:31,9/6/2021 23:31,Is having near-duplicates in a training dataset a bad thing?,,0,0,,,,CC BY-SA 4.0 31566,1,31575,,9/5/2021 22:00,,1,342,"

I'm studying error backpropagation in neural networks. I am interested in why we use only one path on the computational graph to get the value of the derivative for a weight? I ask the question because there are several paths in the computational graph to get the derivative for a particular weight. Why do we only use a one value? Why don't we combine the values from all possible paths?

Schema: Formulas:

Normal path: $$\frac{\partial E}{\partial w_{1,1}} = \frac{\partial E}{\partial Out} \cdot \frac{\partial Out}{\partial a_{1,1}}\cdot \frac{\partial a_{1,1}}{\partial a_{0,1}}\cdot \frac{\partial a_{0,1}}{\partial w_{1,1}}$$

Alternate path: $$\frac{\partial E}{\partial w_{1,1}} = \frac{\partial E}{\partial Out} \cdot \frac{\partial Out}{\partial a_{1,2}}\cdot \frac{\partial a_{1,2}}{\partial a_{0,1}}\cdot \frac{\partial a_{0,1}}{\partial w_{1,1}}$$

Why don't we consider both derivatives or the sum of them?

",49631,,49631,,9/15/2021 6:06,9/15/2021 6:06,"Different ways to calculate backpropagation derivatives, any difference?",,1,1,,,,CC BY-SA 4.0 31567,1,,,9/5/2021 22:06,,1,134,"

I am experimenting with MADDPG algorithm implemented in this repo. Since there were only a few agents (2-3) in the implementation (also in the original paper) steps like parameter updates, action prediction, etc. are done in a for loop. I want to increase the number of agents, say 10 or 30, and perform parallelization of the above-mentioned steps for all agents, i.e. I want to avoid for loops like this

for agent_idx in range(n_agents):
    ...
    ...

I tried Python Multiprocessing module with pool.map method but I am getting 'AttributeError: Can't pickle local object ...". Below is code I am running to get a joint action prediction but it results in the error above.

from multiprocessing import Pool

def get_ind_action(i, obs_i):
    return actor_critic[i].act(obs_i) # returns an individual action for a given observation for ith agent

def get_joint_action(obs):
    pool = Pool()
    args_list = [[i, obs[i]] for i in range(n_agents)]
    joint_action = pool.map(get_ind_action, args_list)
    return joint_action
    

Here actor_critic is a list of neural networks of all agents, obs is the joint state observed by the centralized critic but each actor only sees its own state. The algorithm has the following architecture.

",32517,,32517,,9/8/2021 1:09,9/8/2021 1:09,How to parallelize multi-agent DDPG (MADDPG),,0,5,,,,CC BY-SA 4.0 31569,1,31739,,9/6/2021 4:02,,0,883,"

I couldn't find out how to interpret the multilayer perceptron notation given in PointNet. Specifically, I am looking to find out what the numbers inside the parentheses of mlp(64,64) and mlp(64,128,1024) actually mean.

(I also have a 2nd question about PointNet MLP architecture, which I ask towards the end.)

Here's what I found online, which I believe applies:

  1. https://towardsdatascience.com/deep-learning-on-point-clouds-implementing-pointnet-in-google-colab-1fd65cd3a263

    There's a paragraph here that says

    In this case MLP with shared weights is just 1-dim convolution with a kernel of size 1.

    Here, a link is provided to explain more about the 1-dim convolution...

    https://jdhao.github.io/2017/09/29/1by1-convolution-in-cnn/

    ...and I follow this pretty well.

  2. There's also this Matlab example...

    https://www.mathworks.com/help/vision/ug/point-cloud-classification-using-pointnet-deep-learning.html

    ...which tells me to

    set the classifier model input size to 64 and the hidden channel size to 512 and 256 and use the initalizeClassifier helper function...to initialize the model parameters.

    inputChannelSize = 64; hiddenChannelSize = [512,256];

  3. Then there's this link: https://www.researchgate.net/figure/Network-architecture-The-numbers-in-parentheses-indicate-number-of-MLP-layers-and_fig2_327068615

    ...in which they say,

    The numbers in parentheses indicate number of MLP layers

    but this is, in my opinion, not written very well. Do they mean,

    The notation mlp(64,64,128,256) means that the MLP has 4 layers, and each layer produces an output with 64, 64, 128, and 256 channels, respectively?

Here are my 2 questions about PointNet MLP notation / architecture:

  • What do each of the numbers in something like mlp(64,64,128,256) actually mean, and what do their positions mean? Are these numbers ONLY referring to the hidden layers, which includes the output layer? Also, are they referring to the number of channels, akin to the depth-wise feature layers of a CNN?

  • Finally, if your input is nx3 (as in, n (x,y,z) points), does this mean that the PointNet MLP takes an input of 1x3, meaning 1 input neuron, or 3 input neurons?

",38271,,2444,,9/18/2021 14:06,9/18/2021 19:06,"What, exactly, do mlp(64,64) and mlp(64,128,1024) mean in PointNet, and how many input neurons does 1 (x,y,z) point have?",,1,1,,,,CC BY-SA 4.0 31570,2,,31507,9/6/2021 6:19,,1,,"

You can find information similar to what Neil explained, but with more theoretical detail, in the book Deep Learning (Goodfellow et al., 2016), in Chapter 10 (Recurrent Networks), more specifically in 10.2.3 Recurrent Networks as Directed Graphical Models and other subsections.

Additionally, related to pointer networks, there is work replacing the LSTM with a Transformer (Learning Heuristics for the TSP by Policy Gradient, 2018).

",46432,,,,,9/6/2021 6:19,,,,1,,,,CC BY-SA 4.0 31572,2,,31556,9/6/2021 7:52,,2,,"

The main similarity between reinforcement learning experience and supervised learning datasets, is that both consist of a set of records. These records are commonly expressed as vectors of numbers for use in the algorithms. In addition, reinforcement learning that uses neural networks (or other function approximation) will typically implement some variant of supervised learning internally.

There are a few key differences between a prepared dataset for supervised learning and the experience in reinforcement learning. There are exceptions to these, but these are the usual case:

  • A supervised learning dataset has a fixed target value to learn by association, for each entry, e.g. a class or regression value. An individual reinforcement learning experience does not - the raw tuple of state, action, reward, next state $(s,a,r,s')$ must be processed in some way to obtain a useful training target value, and this processing is not static. When training in reinforcement learning for optimal control, even with historical experience, the target values must constantly be re-assessed.

  • Reinforcement learning is in part a design for actively collecting experience. There is no equivalent in supervised learning where the dataset is a given.

  • Reinforcement learning experience arrives in groups of $(s, a, r, s')$ - state, action, reward, next state - such groups of related data within a record are called tuples. The RL records are often correlated with each other, at least initially because each time step changes things only slightly, and that can be bad when combined with supervised learning which usually assumes uncorrelated data. In supervised learning you will often have a deliberate shuffling or randomisation of dataset order to protect against this. Experience replay in deep RL is a related idea to protect internal neural networks from being exposed to training samples in sequences with correlated values.

It is possible to apply reinforcement learning to supervised learning problems in theory. You can do this by making the agent guess each labelled value as an action, and reward it with negative the loss from the supervised learning. This is generally a bad idea because it is very inefficient, and there is no matching concept of state transitions in the supervised learning problem (the agent cannot impact what state is next due to its guess). However, the fact that this is possible with very little modification to the reinforcement learning agent shows how general reinforcement learning is as a learning approach.

The inverse is not really true, you cannot normally adjust a supervised learning algorithm so that it can solve a reinforcement learning problem from the given experience. There are some edge cases, such as when learning only from previous experience to assess an existing policy, or to learn a control algorithm which only cares about immediate reward. In which case you could use reinforcement learning theory to help construct a fixed dataset and give that to a supervised learning algorithm. However, by far the most common approach is to use supervised learning approaches (e.g. a neural network) as components of the agent, and rely on reinforcement learning to generate data for them on the fly.

",1847,,1847,,9/7/2021 17:49,9/7/2021 17:49,,,,0,,,,CC BY-SA 4.0 31574,2,,30038,9/6/2021 8:09,,1,,"

A convolution layer has a kernel, which is a matrix smaller than the image (in many papers 3×3), and this kernel has learnable values.

The kernel is applied pixel by pixel (and across channels, for colour images). By "applied" I mean: take the kernel centred at the pixel, multiply it element-wise with the image, and sum all the results. This is the output for that pixel, a value that gathers information from the pixel's neighbourhood.
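
A minimal sketch of that operation for a single-channel (grayscale) image, without padding or strides:

import numpy as np

def conv2d(image, kernel):
    """Slide the kernel over every valid position; each output value summarizes a neighbourhood."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)  # element-wise product, then sum
    return out

edge_kernel = np.array([[-1, 0, 1]] * 3)           # a simple 3x3 vertical-edge detector
print(conv2d(np.ones((5, 5)), edge_kernel).shape)  # (3, 3)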

When you apply this many times you lose specific pixel information, but gain information about each pixel's neighbourhood. This is good, because that neighbourhood may be a corner, a line, a curve, etc. (this is what the kernel has learned to identify; for that reason the networks in d and e show more basic forms and colours without detail). If you continue applying more convolutions you obtain an increasingly abstract representation of the whole input; you can even end up with a single number that represents the input (if you don't prevent dimension reduction with padding or other tools) and say something like "oh, it's a five, and therefore the input is a house" (in the classification case).

NOTE: This is very general and many details were omitted. If you want to learn more, I recommend the book Deep Learning (Goodfellow et al., 2016), Chapter 9 (Convolutional Networks).

",46432,,,,,9/6/2021 8:09,,,,0,,,,CC BY-SA 4.0 31575,2,,31566,9/6/2021 9:06,,1,,"

The backpropagation algorithm doesn't pick a single path; it computes the gradient with respect to all combinations defined by each weight matrix. You can look at it as adding up the gradients from all paths.

Usually, the notation for the weights has three indices: the index of the input cell, the index of the output cell and the index of the layer, so $W_{1, 2}^{[3]}$ can be the coefficient in the 3rd layer of the neural network that multiplies the activation of the 1st cell in that layer and contributes to the 2nd cell in the next layer. Now, when you look at $W^{[3]}$ without the lower indices, that is the matrix of weights between layers. This is what's used when expanding the differentiation using the chain rule. So the derivatives are now with respect to those weight matrices. The point is that all of these paths contribute to the gradient at lower levels.
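
Written in the question's notation, the multivariable chain rule adds the two paths rather than choosing one (a sketch covering only the two hidden units shown in the question):

$$\frac{\partial E}{\partial w_{1,1}} = \sum_{k=1}^{2} \frac{\partial E}{\partial Out}\,\frac{\partial Out}{\partial a_{1,k}}\,\frac{\partial a_{1,k}}{\partial a_{0,1}}\,\frac{\partial a_{0,1}}{\partial w_{1,1}}$$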

",,user42664,,user42664,9/6/2021 9:58,9/6/2021 9:58,,,,1,,,,CC BY-SA 4.0 31577,1,,,9/6/2021 11:17,,0,68,"

I am currently doing research work on the inversion of geophysical data using machine learning. I have come across some research where a Convolutional Neural Network (CNN) has been used effectively for this purpose (for example, this).

I am particularly interested in how to prepare my input and output labelled data for this machine learning application, since the input will be the observed geophysical signal, and the labelled output will be the causative density or susceptibility distribution (for gravity and magnetics, respectively).

I need some assistance and insight as to how to prepare the data for this CNN application.

Additional Explanation

Experimental setup: Measurements are taken above the ground surface. These measurements are signals that reflect the distribution of a physical property (e.g., density) in the ground beneath. For modelling, the subsurface is discretised into squares or cubes, each having a constant but unknown physical property (e.g., density).

How it applies to the CNN: I want my input data to be the measurements taken above ground. The output should then be the causative density distribution (that is, the value of the density in each cube/square).

See the attached picture (the flat top is the "above ground" surface; all other prisms represent the discretisation of the subsurface. I want to train the CNN to give out a density value for each cube in the subsurface, given the above-ground measurements).

",49639,,2444,,9/7/2021 11:48,9/7/2021 11:48,How do I prepare my data for a CNN to be applied to a geophysical-related problem?,,1,5,,,,CC BY-SA 4.0 31578,2,,31577,9/6/2021 17:12,,1,,"

I haven't done similar work with CNNs, but I can list a couple of approaches, maybe it helps you get started.

If I understand it correctly, the question is mostly about shapes of the data, so that's what I'll focus on as well.

Option A: You can keep your input as a 2D "image" with a single channel and just use 2D convolutions to expand to the required output size. This could work, but it doesn't incorporate the spatial dependency in the 3D output.

Option B: You can consider your 2D input to be 3D but with only one unit in the extra dimension, then use a couple of 3D transposed convolutions to get to the correct output shape. This is nice because you rely on 3D translation invariance, which is probably what you want for the densities, but it would need to be tested. Also, in this case, you would only have one channel both in the input and the output; this doesn't mean you can only use one channel inside the NN, but you need to reduce it back towards the end.
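A rough PyTorch sketch of Option B's shapes (all layer sizes and kernel settings here are assumptions, just to show how the depth dimension can be grown from 1 while keeping a single channel at the input and output):

import torch
import torch.nn as nn

x = torch.randn(1, 1, 1, 32, 32)   # (batch, channels, depth=1, height, width)

model = nn.Sequential(
    nn.ConvTranspose3d(1, 8, kernel_size=(4, 3, 3), stride=(2, 1, 1), padding=(1, 1, 1)),
    nn.ReLU(),
    nn.ConvTranspose3d(8, 8, kernel_size=(4, 3, 3), stride=(2, 1, 1), padding=(1, 1, 1)),
    nn.ReLU(),
    nn.Conv3d(8, 1, kernel_size=3, padding=1),  # reduce back to a single channel (the density)
)

y = model(x)
print(y.shape)   # e.g. torch.Size([1, 1, 4, 32, 32]) with these made-up settings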

Option C: You can have your input as a 2D "image" with a single channel, do a couple of 2D convolutions to expand the number of channels, then expand the dimensions of the tensors within the neural network and continue with 3D convolutions, treating the previous channels as the 3rd dimension and initially a single channel for the 3D "image". I could imagine this working, but the transition from channels (which have no spatial relations) to the 3rd dimension feels weird, and I'm not sure about the validity of this setup.

",,user42664,,,,9/6/2021 17:12,,,,2,,,,CC BY-SA 4.0 31579,2,,30289,9/6/2021 18:48,,1,,"

My feeling from what I have generally read about machine learning approaches is that these adapt better to more complex problems with higher dimensionalities.

Your intuition seems to align with the empirical results in these domains in the last few years. Expert systems are in general not used for domains like computer vision, but there may be exceptions to this.

Specifically, I am currently trying to make a general comparison (if possible) of expert systems with machine learning systems across dimensions of efficiency and complexity

I will try to provide you with an approach to compare them systematically. You should create a set of tasks $T$ on which to evaluate your comparison. I will assume your evaluation metric is accuracy, but you can of course replace this with any score function of your choice.

Space Complexity

For all tasks, implement the expert system and the ML model you want to compare. Measure the amount of space each takes and average this over the number of tasks to get an estimate of the space complexity for each.

$ \hat{E}[\mathrm{space}] = \frac{1}{|T|} \sum_{t\in T} \mathrm{space}(t)$

Time Complexity

Similar to measuring space, but now time your model and expert system. You could do so for different functions on each of your tasks, such as training or predicting. Training is more difficult because, if you're doing supervised learning for example, you don't know when to stop, since more training time might give better results. So you should set a time limit or some criterion of convergence and list it under your assumptions.

$ \hat{E}[\mathrm{time}] = \frac{1}{|T|} \sum_{t\in T} \mathrm{time}(\mathrm{predict}(t))$
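As a minimal sketch of how this estimate could be collected (the `predict` method and the task list are hypothetical names, not from a particular library):

import time

def mean_predict_time(system, tasks):
    """Average prediction time of a system (ML model or expert system) over a list of tasks."""
    times = []
    for task in tasks:
        start = time.perf_counter()
        system.predict(task)                      # run prediction for this task
        times.append(time.perf_counter() - start)
    return sum(times) / len(tasks)                # estimate of E[time]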

Run-time-efficiency

Here is where things get tricky, because there isn't a single unified expression for this that I'm aware of. So I would suggest evaluating this by dividing the accuracy by the approximate number of operations. This could be done multiple times and averaged, for example once per epoch when training a neural net. To reflect the actual run time in practice more accurately, you might also want to add a time comparison in the form of accuracy divided by runtime on fixed hardware.

All of this of course provides no strong theoretical explanation; it is more of a study to provide some empirical data. If this is done on a large enough scale, your results become a lot more meaningful. Scale here means the number of different model types, expert systems and tasks you're testing on.

",49455,,49455,,9/6/2021 18:55,9/6/2021 18:55,,,,1,,,,CC BY-SA 4.0 31582,1,,,9/7/2021 6:03,,1,30,"

Sentiment analysis, as we know, measures "Cake sucks" as say -0.4, and "Cake is great" as 0.7. What I'm looking for is something a bit different like so:

  1. Given input text data written by 1 person (say a blog)
  2. Predict how they (the person who wrote the text) might react to a certain piece of text

What might something like this look like?

  • Let's suppose that Person A with a blog has written in his blog posts thousands of times about how much cake is the best thing to happen to humanity.
  • The system should probably infer that if that person read something like "Cake is the WORST food ever", they would react negatively to it, if say, they also believe that there is such a thing as 'objective taste' somehow (aesthetic absolutism).
  • Or if Person A has made anti-racist statements, that racist statements would be strongly negative.
  • If Person A reads the statement "I hate lawyers" and in their blog they have written about how they don't care either way about law, it should probably be 0.
  • Finally, if Person A reads the statement "iPhones are better than Android" and there is zero data about either iPhones or Androids, or even related data about Apple or Google, then it should probably be 0, with an additional "confidence" metric at 0 (since there is no data, this confidence metric will let us know whether there is any data to support the measurement or not).

This model would need to be able to somehow inductively 'infer' a value system of some kind, assign intensities of probable reactions based on the frequency of an expressed view, and pick up on nuances (such as philosophical assumptions, e.g. aesthetic absolutism in the cake example above) that may inform that measurement.

In other words, I'd like to create a model (or find a pre-trained model to fine-tune), that would be able to, given text data from that 1 person, predict their sentiment in response to a new piece of text.

Would love any help whatsoever regarding:

  1. What types of pre-trained models I should look at
  2. Any ideas of any kind whatsoever you might have on how to achieve this
  3. What sorts of architectures/resources/concepts may be relevant to look at
",49653,,49653,,9/7/2021 17:33,9/7/2021 17:33,Is there something like person-specific sentiment analysis?,,0,0,,,,CC BY-SA 4.0 31583,1,,,9/7/2021 6:06,,2,2569,"

I am new to DRL and trying to implement my custom environment. I want to know if normalization and regularization techniques are as important in RL as in Deep Learning.

In my custom environment, the state/observation values are in different ranges. For example, one observation is in the range [1, 20], while another is in [0, 50000]. Should I apply normalization or not? I am confused. Any suggestions?

",49654,,2444,,9/7/2021 21:59,3/8/2022 7:00,Should I apply normalization to the observations in deep reinforcement learning?,,2,3,,,,CC BY-SA 4.0 31584,2,,11866,9/7/2021 7:33,,0,,"

In additive attention (as described in the paper by Bahdanau et al. (2014)), the alignment model $a$ is represented by a feedforward neural network. If you look in their appendix, they actually implement this as

$$ e_{ij} = V_a^T \tanh \left(W_a s_{i-1} + U_a h_j\right) = V_a^T \tanh \left(Q + K\right).$$

In contrast, in dot-product attention (as described in the paper by Vaswani et al. (2017)), the alignment model is implemented as

$$ e_{ij} = W_Q s_{i-1} \left(W_K h_j\right)^T = Q K^T.$$

The computational advantage is that the dot-product alignment model has only two weight matrices and only needs matrix multiplication, for which highly-optimized code exists.
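A small NumPy sketch of both alignment models may make the difference concrete (all dimensions and weight values here are made up):

import numpy as np

d = 8                               # hidden size (assumed)
s_prev = np.random.randn(d)         # decoder state s_{i-1}
H = np.random.randn(5, d)           # encoder states h_j, one per row

# Additive (Bahdanau): a small feedforward net evaluated per (i, j) pair
W_a, U_a, v_a = np.random.randn(d, d), np.random.randn(d, d), np.random.randn(d)
e_additive = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in H])

# Dot-product (Vaswani): project once, then a single matrix product
W_Q, W_K = np.random.randn(d, d), np.random.randn(d, d)
q, K = W_Q @ s_prev, H @ W_K.T
e_dot = K @ q                       # all scores e_{ij} in one matmul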

",49656,,49656,,9/7/2021 7:38,9/7/2021 7:38,,,,0,,,,CC BY-SA 4.0 31586,1,,,9/7/2021 10:07,,0,32,"

Recently, I asked a question on the concept of a manifold and received an answer that points to a relatively new subfield of deep learning named geometric deep learning.

In the preface of the paper titled Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges, there is a mention of three types of geometry that exist in the literature.

For instance, Euclidean geometry is concerned with lengths and angles, because these properties are preserved by the group of Euclidean transformations (rotations and translations), while affine geometry studies parallelism, which is preserved by the group of affine transformations. The relation between these geometries is immediately apparent when considering the respective groups, because the Euclidean group is a subgroup of the affine group, which in turn is a subgroup of the group of projective transformations.

The three types of geometry they mention are Euclidean, affine and projective. I want to know the complete list of the types of geometry that exist in the literature, if relevant to geometric deep learning.

What are the types of geometry in the literature that may be used for deep learning?

",18758,,2444,,9/7/2021 21:58,9/7/2021 21:58,What are the different types of geometry in literature that may be used for deep learning?,,0,2,,,,CC BY-SA 4.0 31587,1,,,9/7/2021 10:43,,1,1801,"

Is it true that a bias is said to be inductive iff it is useful in generalising the data?

Or can inductive bias also refer to assumptions that may cause a decrease in performance?


Suppose I have a dataset on which I want to use a deep neural network for my task. I think, based on some knowledge, that a DNN with 5 or 11 layers may work well. But suppose that, after implementation, the 11-layer one worked well. Can I then call both of them inductive biases, or only the 11-layer assumption?

",18758,,2444,,9/7/2021 12:20,9/8/2021 13:04,Is the inductive bias always a useful bias for generalisation?,,3,1,,,,CC BY-SA 4.0 31588,1,31590,,9/7/2021 14:09,,0,65,"

I have no specific knowledge of the AI field, but I heard that AI systems get better the longer they learn.

So, I was wondering: could it be possible that AIs will learn how to create better AIs (or assist humans in creating a better AI), and then those better AIs will learn to create an even better/faster AI, and so on? Wouldn't this mean that the AI would get exponentially better/faster, because after each successive generation a slightly faster AI will do the job?

I also heard that "Google is using AI to design processors that run AI more efficiently". Wouldn't this be the same? AI designs a faster CPU => AI gets faster and can design an even better CPU to run on.

Is something like that possible? Would this mean that at some point there will be a breakthrough in AI that will significantly increase the speed of AIs because of those loops?

",49669,,2444,,9/7/2021 16:54,9/8/2021 11:33,"Can an AI create another better AI, which in turn creates another better AI, and so on?",,1,3,,9/8/2021 3:49,,CC BY-SA 4.0 31589,1,,,9/7/2021 14:38,,1,92,"

In the paper Conservative Q-Learning for Offline Reinforcement Learning, it is stated (section 3.1, page 3) that

standard Q-function training does not query the Q-function value at unobserved states, but queries the Q-function at unseen actions

I don't see how this is true. For every $(s,a)$ pair, we need to update $Q(s,a)$ to reduce the value $|Q(s,a) - R(s,a) - \gamma E[\max_{a'}Q(s',a')]|$ until it converges to zero.

We see the existence of both $a'$ and $s'$, and $s'$ could be unseen, for example, on the very first update, where we are at $s$, take action $a$, and could arrive at any state $s'$.

Can someone explain this?

",45562,,2444,,11/17/2021 17:39,11/30/2022 1:02,Why does Q-function training not query the Q-function value at unobserved states?,,2,1,,,,CC BY-SA 4.0 31590,2,,31588,9/7/2021 15:05,,0,,"

The first thing to consider is that (as of today) there is no artificial general intelligence (AGI), and most if not all research doesn't try to create an AGI. An AGI can be described as an AI able to perform well (at least at a human-like level) on a wide range of tasks. If you are interested in the question, I recommend you do some research on the term AGI, as there is no consensus definition, but I believe the one I gave will be enough here.

This means that an AI can actually do just one (or a few) tasks. So, if we created an AI that creates new AIs, it would just create new AIs that are better at either doing something else (so there is no loop of improvement, and that is not what you are interested in) or at creating a better version of themselves, which in itself is useless, as this better version can't do anything useful for us (except maybe to show that we can do it).
I don't know if there are examples of research on an AI improving itself, but there are examples of an AI improving other AIs, such as this one, where an evolutionary algorithm is used to design a better neural network.

Concerning your example about AI improving the hardware, it could indeed lead to the improvement of a wide range of AIs, but it is, in my opinion, different from an AI improving itself, as the loop takes several years before being closed (according to this source, it takes several years to produce a new CPU), whereas an AI directly training another AI could do so in a matter of days (or weeks or hours, depending on the technology, but in any case much faster).

But, if we had an AGI able to create better, more or equally general AIs, it could indeed lead to a breakthrough in AI*, or at least that is the opinion of some people, including Nick Bostrom, who wrote the famous book "Superintelligence", in which he describes how an AGI could very rapidly improve itself and describes this as a possible singularity (meaning that we are unable to foresee what this AGI could become and how fast it could improve itself).

*on the condition that a better AGI is able to further improve the next version, which is not so obvious (thanks to Neil Slater for this remark). Adding this condition means that your metric for "better" must include the ability to create a better AGI, and that this ability can effectively always (or for long enough to obtain a breakthrough) be improved. (As far as I know, we don't know whether that is indeed the case.)

",26961,,26961,,9/8/2021 11:33,9/8/2021 11:33,,,,4,,,,CC BY-SA 4.0 31591,2,,31589,9/7/2021 15:47,,0,,"

We see the existence of both $a'$ and $s'$, and $s'$ could be unseen, for example, on the very first update, where we are at $s$, take action $a$, and could arrive at any state $s'$.

The very first update is made after taking action $a$ in state $s$ and observing the reward $r$ plus the next state $s'$. There is no other way in Q-learning of knowing what the next state is in order to process the update. So $s'$ is not unseen: it has been observed.

Another way to put this: In model-free methods, the update to Q value estimates for state $S_t$ action $A_t$ on time step $t$ is always made on or after time step $t+1$, when $R_{t+1}$ and $S_{t+1}$ are known.

However, on that same time step, the action $a'$ does not yet need to have been taken. The value of $a'$ is not taken from $A_{t+1}$, but is evaluated for all possibilities. Even after $A_{t+1}$ is known and has been taken, the need to process all possible actions in the state $s'$ (in order to update $Q(s,a)$ or $Q(S_t, A_t)$) can often lead to needing action value estimates for never seen state/action pairs.
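A minimal tabular sketch of this point (generic Q-learning, not code from the paper; `Q` is assumed to be a dictionary defaulting to 0 for never-seen pairs): the update after observing $(s, a, r, s')$ queries $Q(s', a')$ for every possible action $a'$, including actions that have never been taken from $s'$.

from collections import defaultdict

Q = defaultdict(float)  # Q[(state, action)] defaults to 0 for never-seen pairs

def q_learning_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # max over *all* actions in s', even those never actually taken from s'
    best_next = max(Q[(s_next, a_next)] for a_next in actions)
    target = r + gamma * best_next
    Q[(s, a)] += alpha * (target - Q[(s, a)])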

The update step you quoted includes the expected reward (or possibly a custom reward function known to the agent) $R(s,a)$, which is not standard for Q-learning. It would be more usual to use the observed reward $r$. However, this may be OK, because a lot of the time the developer/researcher is in charge of the reward signal and can provide $R(s,a)$ to the agent, even when the full model is not used.

",1847,,1847,,9/7/2021 15:52,9/7/2021 15:52,,,,2,,,,CC BY-SA 4.0 31592,2,,31587,9/7/2021 16:29,,1,,"

Is it true that a bias is said to be inductive iff it is useful in generalising the data?

Or can inductive bias also refer to assumptions that may cause a decrease in performance?

Tom M. Mitchell defines bias as:

Any basis for choosing one generalization over another, other than strict consistency with the observed training instances.

Gordon and Desjardins extend the definition of bias to include any factor (including consistency with the instances) that influences the definition or selection of inductive hypotheses.

Basically, inductive bias is any type of bias that a learning algorithm introduces in order to provide a prediction.

For example:

  1. In SVM we attempt to maximize the width of the boundary between two classes
  2. In Nearest neighbors we assume that most of the cases in a small neighborhood in feature space belong to the same class
  3. In cross-validation, choosing the model that minimizes the cross-validation error is an inductive bias, because we are choosing one hypothesis over another

However, it must be said that the cross-validation score has a well-grounded theoretical background, and trying to avoid its bias by only considering a hypothesis class is itself a bias, because we are discarding all the other hypotheses a priori.

In your case:

Suppose I have a dataset on which I want to use a deep neural network for my task. I think, based on some knowledge, that a DNN with 5 or 11 layers may work well. But suppose that, after implementation, the 11-layer one worked well. Can I then call both of them inductive biases, or only the 11-layer assumption?

In this case, you have introduced an inductive bias by using a neural network with between 5 and 11 layers (you chose a function class); then you introduced another bias by choosing the number of layers that minimizes the cross-validation score.

Conclusion

In the end, we want to minimize the error on the real-world task that we need to solve, for example maximizing the accuracy on MNIST. To do this, we use certain hypotheses/models and methods to compare them, e.g. the cross-validation score. The choice of the models and methods is up to the researcher and their prior beliefs. All these beliefs result in choices about the learning algorithm and introduce an inductive bias.

",49580,,49580,,9/8/2021 13:04,9/8/2021 13:04,,,,2,,,,CC BY-SA 4.0 31594,2,,31587,9/7/2021 19:08,,0,,"

The inductive bias assumed by a CNN is that if we translate an image, the output does not change (the image has translational symmetry), and we can see that this assumption is valid. Similarly, a spherical CNN has rotational symmetry as an inductive bias, captured by the SO(3) group (the collection of all special orthogonal $3 \times 3$ matrices), and this is valid when data lies on a sphere. The inductive bias of linear regression is that the relationship between inputs and outputs is linear.

We must choose algorithms such that the inductive bias captures the correct assumptions about the data distribution. For example, linear regression is better than polynomial regression if the data is perfectly linear, since the inductive bias of linear regression matches the data distribution.

",28048,,2444,,9/8/2021 2:31,9/8/2021 2:31,,,,3,,,,CC BY-SA 4.0 31595,1,31648,,9/7/2021 19:30,,0,128,"

Consider that we have an agent that has a set of thousands of different actions at each timestep. The reward function is $R: S \rightarrow \{0,1\}$. Let $Q_{t}^\pi(s,a)$ be the estimate from the neural network in the DQN. At timestep $t \leq T$, where $T$ is the horizon of the RL task, is there any possible way to upper bound

$$ \max_a Q_{t}^\pi(s,a) - \min_a Q_{t}^\pi(s,a) $$

where $\pi$ is the policy of DQN's Q-network (Neural Net regressor)?

",36055,,36055,,9/7/2021 20:30,9/10/2021 21:01,Are the Q-values of DQN bounded at a single timestep?,,1,6,,,,CC BY-SA 4.0 31597,1,,,9/7/2021 20:29,,5,76,"

Negative results occur frequently in AI/ML research (and perhaps in other domains too). Most of the time, these results are not published. This is mostly because your typical AI/ML conference doesn't accept such papers.

However, are there any venues to publish these results? I believe these results can be still useful to look at before you delve into a certain project so that you'd at least know what approaches don't work.

As an example venue, there seems to be PerFail workshop from the pervasive computing domain. So, is there something similar for AI/ML?

",32621,,2444,,12/29/2021 12:05,12/29/2021 12:05,Is there a venue to publish negative results in AI/ML domain?,,0,0,,,,CC BY-SA 4.0 31599,1,,,9/8/2021 1:29,,1,59,"

In deep learning, an image is said to contain two types of features. One is the content of the image and the other is the style of the image.

Deep neural networks are generally used to obtain both content representation and style representation of an image. So, one can roughly define the style and content representations of an image using deep neural networks.

Research papers generally show the foreground objects (those under consideration or in focus) in an image as the content of the image and the background (or background objects, such as the sky) as the style of the image.

If we need to define the content and style of an image without using deep neural networks, then what could the definitions of the content and style of an image be?

",18758,,2444,,9/12/2021 13:53,9/12/2021 13:53,What are the definitions for the content and style of an image without using deep neural network?,,0,1,,,,CC BY-SA 4.0 31600,2,,31587,9/8/2021 3:26,,1,,"

The inductive bias is the prior knowledge that you incorporate in the learning process that biases the learning algorithm to choose from a specific set of functions [1].

For example, if you choose the hypothesis class

$$\mathcal{H}_\text{lines} = \{f(x) = ax + b \mid a, b \in \mathbb{R} \}$$ rather than $$\mathcal{H}_\text{parabolas} = \{f(x) = ax^2 + b \mid a, b \in \mathbb{R} \},$$ then you're assuming (implicitly or explicitly, depending on whether you're aware of these concepts) that your target function (the function that you want to learn) lies in the set $\mathcal{H}_\text{lines}$. If that's the case, then your learning algorithm is more likely to find it.
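As a toy illustration (synthetic data, purely for the example), choosing the hypothesis class corresponds here to fixing the polynomial degree before fitting:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 30)
y = 1.5 * x + 0.3 + rng.normal(scale=0.1, size=x.shape)  # the target is (noisily) linear

line_fit = np.polyfit(x, y, deg=1)      # search only within the class of lines
parabola_fit = np.polyfit(x, y, deg=2)  # search within a larger class of parabolas

# Restricting the search to lines is the inductive bias; here it happens to
# match the target function, so the learning problem becomes easier.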

In most cases, you do not know exactly the nature of your target function, so you could think that it may be a good idea to choose the largest possible set of functions, but this would make learning infeasible (i.e. you would have too many functions to choose from) and could lead to over-fitting, i.e. you choose a function that performs well on your training data but is actually quite different from your target function, so it performs badly on unseen data (from your target function). This can happen because the training data may not be representative of your target function (you don't usually know this a priori, so you cannot really or completely solve this issue).

So, the definition above does not imply that the inductive bias cannot lead to over-fitting or, equivalently, that it cannot negatively affect the generalization of your chosen function. Of course, if you chose to use a CNN (rather than an MLP) because you are dealing with images, then you will probably get better performance. However, if you mistakenly assume that your target function is linear and you choose $\mathcal{H}_\text{lines}$ from which your learning algorithm can pick functions, then it will choose a bad function.

Section 2.3 of the book Understanding Machine Learning: From Theory to Algorithms and section 1.4.4. of the book Machine Learning A Probabilistic Perspective (by Murphy) provide more details about the inductive bias (the first more from a learning theory perspective, while the second more from a probabilistic one).

You may also be interested in this answer that I wrote a while ago about the difference between approximation and estimation errors (although if you know nothing about learning theory it may not be very understandable). In any case, the idea is that the approximation error (AE) can be a synonym for inductive bias because the AE is the error due to the choice of hypothesis class.

(As a side note, I think it is called "inductive bias" because this bias is the one that can make inductive inference feasible and successful [2] and maybe to differentiate it from other biases, e.g. the bias term in linear regression, although that term can also be an inductive bias).

",2444,,,,,9/8/2021 3:26,,,,0,,,,CC BY-SA 4.0 31602,1,31718,,9/8/2021 7:53,,0,85,"

I was working a little bit on a school project that my team and I decided to do for submission at the end of the year. It's a small game which I call 'Quattro', and its rules are as follows:

  1. The game is played on an 8 x 8 square grid and each player (both the human and the computer) has sixteen pieces on their side (the same layout as in chess, but here all pieces are identical for each player).
  2. Only vertical moves are allowed (i.e., one can move only one square forward at a time, along a column), as long as no other piece stands in front of the piece to be moved.
  3. However, one can cross over to a square to the north-west or north-east (when you look at a 3 x 3 grid with the piece under consideration at the centre) if the north and west/east squares of that 3 x 3 grid are held by the enemy's pieces, as with the move 'en passant' in chess. In the process, the enemy loses the piece in the square just below the square the player has crossed over to.
  4. The players involved can either be an attacker or a defender (if one chooses the former, the other takes the latter). The attacker wins if he/she successfully takes at least four of his/her pieces (hence the name Quattro) to the enemy's side (that is, to the last row counting from that player's side), while the defender wins if the attacker is prevented from doing so.

You can request me to add screenshots in case the rules are very vague (even my teammates were confused 😅).

Okay, so I'm doing this in Python 3.9.6 and I have somehow made the board layout and movement rules (except for rule 2 and rule 4, which are supposed to be added once the primary workings of the game are completed). I had somehow made the AI player (which is based on a single-layer perceptron), but I doubt whether it is working right or not. The problem is that when I make a random move, the AI player always starts at the same column and moves pieces in some order I can't clearly remember (in the primary stage of creation, it all seemed to work fine, but as time progressed, I began to see indexing errors, so I tried to adjust things somehow), and then it wanders off into an infinite loop. From a debug message I set up to observe the change in weights, I saw that at times one weight is growing while the other few would either be shrinking or remaining constant. As of now, I set up a variable to give the model a random target value (or it may not be random, it seems) to train with, and still the problem continues. I doubt whether the input data is biased some way or the other. Here's how the input is taken:

  1. The model first checks through each column, and the input corresponding to each column will be a vector containing 0's and 1's, with 1 indicating the enemy's presence and 0 otherwise. The model thus generates a 'preference score' (the activation function applied to the sum of the weighted inputs, as in any other perceptron).
  2. The same is done for rows as well, and the lists of values in both cases are passed to a dictionary, from which the player chooses the row and column indices with the highest scores and moves a piece there.

I also set up an InvalidMove exception so as to make sure that the machine doesn't play blankly.

So here's the code:

  • MarchOfTheFinalFour.py - the module containing the required exception and the board.
# MarchOfThFinalFour.py

from time import *

# 'March of The Final Four' - a clone of chess

player_piece = 'Φ' # player's piece
computer_piece = 'τ' # computer's piece

class InvalidMove(Exception):
    '''error when you/ the computer takes an invalid move'''
    def __init__(self, coords):
        self.stmt = "invalid move from:"
        self.coords = coords

class PlayTable:
    def __init__(self, table_side):
        ''' generates the game board, empty and with no pieces '''
        self.length = table_side
        self.table = [[0]*table_side]*table_side

    def __repr__(self):
        '''prints the table'''
        print()
        table_str = ''
        num = 0
        for row in self.table:
            table_str += str(num) + " \t"
            for piece in row:
                table_str += str(piece) + "|"
            table_str += "\n"
            num += 1
        return table_str + "\t0 1 2 3 4 5 6 7"
        
    def reset(self):
        ''' resets the board/ places pieces on it '''
        for row in [0, 1] :
            self.table[row] = [computer_piece]*self.length

        for row in [self.length-2, self.length - 1] :
            self.table[row] = [player_piece]*self.length
        
        for row in range(2, self.length-2):
            self.table[row] = [0]*self.length
    def move_piece(self, coord, turn = 'player'):
        '''moves the piece at coord '''
        if self.table[coord[1]%self.length][coord[0]%self.length] != 0 and self.table[(coord[1] + (1 if turn == 'computer' else -1))%self.length][coord[0]%self.length] == 0:
            temp = self.table[coord[1]][coord[0]]
            self.table[coord[1]][coord[0]] = 0
            direction = 1 if turn == 'computer' else -1
            self.table[coord[1]+direction][coord[0]] = temp
            print(f"Moved {temp} from {(coord[0] if coord[0] >= 0 else 8 + coord[0],coord[1] if coord[1] >= 0 else 8 + coord[1])}") # msg
        elif self.table[coord[1]%self.length][coord[0%self.length]] == 0 or self.table[(coord[1] + (-1)**(1 if turn == 'player' else 1))%self.length][coord[0]%self.length] != 0:
            raise InvalidMove(coord)
        elif turn == 'player' and self.table[coord[1]%self.length][coord[0]%self.length] == computer_piece:
            raise InvalidMove(coord)
        elif turn == 'computer' and self.table[coord[1]%self.length][coord[0]%self.length] == player_piece:
            raise InvalidMove(coord)
        
        
board = PlayTable(8)
board.reset()
print(board)

  • TestGameML.py - sample game, NPC, single-layer perceptron, etc. all lies here:
from math import *
from random import *
import MarchOfTheFinalFour as mff

######################

## Math functions for our use in here
    
def multiply(list_a, list_b):
    '''matrix multiplication and addition'''
    list_res = [list_a[n] * list_b[n] for n in range(len(list_a))]
    return fsum(list_res)

def sig(x):
    '''logistic sigmoid function'''
    return exp(x)/(1+ exp(x))

##############################

## Neighbourhood search

def neighbourhood(coords, board_length):
    '''generates the 3 x 3 grid that forms the neighbourhod of the required square'''
    axial_neighbours =  [(coords[0] + 1, coords[1]),(coords[0] - 1, coords[1]),
                        (coords[0], coords[1] + 1), (coords[0], coords[1] - 1)] # neighbours along NEWS directins
    diagonal_neighbours = [(coords[0] + 1, coords[1]+1),(coords[0] - 1, coords[1] - 1),
                           (coords[0]-1, coords[1] + 1), (coords[0]+1, coords[1] - 1)] #diagonal neighbours
    neighbours = axial_neighbours + diagonal_neighbours # supposed neighbours
    ## purging those coordinates with negative values in them:
    for i in range(len(neighbours)):
        if (neighbours[i][0] < 0 or neighbours[i][0] > board_length - 1) or (neighbours[i][1] < 0 or neighbours[i][1] > board_length - 1):
            neighbours[i] = 0
    while 0 in neighbours:
        neighbours.remove(0)
    
    return neighbours

########################

# The NPC's brain

class NPC_Brain:
    '''brain of the NPC ;), actually a single-layer perceptron '''
    def __init__(self,board_size):
        ''' Initialiser'''
        self.inputs = board_size # no. of input nodes for the neural network
        self.weights = [random() for i in range(self.inputs)] # random weights for each game
        self.column_scores = [] # column scores (for each column) - the 'liking' of the computer to move a piece in a column as the output
                                # of the neural network's processing
        self.row_scores = [] #same here
        self.inputs_template_columns = [] # a container to hold the inputs to the neural network
        self.inputs_template_rows = [] # same here but for rows
    def process(self, board, threshold):
        '''forward-feeding'''
        # we begin by setting the lists to zero so as to make the computer forget the past state of the board and to look for the current state
        self.inputs_template_columns = []
        self.inputs_template_rows = []
        self.column_scores = []
        self.row_scores = []
        self.row_scores = []
        for column in range(self.inputs):
            scores = [1 if row[column] == mff.computer_piece else 0 for row in board] # checking for enemies in each column
            self.inputs_template_columns.append(scores) 
            score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
            self.column_scores.append(score) # each column score is appended
        for row in range(self.inputs):
            scores = [1 if board[row][i] == mff.player_piece else 0 for i in range(self.inputs)] # checking for enemies in each column
            self.inputs_template_rows.append(scores) 
            score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
            self.row_scores.append(score) # each column score is appended
        return {'columns':self.column_scores, 'rows':self.row_scores}
    def back_prop(self, learning_rate, target = 1):
        '''Back-propagation, with error function as squared-error function (target - error)**2'''
        for j in range(len(self.inputs_template_columns)):
            for i in range(self.inputs):
                '''overfitting can occur, but still let's try this'''
                self.weights[i] +=  learning_rate * 2 * (self.column_scores[j] - target) * (self.column_scores[j]*(1-self.column_scores[j])) * self.inputs_template_columns[j][i] #backprop formula
        for k in range(len(self.inputs_template_rows)):
            for i in range(self.inputs):
                '''overfitting can occur, but still let's try this'''
                self.weights[i] += learning_rate * 2 * (self.row_scores[k] - target) * (self.row_scores[k]*(1-self.row_scores[k])) * self.inputs_template_rows[k][i] #backprop formula
                
    
        

class NPC:
    ''' non-playable character / computerized player class '''
    def __init__(self):
        self.mind = NPC_Brain(mff.board.length) # the model
        self.piece_lower = 0; self.piece_upper = 1 # initial row numbers of the computer's pieces
        self.row_expanse = 2
        
    def make_move(self):
        moved = False
        req_target = 0.5
        counter = 1
        while not moved:
            if counter % 50 == 0:
                req_target += log(req_target**(counter%25))
                print("New target set:", req_target)
            score_board = temp = self.mind.process(mff.board.table, 0.5) # feeding forward
            x_coord = score_board['columns'].index(max(score_board['columns'])) # choosing the column the compute likes the most
            y_coord = score_board['rows'].index(max(score_board['rows'])) % self.row_expanse # a random y coordinate is chosen
            try:
                if y_coord < mff.board.length - 1: 
                    if mff.board.table[int(y_coord) + 1][int(x_coord)] == 0 and (mff.board.table[int(y_coord)][int(x_coord)] not in  [0, mff.player_piece]):
                        mff.board.move_piece((int(x_coord), int(y_coord)), turn = 'computer')
                        self.piece_upper += 1 #increasing the upper limit of the y coordinate by 1
                        moved = True
                        req_target += 0.0001
                        self.row_expanse += 1
                        counter += 1
                    else:
                        raise mff.InvalidMove((x_coord,y_coord))
                    counter += 1 
            except mff.InvalidMove:
                # trying to avoid the computer's confusion
                self.mind.back_prop(1/pi, target = req_target) # making the computer learn from its decision
                req_target -= 0.0001
                counter += 1
                    
            
                
            
                


npc = NPC() # creating the NPC

## Sample gamplay
## The following gameplay will be a bit smooth in the beginning but turns into a confusion later
all_gone_good = True
while True:
    all_gone_good = True 
    # infinite loop here till errors occur
    player_mv = eval(input("Enter your move:")) # waiting for the player's move
    try:
        mff.board.move_piece(player_mv)
    except mff.InvalidMove:
        print("Invalid move")
        all_gone_good = False
    # next we check if the player's move was valid
    if all_gone_good:
        print(mff.board)
        npc.make_move()
        print(mff.board)
        

I am sorry that I haven't been able to comment in certain regions of the code, in which case you can ask me for clarification.

My main doubts are: is my data acquisition method biased? Is the training part also a little bit wacky? Or is it that I programmed it all without knowing what I am doing? What's actually causing such an infinite loop?


Edit: I have edited TestGameML.py, and the updated version is below:

from math import *
from random import *
import MarchOfTheFinalFour as mff

######################
##Bug fixes required:

##1. The machine is making multiple moves unknowingly

######################

## Some variables for global use

my_move = (0,0)

## Math functions for our use in here
    
def multiply(list_a, list_b):
    '''matrix multiplication and addition'''
    list_res = [list_a[n] * list_b[n] for n in range(len(list_a))]
    return fsum(list_res)

def sig(x):
    '''logistic sigmoid function'''
    return exp(x)/(1+ exp(x))

##############################

## Neighbourhood search

def neighbourhood(coords, board_length):
    '''generates the 3 x 3 grid that forms the neighbourhod of the required square'''
    axial_neighbours =  [(coords[0] + 1, coords[1]),(coords[0] - 1, coords[1]),
                        (coords[0], coords[1] + 1), (coords[0], coords[1] - 1)] # neighbours along NEWS directins
    diagonal_neighbours = [(coords[0] + 1, coords[1]+1),(coords[0] - 1, coords[1] - 1),
                           (coords[0]-1, coords[1] + 1), (coords[0]+1, coords[1] - 1)] #diagonal neighbours
    neighbours = axial_neighbours + diagonal_neighbours # supposed neighbours
    ## purging those coordinates with negative values in them:
    for i in range(len(neighbours)):
        if (neighbours[i][0] < 0 or neighbours[i][0] > board_length - 1) or (neighbours[i][1] < 0 or neighbours[i][1] > board_length - 1):
            neighbours[i] = 0
    while 0 in neighbours:
        neighbours.remove(0)
    
    return neighbours

########################
# The NPC's brain

class NPC_Brain:
    '''brain of the NPC ;), actually a single-layer perceptron '''
    def __init__(self,board_size):
        ''' Initialiser'''
        self.inputs = board_size # no. of input nodes for the neural network
        #self.weights = [random() for i in range(self.inputs)] random weights for each game
        self.weights = [0.5]*self.inputs
        self.column_scores = [] # column scores (for each column) - the 'liking' of the computer to move a piece in a column as the output
                                # of the neural network's processing
        self.row_scores = [] #same here
        self.inputs_template_columns = [] # a container to hold the inputs to the neural network
        self.inputs_template_rows = [] # same here but for rows
        
    def process(self, board, threshold):
        '''forward-feeding'''
        # we begin by setting the lists to zero so as to make the computer forget the past state of the board and to look for the current state
        self.inputs_template_columns = []
        self.inputs_template_rows = []
        self.column_scores = []
        self.row_scores = []
        for column in range(self.inputs):
            scores = [(1/8)**(row + 1 if row == my_move[1] else 1) if board[row][column] == mff.player_piece else -1/8 for row in range(self.inputs)] # checking for enemies in each column
            self.inputs_template_columns.append(scores) 
            score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
            self.column_scores.append(score) # each column score is appended
        for row in range(self.inputs):
            scores = [(1/8)**(i + 1 if i == my_move[0] else 1) if board[row][i] == mff.player_piece else -1/8 for i in range(self.inputs)] # checking for enemies in each column
            self.inputs_template_rows.append(scores) 
            score = sig(multiply(scores, self.weights)/threshold) # using the logistic sigmoid function to generate a liking for columns :D
            self.row_scores.append(score) # each column score is appended
        return {'columns':self.column_scores, 'rows':self.row_scores}
    
    def back_prop(self, learning_rate, target = 1):
        '''Back-propagation, with error function as squared-error function (target - error)**2'''
        for j in range(len(self.inputs_template_columns)):
            for i in range(self.inputs):
                '''overfitting can occur, but still let's try this'''
                self.weights[i] +=  -learning_rate * 2 * (self.column_scores[j] - target) * ((self.column_scores[j]**2)*(1-self.column_scores[j])) * self.inputs_template_columns[j][i] #backprop formula
        for k in range(len(self.inputs_template_rows)):
            for i in range(self.inputs):
                '''overfitting can occur, but still let's try this'''
                self.weights[i] += -learning_rate * 2 * (self.row_scores[k] - target) * ((self.row_scores[k]**2)*(1-self.row_scores[k])) * self.inputs_template_rows[k][i] #backprop formula
                
    
        

class NPC:
    ''' non-playable character / computerized player class '''
    def __init__(self):
        self.mind = NPC_Brain(mff.board.length) # the model
        self.piece_lower = 0; self.piece_upper = 1 # initial row numbers of the computer's pieces
        self.row_expanse = 2
        
    def make_move(self):
        moved = False
        req_target = 0.5
        counter = 1
        print("Thinking...")
        while not moved:
            score_board = temp = self.mind.process(mff.board.table, 0.5) # feeding forward
            x_coord = score_board['columns'].index(min(score_board['columns'])) # choosing the column the compute likes the most
            y_coord = score_board['rows'].index(max(score_board['rows'])) % self.row_expanse # a random y coordinate is chosen
            try:
                if y_coord < mff.board.length - 1: 
                    if mff.board.table[int(y_coord) + 1][int(x_coord)] == 0 and (mff.board.table[int(y_coord)][int(x_coord)] not in  [0, mff.player_piece]):
                        mff.board.move_piece((int(x_coord), int(y_coord)), turn = 'computer')
                        self.piece_upper += 1 #increasing the upper limit of the y coordinate by 1
                        moved = True
                        self.row_expanse += 1
                        counter += 1
                    else:
                        raise mff.InvalidMove((x_coord,y_coord))
                    counter += 1 
            except mff.InvalidMove:
                # trying to avoid the computer's confusion
                self.mind.back_prop(0.5, target = req_target) # making the computer learn from its decision
                counter += 1
                    
            
                
            
                


npc = NPC() # creating the NPC

## Sample gamplay
## The following gameplay will be a bit smooth in the beginning but turns into a confusion later
all_gone_good = True
while True:
    all_gone_good = True 
    # infinite loop here till errors occur
    player_mv = eval(input("Enter your move:")) # waiting for the player's move
    try:
        mff.board.move_piece(player_mv)
    except mff.InvalidMove:
        print("Invalid move")
        all_gone_good = False
    # next we check if the player's move was valid
    if all_gone_good:
        my_move = player_mv
        print(mff.board)
        npc.make_move()
        print(mff.board)

Changelog:

  • Changed the data distribution method in lines 71 and 76.
  • Asked the NPC to choose the column with the least column score and the maximum row score.
",40583,,40583,,9/8/2021 13:37,9/17/2021 13:30,Is my single layer perceptron getting biased input some way or the other?,,1,15,,,,CC BY-SA 4.0 31607,1,,,9/8/2021 14:41,,0,31,"

While reading the Notation of the paper titled Geometric Deep Learning Grids, Groups, Graphs, Geodesics, and Gauges, I came across the following notations.

$$ \Omega = \text{ Domain} \\ u = \text{Point on domain} $$

I can only understand domain here as a set and point as an element in the set $\Omega$.

I have a simple doubt here.

Why do we call the set $\Omega$ the domain? Is it due to the notation related to mathematical functions?

$$\text{function_name} : \text{domain} \rightarrow \text{co_domain}$$

Or is it due to the usage of the word domain in the sense of an application domain (a specified sphere of activity or knowledge, such as computer vision, natural language processing, etc.)?

Or a combination of both?

",18758,,,,,9/8/2021 14:41,What is meant by domain in the notations of geometric deep learning?,,0,4,,,,CC BY-SA 4.0 31608,1,,,9/8/2021 14:47,,1,64,"

While reading the Notation of the paper titled Geometric Deep Learning Grids, Groups, Graphs, Geodesics, and Gauges, I came across the following notations.

$$ \Omega = \text{ Domain} \\ u = \text{Point on domain} \\ x(u) \in \mathcal{X}(\Omega, C) = \text{ Signal on the domain of the form } x : \Omega \rightarrow C $$

Mathematically, a signal is just a function.

But, every function may not be a signal. There may be some distinction between a mathematical function and a signal.

When can I call any mathematical function a signal? And what is $\mathcal{X}$ in the notation given?

",18758,,18758,,9/11/2021 9:43,9/11/2021 9:43,Can I call any function a signal?,,0,1,,,,CC BY-SA 4.0 31610,2,,31583,9/8/2021 18:22,,2,,"

The use of normalisation in neural networks and many other (but not all - decision trees are a notable exception) machine learning methods is to improve the quality of the parameter space with respect to the optimisers that will be applied to it.

If you are using a function approximator that benefits from normalisation in supervised learning scenarios, it will also benefit from it in reinforcement learning scenarios. That is definitely the case for neural networks. And neural networks are by far the most common approximator used in deep reinforcement learning.

Unlike supervised learning, you will not have a definitive dataset where you can find mean and standard deviation in order to scale to the common $\mu = 0, \sigma = 1$. Instead you will want to scale to a range, such as $[-1, 1]$.
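As a minimal sketch (the bounds below are the ranges mentioned in the question; whether they are really the correct bounds for your environment is an assumption you would have to check):

import numpy as np

obs_low = np.array([1.0, 0.0])        # per-component minimum of the observation
obs_high = np.array([20.0, 50000.0])  # per-component maximum of the observation

def normalise(obs):
    """Rescale each observation component to the range [-1, 1]."""
    obs = np.asarray(obs, dtype=float)
    return 2.0 * (obs - obs_low) / (obs_high - obs_low) - 1.0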

You may also want to perform some basic feature engineering first, such as using log of a value, or some power of it - anything that would make the distribution of values you expect to see more like a Normal distribution. Again, this is something you could do in supervised learning more easily, but maybe you know enough about the feature to make a good guess.

",1847,,,,,9/8/2021 18:22,,,,0,,,,CC BY-SA 4.0 31611,1,,,9/8/2021 21:25,,4,179,"

Consider a single-player card game which shares many characteristics to "unprofessional" (not being played in casino, refer point 2) Blackjack, i.e.:

  • You're playing against a dealer with fixed rules.
  • You have one card deck which is played completely through.
  • etc. An exact description of the game isn't needed for my question, thus I remain with these simple bullet points.

Especially the second point bears an important implication. The more cards that have been seen, the higher your odds of predicting the subsequent card - up to a 100% probability for the last card. Obviously, this rule allows for precise exploitation of said game.

As far as the action and state space is concerned: The action space is discrete, the player only has a fixed amount of actions (in this case five - due to the missing explanation of the rules, I won't go in-depth about this). Way more important is the state space. In my case, I decided to structure it as follows:

A 2 3 4 5 6 7 8 9 10 J
4 4 4 4 4 4 4 4 4 14 2

The first part of my state space describes how many cards of each value are left in the stack; thus, on the first move, all 52 cards are still in the deck. This part alone allows for about 7 million possible variations.

Second part of the state space describes various decks in the game (once again without the rules it's hard to explain in detail). Basically five integers ranging from 0-21, depending on previous actions. Another 200k distinct situations.

The third part consists of two details - some known cards and information. Though they only account for a small factor, they still bring a considerable amount of variation into my state space.

Thus a complete state space might look something like this: Example one would be the start setup: 444444444142;00000;00. Another example in midst of the game: 42431443081;1704520;013. Semicolons have been added for readability purposes.

So now arises the question: from my understanding, my state space is definitely finite and discrete, but too big to be solved by SARSA, Q-learning, Monte Carlo or the like. How can I approach projects with a big state space without losing a huge chunk of predictability (which I might fear with DQN, DDPG or TD3)? Particularly due to the fact that only one deck is being used - and it's played through in this game - it seems like a more precise solution should be possible.

",49694,,,,,9/11/2021 11:00,How to approach a blackjack-like card game with the possibility of cards being counted?,,1,8,,,,CC BY-SA 4.0 31612,1,,,9/8/2021 23:07,,0,109,"

Recently, I ran a code on my system that involves deep neural networks. The number of epochs provided by the designers are 301.

I tried to increase the number of epochs to 501. To my shock, the model after 350 epochs is behaving eccentric. And I can say that they are just returning crazy values.

What makes such phenomena possible? Is the "number of epochs" also a hyperparameter/magic number with an upper bound beyond which the model fails?

",18758,,145,,6/6/2022 5:14,6/6/2022 5:14,Does an increase in the number of epochs lead to complete breakdown?,,1,3,,,,CC BY-SA 4.0 31613,2,,31612,9/8/2021 23:18,,1,,"

There is nothing special about these particular numbers. Everything depends on the NN, software, model and data. As illustrated, as the number of epochs increases, the weights are changed more times and the curve goes from underfitting to overfitting. And overfitting is exactly eccentric and crazy; see the picture at the left.

",49564,,,,,9/8/2021 23:18,,,,4,,,,CC BY-SA 4.0 31614,1,32543,,9/8/2021 23:22,,1,220,"

I was reading the following article on Towards Data Science (here) and it says the following, regarding the calculation of convolutional layers:

So the overall steps are:

  1. Transform the graph into the spectral domain using eigendecomposition
  2. Apply eigendecomposition to the specified kernel
  3. Multiply the spectral graph and spectral kernel (like vanilla convolutions)
  4. Return results in the original spatial domain (analogous to inverse GFT)

Question: How can we visualize the convolutional layer working for a graph neural network?

For example, for a CNN we can imagine the following (source: Stanford CS231n YouTube lectures, Lecture 5: Convolutional Neural Networks (here)). What is the analogous image for a graph convolutional filter?

",49689,,49689,,9/19/2021 18:50,11/29/2021 5:06,How do convolutional layers of basic Graph Convolutional Networks work?,,1,4,,,,CC BY-SA 4.0 31615,1,,,9/9/2021 1:36,,0,69,"

Generative Adversarial Networks, in general, consist of two multilayer perceptrons: a generator and a discriminator. The generator is used for generating samples that look as real as the training samples, and the discriminator tries to discriminate between real and fake samples.

The generator receives noise and generates samples.

Some papers, like this one, use extra networks:

To achieve this, one can train a convolutional network to invert $G$ to regress from samples $\hat{x} \leftarrow G(z, \phi(t))$ back onto $z$.

And I remember some other papers saying that we use the same generator architecture to invert it.

What is meant by inverting the generator of a GAN? Does inverting mean passing samples in and getting the noise as output, using the same or a different architecture?

",18758,,18758,,9/10/2021 0:42,9/10/2021 0:42,What is meant by inverting the generator?,,0,7,,,,CC BY-SA 4.0 31618,1,,,9/9/2021 3:15,,0,133,"

I was reading the following paper: here. In it, it talks about spectral graph convolutions and says:

We consider spectral convolutions on graphs defined as the multiplication of a signal $x \in R^N$ (a scalar for every node) with a filter $g_{\theta}$ $=$ $\text{diag} (\theta)$ parameterized by $\theta \in R^{N}$ in the Fourier domain, i.e.: $ g_{\theta} * x = U g_{\theta} U^Tx $. We can understand $ g_{\theta}$ as a function of the eigenvalues of $L$, i.e. $g_{\theta}(\Lambda)$

So far, it makes sense. $U^T x$ is the graph Fourier transform of the signal $x$, then we multiply by $ g_{\theta}$ in the Fourier domain as: $FT(f * g) = F(\omega)G(\omega)$. Then we have the multiplication by $U$ in the front to represent the inverse (graph) Fourier transform.

Then the paper lists some reasons why using the above convolution equation may not be practical in reality:

  • Evaluating the above equation is computationally expensive; multiplying with eigenvector matrix $U$ is $O(N^2)$
  • Computing eigendecomposition of $L$ may be too expensive for an arbitrarily large graph
  • etc.

and then the paper says:

To circumvent this problem, it was suggested in Hammond et al. (2011) that $g_{\theta}(\Lambda)$ can be well-approximated by a truncated expansion in terms of Chebyshev polynomials $T_k (x)$ up to $K^{\text{th}}$ order: $$ g_{\theta '}(\Lambda) \approx \sum_{k = 0}^{K} \theta_k ' T_k(\tilde{\Lambda}) $$

with a rescaled $\tilde{\Lambda} = \frac{2}{\lambda_{\text{max}}}\Lambda − I_N$. $\lambda_{\text{max}}$ denotes the largest eigenvalue of $L$. $\theta ′ \in R^K$ is now a vector of Chebyshev coefficients. The Chebyshev polynomials are recursively defined as $T_k(x) = 2xT_{k−1}(x) − T_{k−2}(x)$, with $T_0(x) = 1$ and $T_1(x) = x$. The reader is referred to Hammond et al. (2011) for an in-depth discussion of this approximation. Going back to our definition of a convolution of a signal $x$ with a filter $g_{\theta '}$, we now have: $$ g_{\theta '} * x \approx \sum_{k=0}^{K} \theta_k ′ T_k (\tilde{L}) x$$ with $\tilde{L} = \frac{2}{\lambda_{\text{max}}}L − I_N$ ; as can easily be verified by noticing that $(U \Lambda U^T)^k = U \Lambda^k U^T $

Question: What happened to the terms $U^T$ and $U$ which take the (graph) Fourier transform and invert it respectively?

Attempt: Does it have something to do with what is mentioned in the last line about noticing that $(U \Lambda U^T)^k = U \Lambda^k U^T $? I might guess that we use that because a k-th order Chebyshev polynomial will have $\Lambda ^k$ (and lower powers) present in the equation and thus the $U^T$ and $U$ mean that we can write the convolution equation in terms of the Laplacian matrix $L$
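A small numerical check of this attempt (a random small graph Laplacian, signal and coefficients, all made up for illustration): the filtered signal $\sum_k \theta'_k T_k(\tilde{L})x$ can be computed with the recursion alone, using only matrix-vector products with $\tilde{L}$, so $U$ and $U^T$ never appear explicitly.

import numpy as np

rng = np.random.default_rng(0)
N, K = 6, 3
A = (rng.random((N, N)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                      # random undirected adjacency matrix
L = np.diag(A.sum(axis=1)) - A                      # combinatorial graph Laplacian
lam_max = np.linalg.eigvalsh(L).max()
L_tilde = 2.0 / lam_max * L - np.eye(N)

x = rng.normal(size=N)                              # made-up signal on the nodes
theta = rng.normal(size=K + 1)                      # made-up Chebyshev coefficients

# T_0(L~) x = x, T_1(L~) x = L~ x, T_k = 2 L~ T_{k-1} - T_{k-2}
T_prev, T_curr = x, L_tilde @ x
out = theta[0] * T_prev + theta[1] * T_curr
for k in range(2, K + 1):
    T_prev, T_curr = T_curr, 2 * L_tilde @ T_curr - T_prev
    out += theta[k] * T_curr
# `out` is the filtered signal, computed without any eigendecomposition of L.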

",49689,,2444,,12/19/2021 11:48,12/19/2021 11:48,How does Chebyshev approximation of spectral convolution work?,,0,2,,,,CC BY-SA 4.0 31620,1,,,9/9/2021 7:23,,2,36,"

The following excerpt is taken from 3. The Inception Score for Image Generation from the paper titled A Note on the Inception Score.

Suppose we are trying to evaluate a trained generative model $G$ that encodes a distribution $p_g$ over images $\hat{x}$. We can sample from $p_g$ as many times as we would like, but do not assume that we can directly evaluate $p_g$. The Inception Score is one way to evaluate such a model.

The excerpt is saying that we are not directly evaluating $p_g$, the generator distribution, but are trying to evaluate the model $G$.

Does the excerpt intend to say that it is practically impossible to evaluate $p_g$?

",18758,,,,,9/11/2021 10:43,Is it impossible to evaluate the generator distribution directly?,,1,0,,,,CC BY-SA 4.0 31621,2,,31620,9/9/2021 7:47,,2,,"

Does the excerpt intend to say that it is practically impossible to evaluate $p_g$?

No, this statement says that they assume that they are not going to directly evaluate $p_g$ for a given problem, and they offer other evaluation criteria which do not require doing so, which is the point of the paper. I think the authors have chosen the wording carefully and accurately.

They have probably been motivated to publish about this approach, because it is not possible to evaluate $p_g$ in most use cases for GANs. Image generators output highly complex functions with very high dimensionality and a lot of correlation between pixel values. So the paper is of interest because it is hard (usually a practical impossibility) to define $p_g$ directly, or use something like KL divergence to assess the generator's match to the real distribution.

",1847,,1847,,9/11/2021 10:43,9/11/2021 10:43,,,,0,,,,CC BY-SA 4.0 31622,1,,,9/9/2021 12:31,,0,224,"

I am working with a deep CNN with over 100k sample data. I divided it up into 75% training, 12.5% validation and 12.5% for testing. As I train my network, the training accuracy approaches near 100% accuracy. The validation accuracy approaches 70-90% accuracy. The validation accuracy is always increasing and never decreases so I do not believe that the network is over-training.

The test accuracy is similar to the validation accuracy, but both are less than the training accuracy.

My question is: what is causing the accuracy on my validation/test data to trail the training accuracy? Is it because my validation/test sets contain sample types which are not found in the training set? What else might be causing this?

Additionally, between epochs, I see a 'staircase' pattern in learning, in that there is a huge jump in accuracy as soon as a new epoch starts. I am shuffling my data between epochs. What might be causing this jump in accuracy?

Also, if there are more technical terms for the events that I am describing please let me know so that I can further research these.

Thank you!

blue = training, black = validation

",49703,,,,,10/4/2022 16:03,Validation accuracy less than training accuracy (with no sigh of overtraining),,1,0,,,,CC BY-SA 4.0 31623,1,,,9/9/2021 12:43,,1,48,"

I'm working on a problem using DDPG.

Is it possible to add some intelligence in the initialization phase, such that the convergence time is improved/shortened and local optima are avoided as much as possible?

For example, this may include assigning (higher) probabilities to (better) actions (in the action selection algorithm) at the start of an episode. This hopefully leads to the agent discovering and selecting "better" actions faster, rather than starting from more random ones. Or won't this work, since the neural networks will just unlearn these initial values during the training process?

Also, with the above description, am I better off using Soft Actor-Critic?

",49704,,2444,,9/9/2021 14:06,9/9/2021 14:06,Setting initial values in DDPG to favor better actions,,0,0,,,,CC BY-SA 4.0 31625,2,,31622,9/9/2021 13:17,,1,,"

The validation trend doesn't inform you much about real overfitting, because the model hyperparameters are optimized based on the validation set. That is the reason why the validation scores are usually better than the test ones. So, to truly check for overfitting, you should constantly look at the test scores.

The jumps make me think that you're using all training instances in each epoch, so no matter whether you shuffle or not, the model performs better because it starts going through already-seen examples.

In general, don't expect to reach a test accuracy similar to the training one; that happens only with perfect toy datasets. If you have reasons to think the test accuracy should be much closer than what you are observing, then inspect your dataset and the splitting you're performing more closely. Specifically, check whether you have imbalanced classes that appear more in validation/test than in training, a larger variance in data features for validation/test than for training, or any other form of imbalance, depending on the type of data you're using.

",34098,,,,,9/9/2021 13:17,,,,1,,,,CC BY-SA 4.0 31626,1,,,9/9/2021 13:46,,3,105,"

This is an empirical question: essentially, how many tasks do you need data for to make a useful meta-learning model (e.g. using MAML)? I'm looking for ranges based on personal experience, or, if anyone has done research on the topic and you know of references for the estimates, that would be helpful as well.

For context I'm trying to work with about 5-7 tasks. I saw a person implement meta-learning with about this many in the paper Multi-MAML. But I've since seen example code in the learn2learn library which uses thousands of tasks...

P.S. I'm not sure if different parameterizations of a single task definition are still 'one task' (e.g. y=a*cos(x), where 'a' varies). Could that account for the discrepancy?

",42996,,42996,,9/13/2021 18:45,9/13/2021 18:45,How many tasks are needed for meta-learning?,,0,0,,,,CC BY-SA 4.0 31627,1,31649,,9/9/2021 18:52,,1,207,"

I am looking to have a cooperative multi-agent reinforcement learning framework where one agent has a discrete action space and another agent has a continuous action space. Is there a way to do this, as most papers I have seen handle only one or the other?

",49707,,,,,9/11/2021 2:47,Multi Agent Deep Reinforcement Learning for continuous and discrete action,,1,2,,,,CC BY-SA 4.0 31628,1,31643,,9/9/2021 20:31,,1,579,"

I have trained a classification network with PyTorch lightning where my training step looks like below:

# inside a LightningModule; assumes `import torch.nn.functional as F`
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self(x)                               # raw logits from the model
    loss = F.cross_entropy(y_hat, y)              # cross_entropy expects raw logits
    self.log("train_loss", loss, on_epoch=True)
    return loss                                   # Lightning expects the loss to be returned

When I look at the output logits, almost all of them are very large negative numbers, with one that is usually 0. Is this normal, or might something be wrong with my training?

I am just using nn.LogSoftmax() on the outputs and taking the max to make my predictions, but my network is not doing so well when I run it on unseen data, and I want to make sure the problem is just me overfitting.

",49707,,2444,,9/11/2021 20:30,9/11/2021 20:30,Is it normal that the values of the LogSoftmax function are very large negative numbers?,,1,1,,9/15/2021 13:08,,CC BY-SA 4.0 31629,1,,,9/10/2021 0:31,,0,46,"

Context: I was reading the following set of notes (page 83): here and it says:

Thus, the Fourier transform of signal (or function) $ \mathbf{f} \in R^{|V|} $ on a graph can be computed as $$ \mathbf{s} = \mathbf{U}^T \mathbf{f} $$

Question: What happens if each node has multiple 'signals'? Are the Fourier transforms on each signal independent of one another?

Attempt: I assume that each signal is denoted as a column vector, and thus multiple signals may be written as a matrix $\mathbf{F} = [\mathbf{f_1}, \mathbf{f_2}, ..., \mathbf{f_n}]$ (where $ \mathbf{f_i} \in R^{|V|} $) for a graph with $n$ signals. Thus, the graph Fourier transform would be $$ \mathbf{S} = \mathbf{U}^T \mathbf{F} $$ and so each of the Fourier transforms would be independent of one another. Is this the correct way to think about this?
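To check this, here is a small NumPy sketch (a toy graph of my own, not from the notes) showing that transforming the matrix $\mathbf{F}$ is the same as transforming each column independently:

import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((5, 5)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                 # toy symmetric adjacency with zero diagonal
L = np.diag(A.sum(axis=1)) - A                 # combinatorial graph Laplacian
_, U = np.linalg.eigh(L)                       # columns of U are the eigenvectors

F = rng.normal(size=(5, 3))                    # 3 signals on 5 nodes, one per column
S = U.T @ F                                    # graph Fourier transform of all signals at once
assert np.allclose(S[:, 1], U.T @ F[:, 1])     # each column is transformed independently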

Many thanks in advance!

",49689,,,,,9/10/2021 0:31,How does graph Fourier transform work when multiple signals present on each node?,,0,2,,,,CC BY-SA 4.0 31630,1,,,9/10/2021 0:40,,1,115,"

I saw two versions of the optimality equation for $V_{*}(s)$ and $Q_{*}(s,a)$.

The first one is:

$$ V_{*}(s)=\max _{a} \sum_{s^{\prime}} P_{s s^{\prime}}^{a}\left(r(s, a)+\gamma V_{*}\left(s^{\prime}\right)\right) $$

and

$$ Q_{*}(s, a)=\sum_{s^{\prime}} P_{s s^{\prime}}^{a}\left(r(s, a)+\gamma \max _{a^{\prime}} Q_{*}\left(s^{\prime}, a^{\prime}\right)\right) $$

The second one is:

$$ V_{*}(s)=\max _{a \in \mathcal{A}}\left(R(s, a)+\gamma \sum_{s^{\prime} \in \mathcal{S}} P_{s s^{\prime}}^{a} V_{*}\left(s^{\prime}\right)\right) $$

and for $Q_*$

$$ Q_{*}(s, a)=R(s, a)+\gamma \sum_{s^{\prime} \in \mathcal{S}} P_{s s^{\prime}}^{a} \max _{a^{\prime} \in \mathcal{A}} Q_{*}\left(s^{\prime}, a^{\prime}\right) $$

If we follow the distributive property to get from the first to the second expression, why is there no summation term for the reward? For example, why not $$V_{*}(s) = \max_{a}\left(\sum_{s'}P^{a}_{ss'}r(s,a)+\gamma\sum_{s'}P^{a}_{ss'}V_{*}(s')\right)?$$

My guess is that $r(s,a)$ is a constant with respect to $s'$, so it can be moved out of the summation, leaving $$r(s,a)\sum_{s'}P^{a}_{ss'} = r(s,a).$$

But is it always the case that $r(s,a)$ is independent of $s'$? I think the reward of moving from state $s$ to $s'$ may vary.

",49598,,2444,,12/22/2021 12:43,12/22/2021 12:43,How are these two versions of the Bellman optimality equation related?,,1,0,,,,CC BY-SA 4.0 31632,1,,,9/10/2021 2:06,,2,1385,"

This question is restricted to the text domain only.

The meaning of the word "encode" is to convert (information or an instruction) into a particular form. Something which performs encoding is called an encoder.

In deep learning, an encoder can also be the first part of a neural network (autoencoder) that approximates the identity function; this is consistent with the English meaning of encoder, since it encodes the input.

Embeddings are encodings where the intention is to preserve semantics. You can observe the following excerpt from the chapter Vector Semantics and Embeddings

In this chapter we introduce vector semantics, which instantiates this linguistic hypothesis by learning representations of the meaning of words, called embeddings, directly from their distributions in texts.

But not all encodings may be embeddings, since encodings might not always preserve semantics (?). I have doubts about this statement, which I inferred based on my current knowledge.

Many times, I have come across the terms text encoding and text embedding used interchangeably. But I fail to grasp whether they are the same or whether we need to be careful about which one we use.

Consider the following usages of encoding and embedding in the paper titled Generative Adversarial Text to Image Synthesis by Scott Reed et al.

#1: The intuition here is that a text encoding should have a higher compatibility score with images of the corresponding class compared to any other class and vice-versa.

#2: Text encoding $\phi(t)$ is used by both generator and discriminator.

#3: ...where $T$ is the dimension of the text description embedding.

#4: ... we encode the text query $t$ using text encoder $\phi$. The description embedding $\phi(t)$ is first compressed ...

I think they are used interchangeably. Is that true? Can I use either word if I am confident enough that my encoding is semantics-preserving? Or is there any strong reason for choosing one over the other?

If you observe the last point, the word "encoder" is used. Can I use embedder instead of it?

",18758,,2444,,12/26/2021 12:22,12/27/2021 14:29,"Can I always use ""encoding"" and ""embedding"" interchangeably?",,2,1,,,,CC BY-SA 4.0 31633,1,,,9/10/2021 2:11,,-1,606,"

I am watching David Silver's lectures on RL available on YouTube. My question here is with regard to Lecture 2 (Link to Video). At 1:11:00, I could not understand how he is calculating the state-value functions for C1, C2 and C3 (nodes with values 6, 8 and 10 respectively) in the student MDP example, starting from C3 and working backwards. Can someone please explain this?

",49709,,49709,,9/11/2021 20:08,9/18/2021 19:34,Calculating state-value functions in Markov Decision Process,,1,3,,,,CC BY-SA 4.0 31635,2,,31630,9/10/2021 7:11,,1,,"

My guess is that $r(s,a)$ is the constant so it can be moved out of the summation, leaving $r(s,a)\sum_{s'}P^{a}_{ss'} = r(s,a)$

Yes, this is the case. More specifically:

  • $r(s,a)$ is the expected reward after taking action $a$ in state $s$.
  • Reward may depend on the state arrived in, $s'$, but that is ignored in the equations.
  • Reward may vary randomly, but by using the expected reward, this can be ignored.

The first equations you quote, which sum over $s'$ but use $r(s,a)$ inside that sum, are very misleading IMO, since the individual terms may not represent anything meaningful within the MDP. That is, the term $r(s,a) + \gamma V^*(s')$ does not correspond to any part of the trajectory of the agent.

Although the sum is still mathematically sound, it is more normal to see a different term $r(s,a,s')$ (the expected reward similar to $r(s,a)$ but also conditional on $s'$) where the expected reward is used inside the sum of next states. The term $r(s,a,s') + \gamma V^*(s')$ does correspond to nodes on the trajectory of the agent. It is the expected future return from $s,a$ conditional on the state transitioning to $s'$.

but is it always the case that $r(s,a)$ is independent of $s'$. I think the reward of moving from state $s$ to $s'$ may vary.

Yes, $r(s,a)$ is independent of $s'$. Although individual rewards may vary stochastically, and may depend on $s'$ too, the term is already the expected reward when taking the action $a$ in state $s$. So it already includes any effects of random state transitions and random rewards. For the Bellman equations to work as written, the expectation needs to be independent of the policy $\pi$ and thus a property of the environment, and this is the case.

I think both sets of equations are a little awkward because they use the expected reward, yet sum expectations over the state transition matrix. I prefer the notation used in the second edition of Sutton & Barto's Reinforcement Learning: An Introduction:

$$v^*(s) = \text{max}_a \sum_{r,s'} p(r,s'|s,a)(r + \gamma v^*(s'))$$

Where $p(r, s'|s,a)$ is the conditional probability of observing reward $r$ and next state $s'$ given initial state $s$ and action $a$. The $p(r, s'|s,a)$ function replaces the combination of state transition matrices $P_{ss'}^a$ and the expected reward (either $r(s,a)$ or $r(s,a,s')$). Those objects can be derived from $p(r,s'|a,s)$ if you want, but personally I find the newer notation easier to follow.
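For completeness, the objects in the two notations relate as follows (standard identities, stated here for reference):

$$P_{ss'}^a = \sum_{r} p(r, s'|s,a), \qquad r(s,a) = \sum_{r,s'} r \, p(r, s'|s,a), \qquad r(s,a,s') = \frac{\sum_{r} r \, p(r, s'|s,a)}{\sum_{r} p(r, s'|s,a)}$$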

",1847,,1847,,9/10/2021 9:59,9/10/2021 9:59,,,,0,,,,CC BY-SA 4.0 31638,1,,,9/10/2021 11:42,,3,359,"

To me, it seems that PReLU is strictly better than ReLU. It does not have the dying ReLU problem, it allows negative values, and it has trainable parameters (which are computationally negligible to adjust). Only if we want the network to output positive values does it make sense to use ReLU in the output layer. Other than that, I don't see why, a priori, I would decide to choose ReLU over PReLU. However, most architectures I have come across use ReLU activations. Why? Am I missing something?

",49715,,2444,,9/10/2021 18:30,9/11/2021 13:14,Why should one ever use ReLU instead of PReLU?,,1,0,,,,CC BY-SA 4.0 31639,2,,31638,9/10/2021 14:40,,2,,"

I suppose the situation is as follows: PReLU increases the expressiveness of a model a bit at a small cost, but the gain is almost negligible as well (according to this post).

There is, indeed, a noticeable difference between ReLU and PReLU, since the former takes the same value for all $\mathbb{R}_{\leq 0}$.

However, compared with LeakyReLU, note that this activation is accompanied by a linear operation, like a Dense or Convolution layer: $$ y \sim f \left(\sum w \cdot x \right) $$ and the slope $\alpha$ can be absorbed into the weights of the neural network.

",38846,,2444,,9/11/2021 13:14,9/11/2021 13:14,,,,0,,,,CC BY-SA 4.0 31640,1,,,9/10/2021 15:31,,1,86,"

I have a doubt about how clipping affects the training of the RL agents.

In particular, I have come across a code for training DDPG agents, the pseudo-code is the following:

for i in training iterations:
    action = clip(ddpg.prediction(state) * a + b, x, y)
    state, reward = environment(action)
    store action, state and reward
    if the number of experiences is larger than L:
        update the parameters of the agent

In this case, the actor NN of the DDPG agent has a $\tanh$ activation in the output layer.

My question is: could we add the clipping in the output layer of the actor (changing $\tanh(x)$ to $\operatorname{clip}(a\cdot \tanh(x)+b, x, y)$) in the training loop? Would the training work in that case?
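For concreteness, here is a minimal PyTorch sketch (hypothetical sizes and bounds, not from the code I am studying) of an actor with the scaling and clipping moved into the output layer. Note that torch.clamp passes zero gradient wherever the output is saturated at the bounds, which is probably the key issue for training:

import torch
import torch.nn as nn

class Actor(nn.Module):
    # Toy actor whose output is rescaled and then clipped to [low, high].
    def __init__(self, state_dim=4, action_dim=2, a=2.0, b=0.5, low=0.0, high=3.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, action_dim), nn.Tanh())
        self.a, self.b, self.low, self.high = a, b, low, high

    def forward(self, state):
        return torch.clamp(self.a * self.net(state) + self.b, self.low, self.high)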

",49718,,2444,,9/11/2021 13:02,9/11/2021 13:02,Could we add clipping in the output layer of the actor in DDPG?,,0,2,,,,CC BY-SA 4.0 31641,2,,30163,9/10/2021 15:46,,0,,"

There is no obvious reason why concatenating layers will "stabilize" the training of the network. In fact, you are adding more information, which might improve accuracy, but perhaps at certain computational disadvantages.

ResNets prevent the vanishing gradient problem by "passing information" from previous layers onto the next. Page 6 of these slides shows calculations demonstrating this in a specific degenerate edge case; but how it consistently provides such an advantage on a more general basis is, from what I understand, still unclear.

",6779,,,,,9/10/2021 15:46,,,,1,,,,CC BY-SA 4.0 31643,2,,31628,9/10/2021 17:15,,1,,"

Sounds like it worked to me.

nn.LogSoftmax returns the log of the softmax (duh). The outputs from softmax add up to 1, and form a probability distribution.

0 is the log of 1, meaning that class was predicted at a level of nearly 100%. And the other classes with large negative logs are a rounding error.
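To see this numerically, here is a small illustration (toy logits of my own choosing):

import torch
import torch.nn.functional as F

logits = torch.tensor([[12.0, -3.0, 0.5]])   # raw outputs for 3 classes
log_probs = F.log_softmax(logits, dim=1)
print(log_probs)                             # roughly [[-1e-05, -15.0, -11.5]]
print(log_probs.exp().sum())                 # ~1.0: exponentiating recovers a distribution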

",28406,,,,,9/10/2021 17:15,,,,0,,,,CC BY-SA 4.0 31644,1,,,9/10/2021 17:43,,1,214,"

I am new to Reinforcement Learning and trying to understand the concept of reaping rewards during episodic tasks. I think in games like tic-tac-toe, rewards will be in terms of a win or a loss. But does that mean we need to finish the entire game to gain the reward? I mean, the reward will make sense only if three of the tokens are in one line. Each game of tic-tac-toe will be different, as the sequence of actions followed will be different. So does the reward come into the picture only after completing the game? And what if the game is a draw?

",40304,,2444,,9/10/2021 18:24,2/7/2022 20:00,How are rewards calculated for episodic tasks like playing chess or tic-tac-toe?,,1,1,,,,CC BY-SA 4.0 31645,2,,31644,9/10/2021 18:08,,1,,"

There are two ways to formulate a reward function for these types of problems. First, there is sparse reward:

Win +1

Loss -1

All other rewards are 0.

At the other end, there are also dense rewards, which could be some signal at every timestep that tells you how well you're doing, e.g. giving a reward in chess whenever you capture one of the opponent's pieces.

Sparse rewards are harder to learn from, since a lot of the time your reward, and so your gradient, is 0. On the other hand, dense reward functions require careful crafting by hand and leave room for human error and bias.

",49455,,,,,9/10/2021 18:08,,,,0,,,,CC BY-SA 4.0 31648,2,,31595,9/10/2021 21:01,,2,,"

Yes, I believe you can. Assume that you want to upper-bound your difference by $k$. Use the following function:

$$ y_{t}^{\pi} = \frac{k}{2}*\tanh(Q_{t}^{\pi}(s,a)) $$

Here, $y_{t}^{\pi} \in [-\frac{k}{2}, \frac{k}{2}]$. Hence, the upper bound on the difference would be $k$. Check out this tanh graph.

A practical suggestion: try to soften the sharp edges of tanh by using $$\frac{k}{2} * \tanh(\frac{2}{k} * Q_{t}^{\pi}(s,a))$$

This is numerically safer, because the difference between the $\tanh$ outputs shrinks drastically when you start going to extreme values in either direction.

",49721,,,,,9/10/2021 21:01,,,,1,,,,CC BY-SA 4.0 31649,2,,31627,9/11/2021 2:47,,1,,"

Seems like what you are looking for is Parametrized RL to train an agent for a Parametrized Markov Decision Process. You can look up both terms to find courses/readings about it.

Anyway, one existing RL framework for it is the Multi-Pass Parametrized DQN (MP-DQN), which is proposed in this paper, and if you google enough you can even find the author's dissertation on MP-DQN, whose preliminaries section might help you a lot. In short, the agent uses an Actor-Critic architecture in which the Actor predicts the continuous parameters of all actions given the current state, and the Critic predicts the action-value of each action given the state concatenated with the predicted values of the continuous parameters. The final discrete action is chosen by sampling based on the action-values or by using argmax.

In P-DQN, the state concatenated with all the continuous parameters' values is passed to the Critic. However, it has been shown that, by passing all continuous parameters at once to the Critic, the unused actions' parameters will affect the gradient, and thus the Actor's parameters, which is not desired. Therefore, in MP-DQN the parameters of each action are passed separately into the Critic (unrelated action parameters are zeroed). With good design, you can implement it so that all the parameters and the state can be passed efficiently (the multi-pass trick), as shown in MP-DQN. Other alternatives for Parametrized RL are hybrid-PPO and parametrized DDPG.

Finally, based on my personal experiments, I still have difficulty controlling the scale of the predicted continuous parameter values so that they stay in a certain predefined range. There are two ways to do it: the first is using softmax, the other is gradient inverting (changing the gradients' sign depending on the current predicted parameter values). However, I still eventually produced an Actor that outputs only extreme values (either the upper or the lower bound) of the continuous parameters.

",44920,,,,,9/11/2021 2:47,,,,4,,,,CC BY-SA 4.0 31651,2,,31611,9/11/2021 8:30,,2,,"

How can I approach projects with a big state space without loosing a huge chunk of predictability (which I might fear with DQN, DDPQ or TD3)?

You can impact this by choosing a combination of function approximator and engineered features which are a good match to predicting either the value functions or policy function that an agent will need to produce.

It is hard to tell a priori how much work you would need in this regard. Given that you play as many training games as you have time for, use a deep neural network with hardware acceleration, then one approach is to simply normalise/scale your feature vector as it is, and train heavily. This approach in RL has repeatedly shown new state-of-the-art results for e.g. AlphaZero, Open AI's DoTA agent and others. This appears to work provided the compute resource that you can throw at the problem is large enough. As you mention, card-counted blackjack is a solved problem, so doing this may be within your reach on consumer hardware.

Some smart feature engineering may help though, by making the core problem easier for the agent. As this is a hobby project, what you do will depend on what you want the agent to learn. Is the purpose of your project to teach the agent how to card-count from scratch? Given that you have told the agent the count of remaining cards in the state, it does not appear so.

Next question: Do you need the agent to learn to sum up all remaining cards in order to calculate the raw probability of revealing a card of each type amongst cards so far unseen? That's currently what your main state feature is doing. If you don't need the agent to learn that, you could help by using the probability of each card being seen as a feature instead of the remaining count. This feature is likely to reduce the complexity of value and policy functions, thus it will reduce compute time, and may also improve accuracy.

With a game that can be solved analytically, you could take this up to the point of deriving the optimal policy directly and not using RL at all (i.e the input to your NN would be the correct action or the action values!). The question for your project is then this: What precisely are you hoping to demonstrate that your agent can learn? Or maybe: What do you want to gain by applying RL to this problem?

",1847,,1847,,9/11/2021 11:00,9/11/2021 11:00,,,,5,,,,CC BY-SA 4.0 31652,1,,,9/11/2021 8:59,,2,146,"

Say that I have a simple Actor-Critic architecture. (I am not familiar with TensorFlow, but) in PyTorch we need to specify the parameters when defining an optimizer (SGD, Adam, etc.), and therefore we can define two separate optimizers for the Actor and the Critic, and the backward process will be

actor_loss.backward()
actor_optimizer.step()
critic_loss.backward()
critic_optimizer.step()

or we can use a single optimizer for both the Actor's and the Critic's parameters so the backward process can be like

loss = actor_loss + critic_loss
loss.backward()
optimizer.step()
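For reference, this is roughly how the two setups could be constructed (toy stand-in networks, hypothetical learning rates):

import torch
import torch.nn as nn

actor = nn.Linear(8, 2)     # stand-in for the actual actor network
critic = nn.Linear(8, 1)    # stand-in for the actual critic network

# separate optimizers, possibly with different learning rates
actor_optimizer = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_optimizer = torch.optim.Adam(critic.parameters(), lr=1e-3)

# or a single joint optimizer over both parameter sets
optimizer = torch.optim.Adam(
    list(actor.parameters()) + list(critic.parameters()), lr=1e-3)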

I have 2 questions regarding both approaches:

  1. Are there any considerations (pros, cons) for the single joint optimizer approach vs. the separate optimizer approach?

  2. If I want to save the best Agent (Actor and Critic) periodically (based on a predefined testing environment), do I always have to update the Critic, regardless of the current Agent's performance? Because (CMIIW) the Critic is (in its most basic purpose) only for predicting the action-value or state-value thus a more trained Critic is better.

",44920,,2444,,10/17/2021 2:37,2/26/2022 14:36,Joined vs Separate optimizer for Actor-Critic,,1,1,,,,CC BY-SA 4.0 31653,1,,,9/11/2021 9:01,,2,131,"

For learning purposes, I am trying to implement the minimax algorithm for the ColorShapeLinks game, which is similar to Connect 4, except that it combines both shape and color as winning conditions, with shape having priority over color. A color is associated with only one shape, so there can only be a blue X or a red O.

I have previously applied the minimax to TicTacToe, but I think that's a lot simpler than this one.

My question is: which heuristic function could be used to evaluate the states for this game?

I'm thinking of checking each window of the board where the new piece is placed, then comparing the differences in shapes and colors and summing up the predetermined value of each (all heuristic). However, I think that's a little too simple, right?

Also, is there any chance that the AI can be made with Local Search algorithms, like Hill Climbing, Annealing, or GA? I think we can't, since the state configurations are not complete, right? If there are any additional reasons for this, please kindly guide me through, I am pretty new in this research :D

",49725,,2444,,9/11/2021 20:35,9/11/2021 20:35,Which heuristic function should I use for the ColorShapeLinks game?,,0,0,,,,CC BY-SA 4.0 31654,1,,,9/11/2021 10:39,,1,224,"

Consider the following paragraph from 2 Learning in High Dimensions in from of the paper titled Geometric Deep Learning Grids, Groups, Graphs, Geodesics, and Gauges

Supervised machine learning, in its simplest formalisation, considers a set of $N$ observations $D = \{(x_i, y_i)\}_{i=1}^{N}$ drawn i.i.d. from an underlying data distribution $P$ defined over $\mathcal{X} \times \mathcal{Y}$, where $\mathcal{X}$ and $\mathcal{Y}$ are respectively the data and the label domains. The defining feature in this setup is that $\mathcal{X}$ is a high-dimensional space: one typically assumes $\mathcal{X} = \mathbb{R}^d$ to be a Euclidean space of large dimension $d$.

Here, it is mentioned that $N$ observations are drawn i.i.d from probability distribution $P$, which is defined over $\mathcal{X} \times \mathcal{Y}$.

My doubt is: how can we draw i.i.d. samples from every probability distribution, if our distribution is not an "i.i.d. distribution"? The only $P$ I know to be i.i.d. is the following

$$p(x_i) = \dfrac{1}{|\mathcal{X} \times \mathcal{Y}|} \text{ for } x_i \in \mathcal{X} \times \mathcal{Y} \text{ and } 1 \le i \le |\mathcal{X} \times \mathcal{Y}|$$

To put it simply, a dataset with all possible $256 \times 256 \times 3$ images is i.i.d., but a dataset with all dogs is not i.i.d.

As per my knowledge, not every possible distribution is an i.i.d. distribution. Then, without knowing anything about the distribution, how can we draw i.i.d. samples?

",18758,,2444,,1/15/2022 21:10,1/16/2022 15:14,"How can we ""draw i.i.d"" from any probability distribution?",,2,1,,,,CC BY-SA 4.0 31655,2,,31654,9/11/2021 11:45,,1,,"

My doubt is that how can we draw i.i.d from every probability distribution if our distribution is not an i.i.d distribution.

Probability distributions cannot be defined as i.i.d. or not i.i.d..

The term i.i.d. is a property of a dataset. A dataset can be created that is i.i.d. with respect to a particular probability distribution. It doesn't matter what that distribution is, it just has to exist and be relevant to the purpose the ML is being put to. An underlying population or distribution of data is assumed in a lot of theories in machine learning - for instance, it defines measures of how good a function is at deriving labels from inputs.

If you define in terms of a population, then i.i.d. is a little bit like your example $p(x_i)$ if you could sample equally likely from any member of the population. But such a population rarely consists of the product of all possible traits and all possible labels, with one example of each. Some combinations of traits and labels will be rare or impossible, and you will not expect to see them in any sample from the population.

Commonly, with real-world datasets, there is an assumption of an underlying but unknown probability distribution that is being drawn from, and some efforts are made to make the resulting dataset i.i.d. by e.g. varying free choices on the collection of data (e.g. when, where to collect), shuffling the order of samples, or by removing elements that could affect the sampling with respect to the distribution of interest (e.g. correcting for response rates by social class in surveys). This effort is worthwhile because assumptions about i.i.d. are used in practice by ML models, and they can perform worse if data is not i.i.d.

To put simply, dataset with all possible $256 \times 256 \times 3$ images is i.i.d but the dataset with all dogs is not an i.i.d.

This is incorrect. Assuming your goal is to classify dog breeds from pictures, then your first dataset is meaningless. It consists mostly of white noise images equally labeled as "poodle" or "spaniel" etc, and even when it was a recognizable picture of a dog for the $1$ in $10^{100}$ or so images where that happened, it would most likely have the wrong label. If the second dataset of dogs was all-natural photos of dogs, or all pictures of dogs found on the internet, or any other well-defined population, and it was curated properly when collected to avoid bias or correlation, then it could be i.i.d.

",1847,,18758,,1/15/2022 0:32,1/15/2022 0:32,,,,0,,,,CC BY-SA 4.0 31658,1,34166,,9/11/2021 14:50,,1,110,"

Let $X_1, X_2$ be two discrete random variables. Each random variable takes two values: $1, 2$

The probability distribution $p_1$ over $X_1, X_2$ is given by

$$p_1(X_1=1, X_2 = 1) = \dfrac{1}{4}$$ $$p_1(X_1=1, X_2 = 2) = \dfrac{1}{4}$$ $$p_1(X_1=2, X_2 = 1) = \dfrac{1}{4}$$ $$p_1(X_1=2, X_2 = 2) = \dfrac{1}{4}$$

The probability distribution $p_2$ over $X_1, X_2$ is given by

$$p_2(X_1=1, X_2 = 1) = \dfrac{8}{16}$$ $$p_2(X_1=1, X_2 = 2) = \dfrac{4}{16}$$ $$p_2(X_1=2, X_2 = 1) = \dfrac{3}{16}$$ $$p_2(X_1=2, X_2 = 2) = \dfrac{1}{16}$$

Suppose $D_1, D_2$ are the datasets generated by $p_1, p_2$ respectively.

Then which dataset can I call i.i.d.? I am guessing $D_1$, since we can prove the random variables are independent and identically distributed, while for $D_2$ the i.i.d. property does not hold.


$\underline{\text{ For }D_1}$

Identical: $p_1(X_1 = x) = p_1(X_2 = x) = \dfrac{1}{2} \text{ for } x \in \{1, 2\}$
Independent: $p_1(X_1 = x_1,X_2 = x_2) = \dfrac{1}{4} = p_1(X_1 = x_1) p_1(X_2 = x_2) \text{ for } x_1, x_2 \in \{1, 2\}$


We can show that random variables $X_1, X_2$ are not iid if we consider $p_2$.

Is the i.i.d. notion I am discussing different from making a dataset i.i.d., as answered here? If not, where am I going wrong?

",18758,,18758,,1/15/2022 5:40,1/16/2022 9:14,Which of the following probability distribution is generating an iid dataset?,,1,3,,,,CC BY-SA 4.0 31661,2,,25963,9/11/2021 23:53,,1,,"

Regarding what is mentioned above, that is probably in the context of LSTM networks. I would suggest using the Keras Tuner Bayesian optimizer and making the L1 or L2 coefficient a parameter of the search space. This way you find the optimal values, and it's a great way to hypertune. Just keep in mind: the greater the range of the parameters (the search space, if I am not wrong), the more computing power you need.

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential
import keras_tuner as kt

def model1(hp):
  model = Sequential()
  # First LSTM layer; units and dropout rates are tuned hyperparameters.
  model.add(keras.layers.LSTM(units=hp.Int('units', min_value=40, max_value=800, step=20),
                              dropout=hp.Float('dropout', min_value=0.15, max_value=0.99, step=0.05),
                              recurrent_dropout=hp.Float('recurrent_dropout', min_value=0.05, max_value=0.99, step=0.05),
                              activation='relu',
                              return_sequences=True,
                              input_shape=(30, 1)))
  # Note: the bare Attention() calls in the original snippet construct a layer but never
  # add it to the model, so they have no effect and are omitted here.
  model.add(keras.layers.LSTM(units=hp.Int('units', min_value=40, max_value=800, step=20),
                              dropout=hp.Float('dropout', min_value=0.15, max_value=0.99, step=0.05),
                              activation='relu', return_sequences=True))
  model.add(keras.layers.LSTM(units=hp.Int('units', min_value=40, max_value=800, step=20), activation='relu'))
  model.add(keras.layers.Dense(1))
  # To tune L1/L2 regularization, you could also pass e.g.
  # kernel_regularizer=keras.regularizers.l2(hp.Float('l2', 1e-6, 1e-2, sampling='log'))
  # to the layers above.

  model.compile(loss='mean_squared_error',
                optimizer=tf.keras.optimizers.Adam(hp.Choice('learning_rate', values=[1e-2, 1e-3, 1e-4, 1e-7, 1e-10])))
  return model

bayesian_opt_tuner = kt.BayesianOptimization(
    model1,
    objective='val_loss',
    max_trials=200,
    executions_per_trial=1,
    project_name='timeseries_bayes_opt_POC',
    overwrite=True,)

xval = X_test
bayesian_opt_tuner.search(x=X_train, y=X_train,
             epochs=300,
             # validation_data=(xval, xval),
             validation_split=0.95,
             validation_steps=30,
             steps_per_epoch=30,
             callbacks=[tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                              patience=4,
                              verbose=1,
                              restore_best_weights=True),
                        tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss',
                                   factor=0.1,
                                   patience=2,
                                   verbose=1,
                                   min_delta=1e-5,
                                   mode='min')]
             )
This is where the magic happens - something I composed myself. If interested, holla.
",49736,,,,,9/11/2021 23:53,,,,0,,,,CC BY-SA 4.0 31664,1,,,9/12/2021 10:04,,0,81,"

Consider the following information regarding iid random variables

The acronym IID stands for "Independent and Identically Distributed".

A sequence of random variables (or random vectors) is IID if and only if the following two conditions are satisfied:

  1. the terms of the sequence are mutually independent;

  2. they all have the same probability distribution.

Definition:

Let $\{\mathcal{X}_n\}$ be a sequence of random vectors. Let $F_{\mathcal{X}_n}{(x_n)}$ be the joint distribution function of a generic term of the sequence $\{\mathcal{X}_n\}$. We say that $\{\mathcal{X}_n\}$ is an IID sequence if and only if

$$F_{\mathcal{X}_n}(x) = F_{\mathcal{X}_k}(x) \quad \forall x, n, k$$

and any subset of terms of the sequence is a set of mutually independent random vectors.

Thus,

  1. iid is a property for a sequence of random variables.
  2. A joint probability distribution function is necessary to validate whether a sequence of random variables is iid or not.

Thus, the iid property of a sequence of random variables, from 2, depends entirely on the underlying joint probability distribution function. Am I wrong anywhere?

If I am wrong, is there any other iid property of random variables that do not depend on the underlying probability distribution function?

",18758,,18758,,1/14/2022 23:58,1/15/2022 0:01,Is knowing underlying probability distribution mandatory for deciding iid property of random variables?,,2,0,,,,CC BY-SA 4.0 31667,1,,,9/12/2021 19:55,,0,27,"

It appears that it may be necessary to acquire a very large number of tasks for meta-learning, because MAML, for example, says that each task is analogous to a single training example in regular learning.

This is slightly confusing to me because it appears that, outside of automated techniques like N-way classification where you randomly sub-select classes (and training examples) to include in a given task, there is no easy way to generate tasks.

Adding tasks seems to be quite laborious, right? I mean, it sounds like you would need to get a different micro-dataset for each task. And if meta-learning needs so many tasks, then how do you satisfy that need?

",42996,,40434,,9/17/2021 1:59,9/17/2021 1:59,What are practical methods to acquire a large number of tasks for Meta-learning?,,0,2,,,,CC BY-SA 4.0 31668,1,,,9/13/2021 0:10,,1,85,"

The accuracy of my regularized model is higher for the training set than for the validation set.

The situation improves when the regularization coefficient is reduced:

What does this really imply?

From my understanding, this seems to suggest that regularization is actually resulting in the model overfitting the training set, which is the opposite of the intended outcome.

",49747,,,,,9/13/2021 0:15,What does it mean when accuracy of regularized model is higher for training set than for validation set?,,1,0,,,,CC BY-SA 4.0 31669,2,,31668,9/13/2021 0:15,,2,,"

It implies that your regularization is too strong and prevents the model from learning from the data. Also, at such a low accuracy (~10%), we can't really talk about overfitting.

",32621,,,,,9/13/2021 0:15,,,,0,,,,CC BY-SA 4.0 31671,1,,,9/13/2021 7:49,,0,93,"

I am working on classifying images into "Left", "Right", "Center", and "Back". Training and validation images look like this:

The images are "Left", "Right", and "Center". I am following the PyTorch transfer learning tutorial with the ResNet50 architecture and have not changed anything.

The transformations I am using is as follows.

import albumentations as A
from albumentations.pytorch import ToTensorV2

data_transforms = {
    "train": A.Compose([
        A.Resize(256, 256),
        # A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5),
        # A.RandomCrop(height=128, width=128),
        A.RGBShift(r_shift_limit=15, g_shift_limit=15, b_shift_limit=15, p=0.5),
        A.RandomBrightnessContrast(p=0.5),
        A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
        ToTensorV2(),
    ]),
    "val": A.Compose([
        A.Resize(256, 256),
        # A.CenterCrop(height=128, width=128),
        A.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
        ToTensorV2(),
    ]),
}

Initially, I had back images too, but they were getting misclassified as center a lot, with a validation accuracy of 65%.

Without back images, I got 74% accuracy with the labels left, right, and center. After modifying it to classify only center vs. not-center (still with only the three types of images, "left", "right" and "center"), I achieved 91% validation accuracy.

I am looking for ways to increase accuracy for classifying images in left, right, center.

",49756,,16565,,9/13/2021 23:36,9/13/2021 23:36,"How to increase accuracy of image orientation classification (Left, Right, Center)?",,0,2,,,,CC BY-SA 4.0 31672,1,,,9/13/2021 11:46,,1,63,"

Let's have a set of n devices firing signals. The devices fire in the same cycles, but each device can fire in a different phase of the cycle. Moreover, the exact firing point can fluctuate, for example 20% off the phase to both sides.

To me, these repeated firings can be seen as some kind of time-based pattern.

The main goal is to detect if some device has changed its phase, so that the timings in the pattern change.

I generated a dataset with a row for each second. On each row there is the actual time and another n-1 columns representing the difference between the firing time of this device and the first device.

Example: let's have 4 devices, D0 to D3. t1 means the difference between the firing times of D1 and D0.

time     t1 t2 t3
13:00:00  5  2  7

That means that D1 fired 5s after D0, D2 fired 5+2s after D0 and D3 fired 5+2+7s after D0.

Also, there can be negative numbers because of the allowed fluctuation - D2 could have fired earlier than D0.

To an amateur like me, it seems that these numbers create patterns that some kind of NN should be able to learn while filtering out the noise.

My questions:

  • which kind of network should I use? I tried a simple feed-forward network for classification, but since I am generating the dataset, I have no negative data, and so I am unable to categorize it.
  • is this kind of dataset a good approach, or should I use other properties?
",49760,,49760,,9/17/2021 10:59,9/17/2021 10:59,How to build neural network that detects changed signal firing pattern and is trained on positive patterns only?,,0,8,,,,CC BY-SA 4.0 31673,1,31676,,9/13/2021 12:52,,7,916,"

Is the LSTM-Architecture a subcategory of RNNs? Or are they totally different?

The literature doesn't seem to be consistent on this. This figure appears to present the models as alternatives, but I thought of them otherwise (LSTM being a subcategory of RNN).

LSTM as a subcategory of RNN is mentioned in the Wikipedia article on LSTMs:

Long short-term memory (LSTM) is an artificial recurrent neural network (RNN) architecture...

",48548,,48548,,9/17/2021 7:57,9/17/2021 7:57,Is LSTM a subcategory of RNN?,,1,0,,,,CC BY-SA 4.0 31675,1,,,9/13/2021 15:19,,1,1334,"

I want to get the model which works best. What should I go for while training the model: ModelCheckpoint, EarlyStopping, or both?

",44529,,2444,,9/13/2021 16:25,12/12/2021 14:08,"What is better to use: early stopping, model checkpoint or both?",,1,0,,,,CC BY-SA 4.0 31676,2,,31673,9/13/2021 15:57,,10,,"

The Wikipedia article is more technically correct, in that the term RNN is formally taken to mean "a neural network with recurrent connections", and that includes many architectures that match this description, including LSTMs.

However, it is also common to see "RNN" used as a short-hand for a kind of "Vanilla RNN" or "basic RNN", where one or more layers have weights connecting the layer to itself (its own activations from $t-1$ are concatenated to the external inputs at $t$), and there are no other gates or special combinations, just those recurrent connections.

Oddly, this basic layer-based RNN architecture is not listed among the options on the Wikipedia page on RNNs - probably the closest are Elman networks and Jordan networks, which are ways to implement the recurrent connection. It is a valid architecture choice, and can be effective. The LSTM and GRU architectures improve on it in terms of handling longer sequences and preserving important signals over them when training (e.g. matching a starting and ending quote in text processing).

",1847,,1847,,9/13/2021 16:06,9/13/2021 16:06,,,,2,,,,CC BY-SA 4.0 31677,2,,31675,9/13/2021 16:12,,3,,"
  • Early stopping: stop the training when a condition is met
  • Checkpoint : frequently save the model

The purpose of EarlyStopping is to avoid overfitting by stopping the training before it happens, using a defined condition. If you use it, and then you save the model when the training is stopped*, you will get a model that is assumed to be good enough and not overfitted.

The purpose of the class ModelCheckpoint is to save models several times while training. This can be useful to find at which epoch the model gets the best performance. So, if you use it, you will get several models that are saved at different epochs (or, more generally, "checkpoints").

Even after using both methods, you will get some models, but they have different purposes. Neither is better than the other. I almost always use both methods at the same time. You can use early stopping to stop the training and save a lot of models while training using ModelCheckpoint. In most of my cases, the best model is around the epoch at which early stopping triggers.

*note: the model saving process is not done by EarlyStopping
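For illustration, a minimal sketch (toy model and data, assuming TensorFlow/Keras) of how both callbacks can be combined:

from tensorflow import keras
import numpy as np

x, y = np.random.rand(200, 10), np.random.rand(200, 1)
model = keras.Sequential([keras.layers.Dense(16, activation="relu"),
                          keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

callbacks = [
    # stop when val_loss has not improved for 5 epochs and keep the best weights
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                  restore_best_weights=True),
    # save a checkpoint every time val_loss improves
    keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_loss",
                                    save_best_only=True),
]
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=callbacks)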

",16565,,16565,,9/13/2021 16:43,9/13/2021 16:43,,,,5,,,,CC BY-SA 4.0 31678,2,,31664,9/13/2021 17:52,,2,,"

The point is that, even if you know the distribution, sometimes you can't prove whether the sampled data is i.i.d. or not! (more details in https://stats.stackexchange.com/q/130381/144441). Hence, without knowing the distribution, you have even less information, and of course, you can't prove any identically-distributed property of the sampled data.

Note that i.i.d. is mostly mentioned as an assumption that is held in the corresponding domain, and you do not need to prove it as a property.

",4446,,18758,,1/15/2022 0:01,1/15/2022 0:01,,,,0,,,,CC BY-SA 4.0 31679,1,,,9/13/2021 18:33,,0,42,"

I'm currently working through Week 5 of Andrew Ng's Machine Learning course on Coursera, which goes through the backprop algorithm for basic neural networks. Whilst trying to derive the formulae he gave in the lectures, I noticed that the formula for $\delta^L$, the "error" of the last activation layer, is slightly different from that derived in http://neuralnetworksanddeeplearning.com/chap2.html.

In Andrew's version, there seems to be no inclusion of the partial derivative $da/dz$, i.e. $\sigma'(z)$; only the $dC/da$ part.

However, Michael Nielsen does include that term:

Is this difference significant, and why does it arise? Is it because the derivation Nielsen goes through defines the cost using the mean squared error, whereas Andrew Ng defines the cost using the $-y\log(h(x))\dots$ one? Also, will Nielsen's equations score full marks on Ng's assignment?
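For reference, here is the standard calculation (assuming a sigmoid output layer) that I believe explains the difference. Starting from $\delta^L = \frac{\partial C}{\partial a^L}\,\sigma'(z^L)$:

$$\text{MSE: } C = \tfrac{1}{2}(a - y)^2 \;\Rightarrow\; \delta^L = (a - y)\,\sigma'(z^L)$$
$$\text{Cross-entropy: } C = -\big[y\ln a + (1-y)\ln(1-a)\big] \;\Rightarrow\; \frac{\partial C}{\partial a} = \frac{a - y}{a(1-a)}, \quad \sigma'(z^L) = a(1-a) \;\Rightarrow\; \delta^L = a - y$$

So with the cross-entropy/sigmoid (or softmax) combination the $\sigma'(z^L)$ factor cancels, which would explain why it does not appear explicitly in Andrew Ng's formula.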

Thank you for reading.

",49766,,,,,7/25/2022 2:06,Discrepancy of backpropagation formula between Andrew Ngs ML Course and those derived by neuralnetworksanddeeplearning.com,,1,0,,,,CC BY-SA 4.0 31681,2,,31664,9/13/2021 23:19,,-1,,"

From your conclusion, 1. is correct. But, more specifically, it characterizes the nature of an underlying data-generating process. A table of results of dice throws is likely i.i.d., but, more significantly, that is because the dice rolls themselves are i.i.d.

Not really for 2. since you would be simply calculating for $P(A)P(B) = P(A,B)$ and $P(A) = P(B), \forall A, B$ in the discrete case. Since iid is defined as an iff (if and only if), this characterization is also sufficient.

Note that the iid assumption allows us to characterize the joint distribution in a certain way, which then allows us to compute it. Otherwise, the model might grow to be very complex.

",6779,,18758,,1/15/2022 0:00,1/15/2022 0:00,,,,4,,,,CC BY-SA 4.0 31682,1,,,9/14/2021 1:15,,0,83,"

The acronym "iid" stands for "independent and identically distributed". It is a property of a sequence of random variables. You can read here for more details. This question is just about the usage of the word "iid" in contemporary machine learning and is not about the feasibility of checking iid based on either associated joint distribution or dataset.

In the formal and strict sense, the word "iid" should be used only as a property for a sequence of random variables based on the underlying joint probability distribution function. But, I noticed that there is another (maybe less-strict) usage for the word 'iid' based on the context.

Consider the following statements compiled from different answers to my questions 1,2

From this answer

The term i.i.d. is a property of a dataset. A dataset can be created that is i.i.d. with respect to a particular probability distribution. It doesn't matter what that distribution is, it just has to exist, and be relevant to the purpose the ML is being put to.

From this answer

The point is even you know the distribution, sometimes you can't prove that the sampled data is i.i.d. or not!....

From this answer

....A table of results of dice throws is likely iid...... (there are some issues with this answer, but the bolded excerpt is true)

So, the usage of the word iid, in this sense, is somewhat different. Although I think iid is a property of a sequence of random variables in this sense also, it is okay to use the word 'iid' for a dataset (a collection of samples), since the dataset represents some underlying probability distribution.

Thus, the two usages I am aware of up to now are

  1. iid for a sequence of random variables based on joint distribution.

  2. iid for a sequence of random variables based on the collection of samples.

Is my understanding of the two usages of the word "iid" correct? and are there any other usages for the word "iid"?

",18758,,18758,,1/15/2022 0:59,1/15/2022 0:59,"What are the different possible usages of the word ""i.i.d"" in machine learning?",,0,6,,,,CC BY-SA 4.0 31683,1,,,9/14/2021 4:09,,0,79,"

How do I prepare the info of 3D models to use with NN? For example, I have thousands of models with boxes similar to the ones in the image below. I can extract the vertices and their normals that make up the faces of these boxes. Similarly, I would like to prepare the info of the red-shaded surfaces, again I have their vertices and their normals. For future studies, I will have more complex shapes such as cylinders, pyramids,...etc. What would be the best way to represent these complex shapes for NN?

Update: These boxes don't stay in the same position, see the second image I added. I will have different geometric models and different red-shaded areas on the surfaces of these objects. The NN output would be a number for each surface of these boxes/objects. The number represents the surface temperature. The input would be the following:

1- Some climate information such as air temperature, humidity, etc.
3- The location and size of the buildings that are represented as boxes (or maybe other shapes).
4- The size and the location of the red-shaded areas (the red-shaded areas represent the shadow cast by buildings).
5- The material of each surface (concrete, brick, etc.).

",47257,,47257,,10/14/2021 21:26,10/14/2021 21:26,How do I prepare this 3D data for NN?,,1,7,,,,CC BY-SA 4.0 31684,1,,,9/14/2021 6:35,,1,86,"

I have a scenario in which we should leverage previously asked questions (not question pairs; a single question in a column) to locate similar questions within those questions.

How can I fine-tune my model to handle out-of-vocabulary terms, given that my data consists of domain-specific questions (3300 questions)?

Right now, I'm using Hugging Face sentence transformers, which are already pre-trained on huge amounts of data.

For example, BERT knows that gold is a metal, but, in our domain corpus, it's a platform. We have some terminology which was not exposed publicly; how can I fine-tune the model to get related sentences (handling OOV)?

",49776,,2444,,9/15/2021 12:53,9/15/2021 12:53,How to fine-tune a model which was pre-trained on a corpus that contains words with different meanings than the meanings of those words on my corpus?,,0,4,,,,CC BY-SA 4.0 31686,1,,,9/14/2021 12:10,,0,70,"

I am trying to solve a problem where I need to map multiple variations of a company name to a single name. For example: say I have a company named Super Idea Corporation Limited.

I need to resolve the following to Super Idea Corporation Limited

  • SICL
  • Super Idea Corp Ltd
  • SIC Ltd
  • SIC Limited

Is there a non regex way of doing this? The reason I am averse to using regex is that there are a lot of business names that can be represented in many different ways. I want something that is more flexible and adaptive.

",31449,,,,,9/15/2021 10:36,Identify whether two companies are the same,,1,2,,,,CC BY-SA 4.0 31687,2,,31683,9/14/2021 14:41,,2,,"

I think that the answer depends on the application, but a possible choice would be to store it as a mesh - a list of vertices $V$ and edges $E$. Instead of edges, one can work with polygons and define the connectivity $F$ - triplets of vertices $(v_i, v_j, v_k)$.
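For illustration, a minimal sketch (my own toy example) of how a single unit box could be stored in this vertex/connectivity format, with two triangles per face:

import numpy as np

V = np.array([[0,0,0],[1,0,0],[1,1,0],[0,1,0],
              [0,0,1],[1,0,1],[1,1,1],[0,1,1]], dtype=float)   # 8 corner vertices
F = np.array([[0,1,2],[0,2,3],    # bottom face
              [4,5,6],[4,6,7],    # top face
              [0,1,5],[0,5,4],
              [1,2,6],[1,6,5],
              [2,3,7],[2,7,6],
              [3,0,4],[3,4,7]])   # remaining four side faces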

There is a nice paper on MeshCNN that can handle various geometric objects.

For the special case of boxes, maybe there is a more educated approach, but since you would like to handle more complex shapes later, I would suggest working with this architecture from the start.

",38846,,,,,9/14/2021 14:41,,,,1,,,,CC BY-SA 4.0 31688,1,,,9/14/2021 15:42,,2,82,"

My question is very general and does not originate from a specific problem. Let's assume that, through experience, we have learned that some statistical property of a set of data is important for predicting some behavior of a system. For example, for time-series data d1, d2, d3, ..., dn, we heuristically know that the average of the last n steps, denoted by avg(d,n), and the standard deviation of the last m steps, denoted by std(d,m), are significant for prediction. Now my questions are:

  1. Should a machine learning system, let's say LSTM, or reinforcement learning agent, be fed the raw data or data with other statistical properties? I am asking this because, if the statistical derivatives are useful in training then there is no limit on how many statistical properties we can define and feed to the training process.

  2. Does a machine learning model, again let's say an LSTM, automatically learn the underlying statistics from just the pure raw data?

  3. How do we deal with data of different scales and dimensions? For example, the simple average is on the same scale as the raw data, but the standard deviation is of a different scale and dimension, and so on and so forth.

I appreciate your comments.

",33488,,33488,,9/14/2021 16:08,9/14/2021 16:08,Machine learning with raw data alone / or raw data with its statistics,,0,2,,,,CC BY-SA 4.0 31689,1,31696,,9/14/2021 19:58,,1,179,"

Here's a quote from the T5 paper (T5 stands for "Text-to-Text Transfer Transformer") titled Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer by Colin Raffel et al.:

To summarize, our model is roughly equivalent to the original Transformer proposed by Vaswani et al. (2017) with the exception of removing the Layer Norm bias, placing the layer normalization outside the residual path, and using a different position embedding scheme. Since these architectural changes are orthogonal to the experimental factors we consider in our empirical survey of transfer learning, we leave the ablation of their impact for future work.

What exactly does 'orthogonal' mean in this context? Also, is it just me or have I seen the word used in a similar way before, but can't remember where?

",47131,,2444,,9/15/2021 12:58,9/15/2021 12:58,"Why do the authors of the T5 paper say that the ""architectural changes are orthogonal to the experimental factors""?",,2,0,,,,CC BY-SA 4.0 31690,1,,,9/14/2021 21:40,,0,64,"

I found a paper about using an Unscented Kalman Filter(UKF) for traning a neural network.

The UKF filter is modified so it works for parameter estimation. Assume that we have a neural network model $\hat d_k = G(x_k, W_k)$ where $G$ is the neural network, $x_k$ is the input vector, $W_k$ is the parameter gain matrix and $\hat d_k$ is the output vector.

This paper counts servral methods to train a neural network.

  • Gradient descent
  • Quasi-Newton
  • UKF parameter estimation

So my questions for you are:

  1. What's the benefit for using a kalman filter for training a neural network compared to other optimization algorithms?

  2. I remember that I have been using gradient descent methods and they work, but I have always used regularization. Is that due to the noise in the data?

  3. If I'm using a UKF-filter as parameter estimation, I can avoid regularization then?

  4. Is this UKF parameter estimation algorithm only made for 1 layer neural network, or can it be used for deep neural networks as well?

",36211,,156,,9/15/2021 13:53,9/15/2021 13:53,What's the benefit for using a Kalman filter for training a neural network compared to other optimization algorithms?,,0,2,,,,CC BY-SA 4.0 31691,2,,31689,9/14/2021 22:06,,2,,"

Looking at the paper, it seems to me that they are not using orthogonal in a literal, mathematics (or geometric) sense. Instead, I read that as two things (especially since the word "ablation" appears later in the sentence):

  • They are attempting to use lots of fancy words
  • They are simply indicating that these changes are separate from and have no impact on (at least they claim this to be so) the "experimental factors [they] consider in [their] empirical survey..."
",30426,,30426,,9/15/2021 11:57,9/15/2021 11:57,,,,0,,,,CC BY-SA 4.0 31692,1,,,9/15/2021 2:07,,1,474,"

Across the literature, the terms "high-level" and "low-level" are generally used as adjectives for the features generated by a convolutional neural network as intermediate representations.

Should I understand the level to be either high or low based on

  1. the position of feature maps considering the architecture of the convolutional neural network i.e., the size of the feature maps generated at different layers of the convolutional neural network

(For example, the feature maps after bottleneck layers are "high-level" and those after the wide layers are "low-level".)

or

  2. the content of the feature maps based on the task

(For example, the feature maps that have learned the eyebrows of a cat are "high-level" and those just containing pixel intensities are "low-level".)

or

  3. any other property?
",18758,,18758,,1/17/2022 23:55,1/17/2022 23:55,"What does it mean by ""low-level"" and ""high-level"" in features generated by CNN?",,0,5,,,,CC BY-SA 4.0 31693,1,,,9/15/2021 8:41,,2,155,"

I am looking at the paper Conservative Q-Learning for Offline Reinforcement Learning, but I'm not sure how they proved theorem 3.1.

Here is a screenshot of theorem 3.1.

In the proof of theorem 3.1

they say

By setting the derivative of Equation 1 to 0, we obtain the following expression

...

$$\forall \mathbf{s}, \mathbf{a} \in \mathcal{D}, k, \quad \hat{Q}^{k+1}(\mathbf{s}, \mathbf{a})=\hat{\mathcal{B}}^{\pi} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})-\alpha \frac{\mu(\mathbf{a} \mid \mathbf{s})}{\hat{\pi}_{\beta}(\mathbf{a} \mid \mathbf{s})} \tag{11}\label{11}$$

Here's equation 1 from the paper.

$$\hat{Q}^{k+1} \leftarrow \arg \min _{Q} \alpha \mathbb{E}_{\mathbf{s} \sim \mathcal{D}, \mathbf{a} \sim \mu(\mathbf{a} \mid \mathbf{s})}[Q(\mathbf{s}, \mathbf{a})]+\frac{1}{2} \mathbb{E}_{\mathbf{s}, \mathbf{a} \sim \mathcal{D}}\left[\left(Q(\mathbf{s}, \mathbf{a})-\hat{\mathcal{B}}^{\pi} \hat{Q}^{k}(\mathbf{s}, \mathbf{a})\right)^{2}\right] \tag{1}\label{1}$$

My question is: what exactly is the derivative of equation (1)? And how does that result in equation (11)?

The $\hat{\mathcal{B}}^\pi$ is the empirical Bellman operator and is defined as $\hat{\mathcal{B}}^\pi \hat{Q}^k (s,a) = r + \gamma \sum_{s'} \hat{T}(s' \mid s,a) \mathbb{E}_{a'\sim \pi(a' \mid s')}\hat{Q}^k(s', a')$. Since, in offline reinforcement learning, the dataset $\mathcal{D}$ typically does not contain all possible transitions $(s, a, s')$, the policy evaluation step actually uses an empirical Bellman operator that only backs up a single sample.

",12932,,12932,,9/16/2021 5:26,12/17/2022 16:03,"What is the derivative of equation 1 in the paper ""Conservative Q-Learning for Offline Reinforcement Learning""?",,1,4,,,,CC BY-SA 4.0 31694,1,,,9/15/2021 8:43,,1,12,"

I have a rather interesting problem here; I work in the field of image classification for quality assurance. For this I have a dataset of about 1 million images, which I have used to train different defect classes. Now one of these defect types has additional properties (new features of an image class). I would like to teach these new features to the previously trained network without re-training on the whole previous dataset.

In short: new features of an image class should be taught without affecting the performance of the network on the previous training set too much. Is this possible, and if so, what are some strategies for doing this?

Thanks in advance!

",26857,,,,,9/15/2021 8:43,Continue teaching pre-trained network without forgetting previous data set,,0,0,,,,CC BY-SA 4.0 31696,2,,31689,9/15/2021 10:14,,3,,"

"Orthogonal" is often used to mean "independent", as in "independent variable which does not correlate with the other variables". I believe this terminology originates from principal component analysis, where uncorrelated variation would be along orthogonal axes.

Or, in the words of the Wikipedia article on orthogonality applied to computer science:

Orthogonality is a system design property which guarantees that modifying the technical effect produced by a component of a system neither creates nor propagates side effects to other components of the system.

So in this excerpt they state that their changes do not affect anything else (because they are independent/uncorrelated), and can hence be discussed elsewhere.

",2193,,,,,9/15/2021 10:14,,,,0,,,,CC BY-SA 4.0 31697,2,,31686,9/15/2021 10:36,,1,,"

Have a look at Named Entity Recognition (NER); these algorithms are mainly concerned with recognising that there is an entity, but often also include normalising the name to a canonical form for information retrieval -- this is what you would need.

In a previous job I actually implemented this, using a fuzzy match with variable word order. You would still need to map Corp and Corporation onto each other as interchangeable, and deal with acronyms, but that should be tractable.
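
For illustration, here is a minimal sketch of the kind of fuzzy, word-order-insensitive match I mean (the abbreviation table and company names are made-up examples, not part of any specific library):

import difflib

# Hypothetical expansion table for common abbreviations (extend as needed)
ABBREVIATIONS = {"corp": "corporation", "inc": "incorporated", "ltd": "limited"}

def normalise(name):
    # Lowercase, split into words, expand known abbreviations, sort to ignore word order
    words = [ABBREVIATIONS.get(w.strip(".,"), w.strip(".,")) for w in name.lower().split()]
    return " ".join(sorted(words))

def similarity(name_a, name_b):
    # Fuzzy ratio in [0, 1] between the normalised, order-independent forms
    return difflib.SequenceMatcher(None, normalise(name_a), normalise(name_b)).ratio()

print(similarity("Acme Corp.", "Corporation Acme"))  # 1.0 -- identical after normalisation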

",2193,,,,,9/15/2021 10:36,,,,0,,,,CC BY-SA 4.0 31701,1,,,9/15/2021 22:40,,0,88,"

Assume that we have 4 layers in a neural network.

$$z_1 = L_1(x, W_1)$$ $$z_2 = L_2(z_1, W_2)$$ $$z_3 = L_3(z_2, W_3)$$ $$y = L_4(z_3, W_4)$$

where $x$ is the input vector, $y$ is the output vector and $W_i, i = 1, \dots, 4$ are the weight matrices.

Assume that I could estimate parameters in a function.

$$b = f(a, w)$$

Where the $b$ is a real value and $a$ is the input vector and $w$ is the weight vector parameter. The function $f$ could be like this.

$$b = \text{activation}(a_1*w_1 + a_2*w_2 + a_3*w_3 + \dots + a_n*w_n)$$

Here we can interpret $b$ as the neuron output. Estimating $w_n$ is very easy if we know $b$ and $a_n$. This can be done using recursive least squares or a Kalman filter.
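
As a rough sketch of what I mean (my own toy example, using a purely linear neuron, i.e. ignoring the activation for simplicity), recursive least squares can estimate the weight vector of a single neuron from a stream of input/output pairs:

import numpy as np

def rls_update(w, P, a, b, lam=0.99):
    # One recursive-least-squares step for a linear neuron b = a . w
    # w: current weight estimate, P: inverse correlation matrix, a: input vector, b: observed output
    a = a.reshape(-1, 1)
    k = P @ a / (lam + a.T @ P @ a)   # gain vector
    e = b - float(a.T @ w)            # prediction error
    w = w + k * e                     # weight update
    P = (P - k @ a.T @ P) / lam       # covariance update
    return w, P

n = 4
true_w = np.array([[0.5], [-1.0], [2.0], [0.3]])
w, P = np.zeros((n, 1)), np.eye(n) * 1000.0
for _ in range(200):
    a = np.random.randn(n)
    b = float(a @ true_w)             # noiseless target, just for this toy example
    w, P = rls_update(w, P, a, b)
print(w.ravel())                      # should be close to true_w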

Question:

If every neuron in a neural network is a function that has inputs and weights, can I use parameter estimation for estimating all weights in a neural network if I did parameter estimation for every neuron inside a neural network?

The reason why I'm asking:

I found a paper where they are using a Unscented Kalman Filter for parameter estimation.

Function $D_{k|k-1} = G[x_k, W_{k|k-1}]$ can be interpreted as a neuron function where $W_{k|k-1}$ is a matrix with different candidate weights and $D_{k|k-1}$ contains the corresponding outputs from that neuron. No, it's not a "multivariable output" neuron. It's just a way to estimate the best weights by trying different weight candidates.

The error of the neuron output is $d_k - \hat d_k$ in equation (41). So when the error is small, that means the output of the neuron is OK, and that the real weights $\hat w_k$ have been found.

",36211,,2444,,9/19/2021 18:47,9/19/2021 18:47,Using parameter estimation for training a neural network,,0,8,,,,CC BY-SA 4.0 31702,1,,,9/16/2021 1:40,,0,1035,"

It is known that the primary purpose of activation functions, used in neural networks, is to introduce non-linearity.

Then how can the linear activation function, especially the identity function, be treated as an activation function?

Are there any special applications/advantages in using an identity function as I cannot see any such use theoretically?

",18758,,18758,,12/17/2021 14:55,12/17/2021 14:55,Why identity function is generally treated as an activation function?,,1,0,,,,CC BY-SA 4.0 31703,1,31757,,9/16/2021 2:01,,0,146,"

We generally encounter the following statement several times

The input vector is first fed into a fully connected layer......

Since linear activation functions, such as the identity function, can be considered activation functions, a fully connected layer can be regarded as just an affine transformation if it uses a linear activation function.

So, in theory, a fully connected layer can refer to the following

  1. Just an affine transformation
  2. Affine transformation followed by a nonlinear activation function

Do authors generally choose to use "fully connected layer" for case 2 only or for both cases 1 and 2?

",18758,,18758,,9/16/2021 12:48,9/20/2021 0:20,Do authors generally use fully connected layer instead of affine transformation?,,1,3,,,,CC BY-SA 4.0 31704,2,,4136,9/16/2021 2:33,,1,,"

No. When the input-output relationship of a system is fully known, i.e. the state space is completely defined using, say, physics (PDEs), then overfitting is desirable, as there is no need to generalize. An ML model has low inference time compared to solving the PDEs directly, and a lookup table could take multiple terabytes.

",8895,,,,,9/16/2021 2:33,,,,6,,,,CC BY-SA 4.0 31705,2,,20401,9/16/2021 3:19,,1,,"

An MLP is just a fully-connected feedforward neural net. In PointNet, a shared MLP means that you are applying the exact same MLP to each point in the point cloud.

Think of a CNN's convolutional layer. There you apply the exact same filter at all locations, and hence the filter weights are shared or tied. If they were not shared, you'd have potentially different filters (MLPs) at each pixel (point), updating independently.

As an example, let $f_\theta$ be an MLP with parameters $\theta$. Say we have a 3D point cloud $[\vec{x}_1,\ldots,\vec{x}_n]\subseteq \mathbb{R}^3$. If we apply $f_\theta$ as a shared MLP in the way PointNet describes, the result would be $[f_\theta(\vec{x}_1),\ldots,f_\theta(\vec{x}_n)]$.
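
For illustration, a minimal sketch (not the actual PointNet code) of a shared MLP in PyTorch; the same weights are applied to every point:

import torch
import torch.nn as nn

# A small MLP that maps a single 3D point to a 64-dimensional feature
shared_mlp = nn.Sequential(
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 64), nn.ReLU(),
)

points = torch.randn(1024, 3)   # a point cloud with 1024 points
features = shared_mlp(points)   # the same weights are applied to every point
print(features.shape)           # torch.Size([1024, 64])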

",19881,,,,,9/16/2021 3:19,,,,0,,,,CC BY-SA 4.0 31706,2,,4048,9/16/2021 10:27,,1,,"

I'm not sure if or how I would apply neural networks to make selections with cards, which have a complex synergy. How could I design and train a neural network for this game, such that it can take into account all the variables? Is there a common approach?

Advice 0.

I would highly recommend designing it the way most neural networks are designed. The most common pattern is to have n layers, each of possibly different size: one input layer, one output layer, and a few hidden layers between them.

Advice 1.

Differentiate between the neural network itself (the brain), its environment (game conditions and rules), its sensors (inputs) and its "hands" (outputs). Why might you want to call them "hands"? If you simulate a player in a card game, you basically want them to use their hands to play. In other situations it might be legs, or wings, or even a gas pedal.

How to design the inputs:

Just create a neural network with a common pattern, figure out what variables a real player would analyze before throwing a card, then try to translate those variables into signals. This step might actually be a bit tricky, and it's also what happens in biological neural networks: the electrical signals in our brains are really weak, even though they might carry bits that make up big numbers. What I mean by making weak signals out of numbers is dividing varying ranges of numbers by their maximal values. For example, instead of putting in 5, which would represent a card with rank 6, divide 5 by 13 (or whatever value represents the highest rank). Basically, if your input range for one neuron is 0 to 13, divide it by 13 to get a value (or signal) in the codomain [0; 1], which is a much more suitable input for the sigmoid function than values in the codomain [0; 13]. To visualize this, take a look at the graph of the sigmoid function.

The difference between f(10) and f(8) is ~0.00029, whereas the difference between f(10/13) and f(8/13) is ~0.034, which obviously has much more impact on the output. So, make sure you translate all the values into the range where your function is most "sensitive", in this case [-4; 4].
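
You can verify these numbers yourself with a couple of lines:

import math

def f(x):  # the sigmoid function
    return 1.0 / (1.0 + math.exp(-x))

print(f(10) - f(8))            # ~0.00029 -- almost no difference
print(f(10 / 13) - f(8 / 13))  # ~0.034   -- a much larger difference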

Advice 2.

Every time you need a decision from the AI, create a pool of possible decisions (e.g. the pool of possible cards the player can throw right now) that you can index by an integer. Then you might want to multiply the output value in its codomain [0; 1] by the number of possible decisions minus 1 to be able to interpret it as an index. Or you might have a number of output neurons that corresponds to the maximal number of possible moves and interpret the index of the neuron with the highest value as the index into the array of possible decisions.

How to train it:

If you create an AI for a game, I would recommend taking a look at genetic algorithms. The basic idea is to create a population of players (hundreds or thousands) with randomly generated weights and biases in their NNs, restrict their possibilities by the rules of the game, and let them play. Then perform selection based on their fitness function (which in this case might just be the score of each player), followed by crossover and mutation of their genes (of single bits or even numbers) to create the next generation, and so on. Repeat this process until you come up with a satisfying solution. I recommend genetic algorithms for this case because it might be quite hard to find training data for traditional methods of training NNs. And if you're able to generate training data yourself, then you might also be able to program all the behaviour manually, in which case you don't even need NNs. If you're interested in training NNs using genetic algorithms, you should read some external literature, since it's a pretty big topic. You can also check out my github repo, where I train an AI to play snake using GAs.
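
To make the loop more concrete, here is a very rough, self-contained skeleton of such a genetic algorithm (my own sketch; the genome stands in for the flattened weights and biases of a network, and the fitness function is a toy placeholder that in the real setting would be the player's game score):

import random

POPULATION_SIZE = 200
GENOME_LENGTH = 20     # stand-in for the flattened weights and biases of a network
GENERATIONS = 50

def fitness(genome):
    # Toy placeholder: in the real setting this would be the score from playing the game
    return -sum((g - 0.5) ** 2 for g in genome)

def crossover(mother, father):
    cut = random.randrange(GENOME_LENGTH)
    return mother[:cut] + father[cut:]

def mutate(genome, rate=0.05):
    return [g + random.gauss(0, 0.1) if random.random() < rate else g for g in genome]

population = [[random.uniform(-1, 1) for _ in range(GENOME_LENGTH)] for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)     # selection by fitness
    parents = population[:POPULATION_SIZE // 5]    # keep the best 20%
    children = []
    while len(children) < POPULATION_SIZE:
        mother, father = random.sample(parents, 2)
        children.append(mutate(crossover(mother, father)))
    population = children

print(max(fitness(g) for g in population))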

",49815,,49815,,9/19/2021 16:36,9/19/2021 16:36,,,,0,,,,CC BY-SA 4.0 31707,2,,30200,9/16/2021 10:29,,1,,"

While all the properties you list are true, the essential parts are the Differentiability and the fact that you build a nice probability distribution from it.

The unnormalized scores the neural network puts out before we apply softmax to them may very well contain negative values. Applying the exponential function is a great way to build a probability distribution from a given vector, disregarding the sign of the values.
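
For instance, a small sketch (my own example) showing how the exponential turns scores with negative entries into a valid probability distribution:

import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # shift for numerical stability; does not change the result
    return e / e.sum()

scores = np.array([2.0, -1.0, 0.3])   # raw network outputs, can be negative
print(softmax(scores))                # all entries positive, summing to 1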

",49455,,,,,9/16/2021 10:29,,,,0,,,,CC BY-SA 4.0 31708,2,,4436,9/16/2021 11:21,,1,,"

My understanding is that the ASCII encoding would not get the best performance or results from the RNN because the ASCII codes for each character are not meaningful; they are arbitrary. If the number of each ASCII code represented something meaningful about the letter, it would work better. But they don't.

The same principles apply as when deciding how to encode any categorical data. If your categories are ordinal (e.g. 'First', 'Second' .. or 'Age group 18-24', 'Age group 25-35' .. or even 'Social Class E', 'Social Class D' ..), then assigning a single numerical value to each class might work well. But for categorical data where there is no meaningful order, one-hot encoding will work better.

This is an example of the principle of giving neural networks the most expressive data that we can. In the case of non-ordinal, arbitrary categories, one-hot is more expressive to the next layer of neurons (will stimulate them more distinctly) than using a numerical encoding.
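
A small sketch of the difference (my own example): with one-hot encoding, each character becomes a vector with a single 1 instead of an arbitrary integer code:

import numpy as np

alphabet = "abcdefghijklmnopqrstuvwxyz"
char_to_index = {c: i for i, c in enumerate(alphabet)}

def one_hot(ch):
    v = np.zeros(len(alphabet))
    v[char_to_index[ch]] = 1.0   # only the position of this character is "on"
    return v

print(one_hot("c"))   # [0. 0. 1. 0. ... 0.] -- no spurious ordering between characters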

",49816,,,,,9/16/2021 11:21,,,,0,,,,CC BY-SA 4.0 31711,1,,,9/16/2021 21:52,,0,109,"

I have recently just completed a course on deep learning and I feel like an intermediate, but I still don't know how to structure this problem.

I'm looking to create a NN to play the card game Splinterlands.

I have the history of battles, the cards played, who won, and the battle rules and constraints.

I'm struggling with how best to model the inputs. I think standard feature engineering and encoding of all the variables that affect the game, like mana, cards chosen, rules, etc., is the way to go.

How best to constrain the outputs? For example, the model needs to select cards I have available, etc., while the history contains cards I don't.

I know it's a long shot, but just looking for some inspiration :)

The basic rules of splinterlands are as follows:

  1. Start a battle (you are given mana and a set of rules, like no shields, etc.)
  2. Then select a summoner (some rules say it has to be of a certain type; you also want to select your highest-level summoner)
  3. Then select 6 monsters (each monster has a position and stats like attack, speed, health and mana cost)
  4. Press battle
",49829,,49829,,9/23/2021 16:13,9/23/2021 16:13,How to model the inputs and outputs of the neural network for the Splinterlands card game?,,0,3,,,,CC BY-SA 4.0 31712,1,,,9/16/2021 22:52,,0,38,"

I came across this question about MDP.

From the look of it, it seems the full MDP is reducible if the discarded states only have one way in and out, but is it really so if we change the discount factor? I think there is some tricky part to this problem...

",49598,,18758,,9/16/2021 22:58,9/17/2021 7:27,Discard irrelavant states from a MDP,,1,2,,,,CC BY-SA 4.0 31713,1,,,9/17/2021 1:13,,0,150,"

Consider the following paragraph from A.1 MULTI-MNIST AND CLEVR of A IMPLEMENTATION DETAILS from the research paper titled GENERATING MULTIPLE OBJECTS AT SPATIALLY DISTINCT LOCATIONS by Tobias Hinz et al.

In the global pathway of the generator we first obtain the layout encoding. For this we create a tensor of shape (10, 16, 16) (CLEVR: (13, 16, 16)) that contains the one-hot labels at the location of the bounding boxes and is zero everywhere else. We then apply three convolutional layers, each followed by batch normalization and a leaky ReLU activation. We reshape the output to shape (1, 64) and concatenate it with the noise tensor of shape (1, 100) (sampled from a random normal distribution) to form a tensor of shape (1, 164). This tensor is then fed into a dense layer, followed by batch normalization and a ReLU activation and the output is reshaped to (−1, 4, 4). We then apply two upsampling blocks to obtain a tensor of shape (−1, 16, 16).

The paragraph is saying that a tensor of shape (1, 164) is reshaped to (-1, 4, 4). What is the reason behind using the negative number -1? Is it representing an axis? Can't we represent it with $a \times x \times y$, where $a, x, y$ are natural numbers and dimensions of the tensor?

$\dfrac{164}{4 \times 4}$ is not a natural number, so what is the shape of the reshaped tensor using only the natural numbers?

",18758,,18758,,9/17/2021 2:31,10/12/2022 10:04,Why using negative integers (as dimensions?) in tensor shapes rather than natural numbers?,,1,7,,,,CC BY-SA 4.0 31714,1,,,9/17/2021 1:41,,2,574,"

In deep learning, we encounter the upsample blocks several times, especially when we deal with images.

Consider the following statements from description regarding UPSAMPLE in PyTorch

The algorithms available for upsampling are nearest neighbor and linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input Tensor, respectively.

Where can I read about these upsampling techniques in detail, especially in the context of deep learning?

",18758,,,,,9/17/2021 13:45,Where can I read about upsampling methods in detail?,,1,9,,,,CC BY-SA 4.0 31715,1,,,9/17/2021 2:52,,0,35,"

So here is a description of my problem:

Essentially, I have a large number of files filled with code for a number of different tasks. However, let's say this code is inefficient and should be edited to be made more efficient. There exists a program to edit this code; essentially, what it does is analyze the code to look for possible edits and classify these edits into 5 classes (Class A, Class B, Class C, Class D, Class E). There is also a program to test the efficiency of the code, which yields a numeric value quantifying the efficiency. However, some edits benefit the efficiency of a piece of code more than others. For example, a piece of code that has had 5 edits applied to it (2 of Class A and 3 of Class C) can be more efficient than the same code that has had 10 edits applied (5 of Class A, 5 of Class D). Ideally, the fewer the edits and the higher the efficiency, the better.

I want to use deep reinforcement learning to address this problem.

Here are the goals of my model:

  1. Takes the code in a form which it can understand (I have looked into this and stumbled upon code2vec, so I am looking into that)
  2. Also can take in the edits that can be applied to the code (which is the action space) and their respective classes.
  3. Makes decisions for edits which yield the highest efficiency for the code

I want to train this model on the files filled with the inefficient code. The ultimate goal is to end up with an AI that takes code as an input and makes the best decision of which edits it should apply to the code.

What deep reinforcement learning techniques/algorithms/architectures should I use to approach this problem, and how can I go about implementing these in Python? As you can probably tell from this question, I am not very experienced in deep reinforcement learning, but I am willing to learn. Any help would be appreciated!

",49831,,,,,9/17/2021 2:52,What deep reinforcement learning algorithm should I use for my problem?,,0,2,,,,CC BY-SA 4.0 31716,2,,31712,9/17/2021 7:04,,1,,"

In terms of meaningful decisions that an agent might make, then states $S_1$ and $S_2$ do seem redundant.

However, if you remove those states and the actions from them, you also remove the time step tracking that goes along with them, and this fundamentally affects the MDP.

The question gives some hints about what will be affected at the end to help you answer it. This is a non-episodic (continuing) MDP without a terminal state, so you cannot use discount factor $\gamma = 1$, because the expected return will be infinite. Any valid approach for value functions in continuing MDPs - using a discount factor, finite horizon or average reward - will give you different results in the reduced MDP due to the difference in counting time steps.

",1847,,1847,,9/17/2021 7:27,9/17/2021 7:27,,,,0,,,,CC BY-SA 4.0 31717,2,,31713,9/17/2021 7:21,,0,,"

It definitely is. If you check the code (line 145) you'll see that in the forward definition of the Stage1 Generator they do:

class STAGE1_G(nn.Module):
    def __init__(self):
        ...

    def forward(self, text_embedding, noise):
        c_code, mu, logvar = self.ca_net(text_embedding)
        z_c_code = torch.cat((noise, c_code), 1)
        h_code = self.fc(z_c_code)

        # reshape the flat fully-connected output into (batch, gf_dim, 4, 4);
        # -1 lets PyTorch infer the batch dimension
        h_code = h_code.view(-1, self.gf_dim, 4, 4)

Where self.gf_dim is a parameter defined in the configuration file of the GAN (most likely the number of feature maps, but check, because they didn't write documentation about the config settings).
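
For what it's worth, in PyTorch's view the -1 simply means "infer this dimension from the remaining number of elements"; a quick sketch (not from the paper's code):

import torch

x = torch.randn(2, 64)   # 128 elements in total
y = x.view(-1, 4, 4)     # -1 is inferred as 128 / (4 * 4) = 8
print(y.shape)           # torch.Size([8, 4, 4])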

",34098,,,,,9/17/2021 7:21,,,,0,,,,CC BY-SA 4.0 31718,2,,31602,9/17/2021 13:13,,1,,"

The direct reason that your AI always moves the top-left piece first (assuming the computer pieces take up the bottom 2 rows) is the way your model interacts with the environment.

Because you score every column separately, identical columns will always have identical scores. The way that you pick the best score means that if several columns have the same score, you will pick the left one, and if several rows have the same score, you will pick the top one. At the start of the game, since all the column scores are forced to be the same, that means it will always pick the first column. And since the AI pieces are in two rows, and those two rows are the same, and therefore must have the same scores, that means it will always pick the first row.

(Note: I didn't actually run your code so my understanding could be slightly different from what actually happens, like if the AI is at the top of the screen or if the game starts with the player moving first. The same principles should still apply)


Additionally, I don't think this is a good way to have the AI model interact with the environment. Imagine you are playing with your friends Billy and Sally. Every time you want to make a move, you print out two copies of the game board, cut one up into rows and give the rows to Billy, cut one up into columns and give the columns to Sally. Then you ask Billy to pick his favourite column and Sally to pick her favourite row and if there isn't a valid piece to move in that row and column, you slap them both.

At best, they're just going to learn to try not to get slapped. Probably by liking the rows and columns with the most pieces in them, which gives them the greatest chance that there is a piece at the location the other one picks.

And it's even worse because Billy and Sally are the same person and they don't know which cutouts are rows and which cutouts are columns.

This is clearly not a good way to play.


Your board game falls into the same category as Chess, Go and Tic-Tac-Toe - it has a game board with a finite (but large) number of states, a finite number of moves that the player can do according to the current state of the game board, you can win or lose or draw, and so on. There are not many moves or board states in Tic-Tac-Toe, more in Chess, and even more in Go.

Board games of this type are traditionally played with any kind of minimax algorithm, which looks at all the possible moves the computer could do, all the possible moves the player could do after that computer move, all the possible moves the computer could do next, and so on. Because there are way too many possible moves to check all of them (except in Tic-Tac-Toe), the computer has to skip most of the possibilities, and after looking a certain number of moves ahead it has to estimate how likely it is to win, instead of searching even farther ahead.

Both of these are where a perceptron could come in. If you can train a perceptron to predict how likely the computer is to win, you can ignore computer moves that make it unlikely to win, player moves that make the computer likely to win, and you can use it as your estimate when you decide to stop searching deeper.

The simplest version of this tree search algorithm just searches one move ahead and there is no reason to discard anything:

def get_next_move(board):
    best_move, best_score = None, float("-inf")
    for move in all_valid_moves_for_computer(board):
        board.do(move)  # only "pretending" to do the move, because we undo it afterwards
        move_score = call_perceptron(board)  # if we did this move, how good is our situation?
        if move_score > best_score:
            best_move, best_score = move, move_score
        board.undo(move)
    if best_move is None:
        # stalemate - no valid moves! game ends. do something about it here
        raise RuntimeError("stalemate: no valid moves")
    return best_move

A "move" means something you can do in the game, like moving a piece from a2 to a3. You could represent it in your program as {"from": (0,1), "to": (0,2)}. all_valid_moves_for_computer is a function that returns a list of all the moves the computer is allowed to make according to the rules of the game.


Because your game has two different goals, you might want to train a different perceptron for each. Or, you can use the same one in reverse: if you only have a perceptron that tells you whether the computer will win as a defender, but the computer is playing as the attacker, then you can pretend the attacker is the computer (by turning the player pieces into computer pieces, computer pieces into player pieces, and turning the board upside down), check how the perceptron would like to win, and then do the opposite of what it says (do the lowest scored move).

You might get away with using the same one for both, but I expect the gameplay is quite different as attacker and defender and the computer needs to be able to learn that.


Whatever algorithm you use, don't forget that you have to train it to win the game, not just to make valid moves. Actually in the tree search algorithm (see above), you have normal code that figures out what the valid moves are. Normal computer programming is already pretty good at that. The perceptron's job is to see how good the computer's situation is if it does the move. You want it to have a high score when the computer is going to win and a low score when the computer is going to lose.

You can do this by playing and recording a whole bunch of games against human friend or against yourself. Then you train the perceptron to predict whether the attacker will win or lose, just by looking at the board. Each time one player makes a move, that's a piece of training data, and at the end of the game you find out whether the labels for all those pieces of training data were 1 (more likely to win the game) or -1 (more likely to lose the game).


If you are brave, you can also try a multi-layer perceptron. It will need more and better-quality training data.

",28406,,28406,,9/17/2021 13:30,9/17/2021 13:30,,,,10,,,,CC BY-SA 4.0 31720,2,,31714,9/17/2021 13:45,,1,,"

Tricky question. In my experience it is better to just look for math resources on the classic upsampling methods, since deep learning papers and books tend to take them for granted or treat them as not really related to AI (they are, after all, analytic methods). Another reason is probably that the math is not that hard, and the Wikipedia pages already offer a good description of the basic methods (nearest neighbour, bilinear interpolation, bicubic interpolation).

For some other interesting and more advanced techniques I found this paper useful: Mathematical Techniques for Image Interpolation, even though it's not exhaustive, since it doesn't mention well-known alternatives to bilinear or bicubic upsampling like Lanczos; so, for completeness, I would also read A Study of Image Upsampling and Downsampling Filters.

Regarding the deep learning part, it's more valuable to search for papers that study not the upsampling methods themselves, but the artifacts they introduce, especially in relation to tasks like super-resolution. As a starting point, I would dig into this blog post and its references.
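
If you want to experiment with the basic methods directly, PyTorch exposes them through nn.Upsample (a quick sketch, just to compare the modes):

import torch
import torch.nn as nn

x = torch.arange(16, dtype=torch.float32).reshape(1, 1, 4, 4)  # a tiny 4x4 "image"

nearest = nn.Upsample(scale_factor=2, mode="nearest")
bilinear = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
bicubic = nn.Upsample(scale_factor=2, mode="bicubic", align_corners=False)

print(nearest(x).shape)    # torch.Size([1, 1, 8, 8])
print(bilinear(x)[0, 0])   # smoother values than the nearest-neighbour result
print(bicubic(x)[0, 0])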

",34098,,,,,9/17/2021 13:45,,,,0,,,,CC BY-SA 4.0 31721,2,,27947,9/17/2021 15:20,,1,,"

These papers are also very close to what I meant in the question (too long for a comment).

The following references come mostly from work on speech recognition.

  • Mockingjay In this work, they use an analogy of Bert architecture that is fed by Mel-spectrogram, with some audio segments "masked".
    • The model is asked to reconstruct the masked parts. To avoid the model using local smoothness of audio-data, they always mask several subsequent frames, so that reasonably long segments are being masked.
    • They evaluate on downstream "Phoneme classification tasks" using the learned features and show that these learned features are stronger than raw spectrogram, in particular if little training data is available.
  • Audio Albert Same story, but they use shared weights in the transformer layers. This significantly reduces memory and computational requirements and it is shown that results are comparable with Mockingjay (at least for phoneme-classification tasks).
  • Tera; another Bert variant where instead of masking, they use various "Alterations" of certain audio segments.
  • This and many more references are within this project with code.
",9092,,9092,,9/17/2021 16:34,9/17/2021 16:34,,,,0,,,,CC BY-SA 4.0 31722,2,,31702,9/17/2021 16:19,,2,,"

The identity function can be useful in some cases. For example, if you are doing regression, the output of your neural network needs to be a real (or floating-point) number, so you use the identity function. (If you were doing logistic regression or classification, that wouldn't probably be the case). The identity function is also used in the residual networks (see figure 1). There are probably other examples of its usage and usefulness.

Some people may not consider the identity an activation function because it does nothing to the input. However, whether you consider the identity an activation function or not is a matter of convention, in the same way that considering a model with only identity (aka linear) functions a real neural network or a perceptron a neural network is a matter of convention.

I don't think there's a consensus on this subject yet. In fact, you will often hear (or see) people say (or write) that they use "no activation (function)" or a "linear activation" rather than saying that they use the "identity" (for example, see the documentation for the parameter activation here).
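
For example, a minimal sketch in Keras (my own illustration): a regression model typically ends with a layer whose activation is None, i.e. the identity:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation=None),  # linear/identity output for regression
])
model.compile(optimizer="adam", loss="mse")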

",2444,,,,,9/17/2021 16:19,,,,0,,,,CC BY-SA 4.0 31723,1,31725,,9/17/2021 19:34,,0,228,"

Suppose that I have a 1D dataset with 6 features.

Can I apply a 2D convolutional neural net to this dataset?

",20721,,2444,,9/20/2021 13:38,9/20/2021 13:38,Is it possible to apply 2D convolution to 1D data?,,2,0,,,,CC BY-SA 4.0 31724,1,31728,,9/17/2021 19:36,,1,138,"

Is CNN only applicable to time-series data or image data?

When should we use CNN instead of MLP?

",20721,,2444,,9/20/2021 13:36,9/20/2021 13:36,When should we use CNN instead of MLP?,,1,1,,,,CC BY-SA 4.0 31725,2,,31723,9/17/2021 22:26,,1,,"

This depends on the engine you use, but in general, yes, of course.

For example, in TensorFlow the height and width are separate dimensions, so nothing stands in your way of setting one of them to 1 to hold 1D data.
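
For instance, a minimal sketch in Keras (my own example, with made-up shapes): the 6 features can be laid out as a 6x1 "image" with one channel and fed to a 2D convolution:

import numpy as np
import tensorflow as tf

x = np.random.rand(32, 6)   # 32 samples, 6 features each
x = x.reshape(32, 6, 1, 1)  # height=6, width=1, channels=1

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, kernel_size=(3, 1), activation="relu", input_shape=(6, 1, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1),
])
print(model(x).shape)       # (32, 1)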

",49564,,,,,9/17/2021 22:26,,,,0,,,,CC BY-SA 4.0 31726,1,,,9/18/2021 3:36,,2,15,"

Consider the following statements from the research paper titled Deep Residual Learning for Image Recognition by Kaiming He et al.

#1:

We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.

#2:

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $\mathcal{H}(x)$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}(x) := \mathcal{H}(x)−x$. The original mapping is recast into $\mathcal{F}(x)+x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

The research paper uses the word "unreferenced" twice for a function, which I think is the actual function $\mathcal{H}(x)$. In normal deep neural networks, the function $\mathcal{H}(x)$ is learned directly. But in residual neural networks $\mathcal{F}(x)$ is learned, and hence $\mathcal{H}(x)$ is obtained as $\mathcal{F}(x) + x$.

Why does this paper call the original function $\mathcal{H}(x)$ an unreferenced function?

",18758,,2444,,9/18/2021 14:23,9/18/2021 14:23,Why actual mapping is called as unreferenced mapping in this context of residual framework?,,0,0,,,,CC BY-SA 4.0 31727,2,,31723,9/18/2021 7:16,,0,,"

You can certainly reshape the data to make it fit a 2D network. You could set the width or height to 1 as suggested by DamirTenishev, but you could also set the features/channels to 1 and treat the features as a height, if you wanted to convolve that way (would be a bit strange).

",28406,,,,,9/18/2021 7:16,,,,0,,,,CC BY-SA 4.0 31728,2,,31724,9/18/2021 7:23,,3,,"

CNN applies the same filters to every "chunk" of the input data. It's applicable when you think every chunk should be processed the same way.

For example, we think a face in the top-left of the image should be recognized just as well as a face in the bottom-right. So we do expect to process each part of the image the same way, and a CNN is good for this.

If we did not use a CNN, the network would need to learn separately at each position. It would learn to recognize faces in the top-left based on training examples where faces were in the top-left, and it would learn to recognize faces in the bottom-right based on training examples where faces were in the bottom-right. Because a CNN necessarily uses the same weights on each part of the image, seeing a face anywhere helps it update the shared weights which helps it recognize faces in every part of the image. Therefore, less training data may be needed.

A CNN might not be so good at classifying MNIST digits, for example, for the same reason. A /-shaped line in the top-left probably indicates a 6, but a /-shaped line in the bottom-right is more likely to indicate a 9, so we actually don't want the same weights there. It could still learn to sort this out in its final dense layer, of course. But if you wanted to find digits in a larger image you'd use a CNN.

",28406,,,,,9/18/2021 7:23,,,,0,,,,CC BY-SA 4.0 31729,1,,,9/18/2021 7:28,,-1,54,"

Looking for suggestions on how to define the following NLP problem and different ways in which it can be modeled to leverage machine learning. I believe there are multiple ways to model this problem. Deep-learning-based suggestions also work, as a good amount of data is available for training.

Will evaluate different approaches for the given dataset. Please share relevant papers, blogs, or GitHub repos. Thanks!

Input: Given a sentence S having words W1 to W10.

S = W1 W2 W3 W4 W5 W6 W7 W8 W9 W10

The sentence has some syntactic and semantic patterns; it is not exactly freely written natural language, but it is in English. W1 to W10 are words, some of which can be punctuation.

Output: should be something like this.

Label1 - W4

Label2 - W3

Label3 - [W2 W1] continuous // semantically related. Means the words [W2 W1], in order, are assigned Label3. Solutions that don't output them in order are also okay.

Label4 - [W6 W8]

Label5- W10

Noise- W7, W9. Means words W7 and W9 are independently assigned the Noise label.

Label7- W5

I need to solve this problem and am looking for research/thoughts on how it can be defined in different ways to exploit different patterns in the structure of the sentences. I am also looking for similar tasks that are already defined in NLP, such as token labeling or parsing, which could be used.

Would be really helpful to get the suggestions to the latest research on solving/defining this problem.

",49848,,49848,,9/18/2021 19:49,5/3/2022 17:01,NLP problem Phrase/Token labeling,,1,2,,,,CC BY-SA 4.0 31730,1,,,9/18/2021 9:06,,1,86,"

Background:

I am working on a research project to use (demonstrate) the possibilities of Machine Learning and AI in artistic projects. One thing we are exploring is demonstrating deep fakes on stage. Of course, a deep fake is not easy to make. Still, we are exploring creating a "minor quality" deep fake live on a stage (or maybe in some other ways where people can make deep fakes of themselves) in which we put words into someone's mouth. I discovered that a semi-nice deep fake of the facial movements is possible; now I also want to add voice.

There are a lot of text-to-speech systems, which allow using a voice that is created from the recording of the voice of a real person. That is already nice.

The video is based on the facial movements of another person, so the audio has to match the facial movements. The easiest way would be if the "fake" voice says the words in exactly the same way as the person doing the "facial acting" for the deep fake said them.

Question:

Is it possible to do a fake voice of a person in this way:

  1. Another person (the actor, or source) speaks the words in his voice and records it.
  2. The person with the faked voice (the destination) gives a voice sample with some random spoken text.
  3. An AI/algorithm/whatever modifies the recording from (1) in such a way that the tone/voice matches the voice from (2).

Do systems/research like this exist? I did not find anything using google, but maybe I did not use the correct keywords.

",49849,,2444,,9/20/2021 13:19,9/20/2021 13:19,Is Speech to Speech with changing the voice to a given other voice possible?,,0,4,,,,CC BY-SA 4.0 31731,1,31732,,9/18/2021 12:46,,2,144,"

In the image below taken from a Youtube video, the author explains that the neural network can be used to fit a relational graph for a set of data points shown by the green line. And that this is accomplished by using weights, biases and activation functions.

My slight confusion is that, initially, the weights and biases are randomized, and they are re-adjusted by backpropagation. This means that, at the end of the output layer, we must have the actual values of the target function anyway.

So what problem does the neural network really solve?

So, for example, if we want to find the target function for dosage and efficacy, we are given the data points shown in blue. If we initially choose randomized values for the weights, biases and activation function, then, at the output layer, we determine an output value for efficacy, but there is no way to know whether this value is in fact correct or not. So, we need the actual values to determine the difference.

What about when we choose a value of dosage which has not been observed, for example, 0.25? Doesn't this rely upon a best-fit relation graph that has already been fitted to the data prior to adjusting the neural network?

",49852,,2444,,9/20/2021 13:32,9/20/2021 13:32,What problem does the neural network really solve?,,1,2,,,,CC BY-SA 4.0 31732,2,,31731,9/18/2021 14:52,,0,,"

This means that, at the end of the output layer, we must have the actual values of the target function anyway.

Yes, this is necessary for supervised learning. You will often see this called a labelled dataset, where the "label" is an output value that you know is associated with each input. A set of labels associated with some inputs that you have collected for training may also be called the "ground truth".

We do not need all possible values though, but enough examples that the neural network can interpolate between them. How many examples that is depends on the complexity of the function we want to learn.

So what problem does the neural network really solve?

There are three main things it solves, and these are shared with most other machine learning approaches:

  • The neural network learns a function from examples of input and output.

  • The neural network will learn an expected value (or for classifiers, a probability distribution) when trained using noisy or stochastic data.

  • The neural network makes few assumptions about the relationship between input and output, and can learn successfully even for quite complex relationships.

These traits are all useful when you do not have a strong sense of what the correct function should be, in terms of writing an equation, but do have many examples of it.

What about when we choose a value of dosage which has not been observed, for example, 0.25?

The neural network will still produce an output. In a very simple scenario, where you had trained with example inputs at e.g. 0.2 and 0.3, the output will likely be somewhere between the outputs for those two values. For a neural network, this in-between value can be much more sophisticated than a simple mean of the nearest examples. ML that uses the nearest values exists; it is called k-nearest neighbours.

If this process has worked well, and the trained neural network produces useful, accurate predictions from unseen inputs, it is said to generalise well. Very often, that is the goal for training a neural network or other machine learning system.

It is worth mentioning some additional facts and features of training neural networks (and ML in general):

  • When generalisation is the goal (and it often is), then you need to test for it. This is done by keeping some example data back, not using it to train, but instead using it to check results on unseen inputs. In fact, this is so important, the data is often split into three sets - a training set, a cross-validation (aka development) set, and a test set.

  • The lack of assumptions in the basic model can lead to needing many training examples. If you know something specific or useful about the function being learned, you can pre-process the inputs to help - this is called feature engineering.

  • Machine learning can be good at interpolation, i.e. calculating outputs for inputs that are not in the original training data, when the unseen inputs are in-between or close to the training examples. Even when good at interpolation, it will still be bad at extrapolating to new unseen inputs that are outside of the ranges of the training examples. That is because it has used a very general/flexible system to fit some line or curve to examples, it has not learned an analytical function.

Exceptions to all of these points exist. They are the norm, but it will depend on the details of what you are trying to do.

",1847,,2444,,9/20/2021 13:25,9/20/2021 13:25,,,,3,,,,CC BY-SA 4.0 31733,1,31735,,9/18/2021 15:03,,3,94,"

Is it possible to use any particular strategy to explore (e.g. metaheuristics) in on-policy algorithms (e.g. in PPO) or is it only possible to define particular policies to explore in off-policy algorithms (e.g. TD3)?

",49444,,2444,,9/20/2021 0:26,9/20/2021 0:26,Is it possible to apply a particular exploration policy for the on-policy RL agents?,,1,0,,,,CC BY-SA 4.0 31735,2,,31733,9/18/2021 15:44,,1,,"

In part it depends on the on-policy method you are using. In general you are not free to change the policy arbitrarily for on-policy policy gradient methods such as PPO or A3C.

However, if you are willing to consider the added exploration strategy as part of the current target policy, and can express it mathematically, you should be able to add an exploration term to on-policy approaches:

  • For value-based on-policy methods like SARSA, there is no requirement to base the current policy on the learned value function. However, you will probably want to reduce the influence of the exploration heuristic over time, otherwise the algorithm may not converge. A simple way to do this would be to weight the heuristic for each action, add it to the current value estimates when deciding the greedy action, and slowly decay the weight of the heuristic down to zero over time (a small sketch of this idea follows this list).

  • For policy gradient methods, the adjustment is harder. Your heuristic needs to be introduced under control of a parameter of the policy function, and should be differentiable. You might be able to do this simply and directly, but it will depend on details. For some exploration functions it may not be possible at all.
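
As promised above, a rough sketch of the value-based idea (my own illustration; the value estimates, the heuristic and the decay schedule are placeholders for your own quantities):

import numpy as np

Q = np.array([0.0, 0.3, 0.1, 0.05])          # learned action-value estimates for some state
heuristic = np.array([1.0, 0.2, 0.5, 0.0])   # exploration bonus per action (your own heuristic)

def greedy_action(step, decay=0.999):
    weight = decay ** step                   # heuristic influence shrinks towards zero over time
    return int(np.argmax(Q + weight * heuristic))

print(greedy_action(step=0))       # 0 -- early on, the heuristic dominates
print(greedy_action(step=10000))   # 1 -- later, the learned values dominate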

Perhaps a more tractable approach to work with on-policy policy gradient methods would be to pre-train the policy network to approximate the heuristic function. The policy would then start like the heuristic then evolve towards optimal control. This works provided your heuristic outputs a probability distribution that is only dependent on current state, and without additional information such as number of times the same action was already taken.

If you want to explore using a heuristic without decaying it, or losing the exploration in the longer term, but still target an optimal policy, then you must use an off-policy method. Most off-policy methods are extensions of on-policy versions that have been adjusted/extended to deal with a split between behaviour and target policies. They are necessarily more complex as a result.

If you want to use a custom exploration function with policy gradients, then you may have some luck adjusting Deep Deterministic Policy Gradient (DDPG) where the exploration function is already a separate component and could be replaced - there are already a couple of variants in use.

",1847,,1847,,9/18/2021 15:55,9/18/2021 15:55,,,,2,,,,CC BY-SA 4.0 31736,1,31741,,9/18/2021 18:16,,0,310,"

I am a researcher in a field, and new to the whole of AI and machine learning techniques. Maybe the following question is trivial or not framed in ML language, but I'll try my best.

I have two sets of representations (I can extract feature vectors, etc., from the datasets) from vastly different domains. I want to find whether any relationship exists between these two sets. In other words, I want an algorithm (or the idea of an algorithm) that learns both representations, finds the connections, and converts one representation to the other.

There is no apparent one-to-one correspondence, nor do the two representations need to be the same length.

Any suggestion on how to approach this problem is appreciated.

I thought of one method: write an encoder-decoder for each of these representations separately and swap the decoders. I am not sure whether it works or not, and besides, I may not have any idea of what's going on there.

I prefer a general approach if it exists.

",49858,,2444,,9/19/2021 0:42,10/19/2021 1:04,"How to find ""relationships"" between two data representations?",,1,3,,,,CC BY-SA 4.0 31737,1,,,9/18/2021 18:25,,5,431,"

I am doing my MSc thesis on deep learning. My model takes many hours to train. Part of what I do is trying different parameters and settings hoping that they will achieve different results. But I often notice that the result differences are too small to conclude whether the set of parameters A is better than B. Sometimes it happens that on a first run, set A seems to work better than B, but on a second run the opposite is suggested.

The logical approach for that would be to repeat experiments and average out. But it seems impractical given that each run could take so many hours.

I wonder how expert AI researchers deal with that: do they perform multiple experiments, even if this takes extremely long? Do they draw conclusions from single runs?

",48816,,2444,,9/19/2021 13:59,9/19/2021 15:12,Should I repeat lengthy deep learning experiments to average results ? How to decide how many times to repeat?,,2,1,,,,CC BY-SA 4.0 31739,2,,31569,9/18/2021 19:06,,3,,"

The following link satisfied my inquiries:

https://www.mdpi.com/1999-4907/12/2/131/htm

I hope this is useful for someone else! Justin

",38271,,,,,9/18/2021 19:06,,,,1,,,,CC BY-SA 4.0 31740,2,,31633,9/18/2021 19:34,,0,,"

I looked at this not long ago. You need to understand that the slide is referring to an optimal (not expected) value function & optimal Action-Value function. Let's look at his diagram. From left to right you have state C1 = 6 for its optimal value, C2 has 8 and C3 has 10.

If we take C3, we can see there are 2 choices for the transition: 1) go to the pub for a reward of +1, or 2) study for a +10 reward. There are no probabilities assigned to our decision, so we will take the action that maximizes our action-value. So, being in C3 and deterministically choosing to study gives a reward of 10. The action is studying and the reward is 10, so the action-value is 10 + the undiscounted value of the next state. From there you end up in the sleep state (with value 0) with no further reward.

If you are in C3 and deterministically choose the action to go to the pub, you receive a reward of +1 with 3 possibilities of states/values. Given this action value of C3 & going to pub, you can apply an expectation over values of where you end up (not in the choice you make from C3). Since you can end up in C1, C2 or C3 with their own probabilities and values, you end up with an action-value of 8.3 by choosing to go to the pub.

Likewise for calculating the optimal action values for C1 and C2: you see which deterministic action gives the maximum sum of reward and next-state value.

",49844,,,,,9/18/2021 19:34,,,,0,,,,CC BY-SA 4.0 31741,2,,31736,9/18/2021 20:33,,1,,"

Well, I suppose one can use some kind of contrastive learning in this case.

A famous example of establishing a relation between two different representations is CLIP (Contrastive Language–Image Pre-training), where the model gets a huge corpus of images and image captions. The caption is passed through a language model and the image itself through a convolutional or ViT backbone, and the model learns to make the image embedding (the output of the visual model) similar to the text embedding of its own caption and dissimilar (in the sense of cosine distance) to the embeddings of captions that belong to other images in the training dataset.

In case your two data representations allow for the notion of similar and dissimilar, I expect you can apply a similar procedure.
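
A very rough sketch of such a contrastive objective (my own simplification of a CLIP-style loss; it assumes you already have an encoder producing an embedding for each of your two representations):

import torch
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, temperature=0.07):
    # emb_a[i] and emb_b[i] are embeddings of the *same* underlying item from the two domains
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / temperature   # cosine similarities between all pairs in the batch
    targets = torch.arange(len(a))     # the matching pair is on the diagonal
    # symmetric cross-entropy: pull matching pairs together, push the rest apart
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

emb_a = torch.randn(8, 64)   # batch of 8 embeddings from domain A (e.g. output of encoder A)
emb_b = torch.randn(8, 64)   # embeddings of the corresponding items from domain B
print(contrastive_loss(emb_a, emb_b))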

",38846,,,,,9/18/2021 20:33,,,,1,,,,CC BY-SA 4.0 31743,1,,,9/18/2021 23:47,,1,32,"

After reading about YOLO V3 and Faster R-CNN, I don't understand why the weights for the regression head aren't the same across all boxes of the same size. Given that the backbone of these systems is fully convolutional, the location of the outputted features should only depend upon the local region of the image which telescopes to that feature map. Given that we want the object detector to behave the same way regardless of the object location in the image, shouldn't the weights be the same across anchors of the same size?

",32390,,2444,,9/19/2021 23:58,9/19/2021 23:58,"In anchor based object detection, why don't the anchors share the same weights?",,0,0,,,,CC BY-SA 4.0 31744,1,31783,,9/19/2021 4:44,,3,88,"

I am new to RL and I am following Sutton & Barto's book.

My doubt is this: when we talk about the policy of our agent, we say it is the probability of taking some action $a$ given the state $s$. However, I think that the policy should be defined in terms of observations, not states, because it is not always possible for an agent to fully capture the state, for various reasons, such as a lack of sensors or a lack of memory.

So, why are policies defined as functions of states and not observations?

",40229,,2444,,9/19/2021 23:44,9/21/2021 15:06,"In reinforcement learning, why are policies defined as functions of states and not observations?",,1,5,,,,CC BY-SA 4.0 31746,1,,,9/19/2021 8:52,,1,90,"

What features of an image are linear or non-linear? Any examples?

",49870,,22301,,9/22/2021 12:45,9/22/2021 12:45,What are Linear and Non-Linear Features of an image in the context of Convolutional Neural Network?,,0,1,,,,CC BY-SA 4.0 31749,2,,31737,9/19/2021 10:15,,3,,"

I wonder how expert AI researchers deal with that, do they perform multiple experiments, even if this takes extremely long? Do they draw conclusions from single runs?

Unfortunately, the question you ask in the main body of your question here ("how do expert AI researchers do things") often turns out to actually be different from the question in your title ("how should I do things").

The very short answer would be that, ideally, you run as many repetitions as you can, but in practice it is indeed very often not feasible to do more than one or a handful.


For the long answer, I think it actually may be quite different also depending on what kind of machine learning you are doing. Personally, I am much more familiar with Reinforcement Learning and similar kinds of problems, i.e. problems where we're generating the "training" data ourselves by making agents act in environments, and similarly also evaluating by again making trained agents act in environments and measuring their performance. I'm not as familiar with the state-of-the-art research in "standard" machine learning tasks like classification/regression, but probably the problem is much worse in RL-style problems because:

  1. We actually have to generate our own data, which takes a huge amount of time (sometimes much more time than the actual training itself takes)
  2. There is often a lot of randomness in how we generate our training data, so we often have very different training datasets in different runs, which can of course also lead to wildly different levels of performance across different runs
  3. We pretty much always compute gradients from subsets (batches) of data in RL, which is again a source of randomness. In contrast, in supervised ML you could consider estimating gradients from the entire dataset at once per epoch, and then at least that part of your training process becomes deterministic (and hence replicable).

For the case of RL, recently a paper titled "Deep Reinforcement Learning at the Edge of the Statistical Precipice" appeared on arXiv, which gets into various tools and approaches you can use to more reasonably draw principled conclusions even from a small handful of repetitions of your training process (in better ways than the common practice of just reporting a mean, or even worse, just reporting the best result).

For supervised ML, some similar techniques may be applicable, but the need for this may also be less great (especially if you have little randomness in your training process).


Part of what I do is trying different parameters and settings hoping that they will achieve different results. But I often notice that the result differences are too small to conclude whether set of parameters A is better than B. Sometimes it happens that on a first run, set A seems to work better than B, but on a second run the opposite is suggested.

For this specific situation, I would be interested to know by how much A is sometimes better than B, and B is sometimes better than A. If we're talking about like 0.1% differences in accuracy either way at a baseline accuracy of 99.5%, that probably just means they're both equally good. If you were "hoping" for one of them to outperform the other, well then that's probably disappointing... but for the overall training process in general, you could actually draw a more positive conclusion that, apparently, it is robust; it performs similarly across different runs even with different parameters!

On the other hand, if you observe differences in performance of significant magnitudes (like, 80% vs 90% accuracy), but sometimes A better by that much and sometimes B better by that much... then this strongly suggests that you indeed are dealing with high variance and you're going to have to do many repetitions to get statistically meaningful results.

",1641,,,,,9/19/2021 10:15,,,,0,,,,CC BY-SA 4.0 31751,2,,31737,9/19/2021 15:12,,1,,"

In addition to the other good answer, if you can afford the luxury of running your experiments multiple times, you can also use hypothesis testing to test whether there is any significant difference between the performance (e.g. accuracy) of the two models.

Hypothesis testing is not widely used (or, at least, reported in research papers) in the ML/DL community (given that, to apply these tests, you often need to repeat the experiments several times, which can be expensive, as you already know), but there are some researchers (for example, see this paper) that often use these statistical tools.

I must also admit that I don't know how valuable they can really be. I've used them in the past, but not enough to say that they can really be useful in the context of machine learning.
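For instance, if you can afford a handful of repeated runs per model, a paired test on the per-run accuracies is a common choice. A minimal sketch with made-up numbers (assuming each pair of runs shares the same seed/data split, so the samples are paired):

import numpy as np
from scipy import stats

# Hypothetical test accuracies of two models over 5 repeated runs each.
acc_a = np.array([0.812, 0.805, 0.821, 0.798, 0.815])
acc_b = np.array([0.803, 0.809, 0.810, 0.801, 0.806])

t_stat, p_t = stats.ttest_rel(acc_a, acc_b)    # paired t-test
w_stat, p_w = stats.wilcoxon(acc_a, acc_b)     # non-parametric alternative
print(f"paired t-test p = {p_t:.3f}, Wilcoxon p = {p_w:.3f}")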

",2444,,,,,9/19/2021 15:12,,,,0,,,,CC BY-SA 4.0 31753,1,,,9/19/2021 18:49,,0,70,"

I am a beginner with DL. I did some tutorials and I know the basics of TensorFlow. But I have a problem understanding how to construct more advanced NNs.

Let's say I have 6 inputs and a list of 500 names, from which you can pick any, but only 6 at a time. The output should be one value between $0.0$ and $1.0$.

My question is: how can I handle a random order in the inputs? If inputs 1-6 contain the names ABCDEF and the output score is 0.7, I need the same output if the input comes in the order CEDBFA. How can I handle this? Should I randomly shuffle the inputs during training, or should I build, for every example, a 500-dimensional binary vector like $[0,0,1,0,1,...,0,1,0,0,0]$, where the index position in the array corresponds to the token of a name, and then feed it into 500 inputs? Or is there some better way?

",49876,,2444,,9/20/2021 0:06,10/15/2022 16:03,How to handle random order of inputs and get same output?,,1,0,,,,CC BY-SA 4.0 31754,1,,,9/19/2021 19:04,,2,292,"

Context: I was reading Chapter 3 in the following book (here) about graph representation learning. Before I get to node embeddings, I wanted to make sure that I do understand what is meant by the phrase 'node features' used numerous times throughout the book. Examples are as follows:

Chapter 5, page 50:

Node Features: note that unlike the shallow embedding methods discussed in Part I of this book, the GNN framework requires that we have node features $\mathbf{x}_{u}$, $\forall u \in \mathcal{V}$ as input to the model. In many graphs we will have rich node features to use (e.g. gene expression features in biological networks or text features in social networks)....

Question: What is a simple, concrete example of different node features? I have read the paragraph above, but I am not sure whether I have interpreted it correctly. For example, if we imagine a social network of some friends, would some example node features be: address, age, height, weight, etc.? Would it be as simple as that? What are some more advanced/subtle bits of information that could be counted as node features? Perhaps one could be 'number of friends' (i.e. the degree of the node), but what about others?

",49689,,2444,,9/19/2021 23:51,9/19/2021 23:51,What are examples of node 'features' in graph networks?,,0,0,,,,CC BY-SA 4.0 31755,1,,,9/19/2021 19:48,,1,24,"

I was reading Chapter 3 from the following book (here) on graph representation learning. The chapter is about node embeddings.

Question: What is the point of using node embeddings?

Do we use them:

  • to save space/memory by mapping our graph into a form which is of lower dimension?
  • to find a representation of the graph which can be fed into a neural network?
  • perhaps some other reason?

It isn't clear to me what the actual purpose/end-goal of finding such a representation is.

Any help would be greatly appreciated as I have also been reading about graph neural networks, which aim to find an embedding for the nodes (but I don't understand why that is of any use to us).

",49689,,2444,,9/19/2021 23:49,9/19/2021 23:49,What is the reason behind using node embeddings?,,0,0,,,,CC BY-SA 4.0 31756,1,,,9/19/2021 23:39,,0,37,"

I'm new to the field of ML so please bear with me while I try to explain what I'm looking for. In most machine learning pipelines that deal with images there is a requirement to "normalize" the data in some way so that images of different dimensions can be used as inputs for the function that is being optimized. As in, if the function takes its input as an $n\times n$ grid of pixels (assuming we're dealing with 1-channel images) then any image that is not of the right shape must be re-shaped so that it can be used as input. We can assume $n = 2$ without losing any generality because any larger image can be reduced to the $2\times 2$ case for what I'm about to describe.

So if we assume we have a $2\times 2$ image then there is an obvious way to map such an image to a function defined on $[0,1]\times [0,1]$ ($f:[0,1]\times[0,1]\rightarrow\mathbb{R}$) by using convex combinations of the points in the image. If the points of the image are labeled as $x_{00},x_{01},x_{10},x_{11}$ where $x_{00}$ is the top left corner and $x_{11}$ is the bottom right corner then given a point $(a,b)\in[0,1]\times[0,1]$ the value of $f$ at $(a, b)$ can be defined as $$f(a, b) = (1-b)((1-a)x_{10}+ax_{11})+b((1-a)x_{00}+ax_{01})$$

Assuming I got all the signs right it's obvious that this idea can be extended to any grid of pixels by mapping the horizontal and vertical dimensions to $[0,1]$ and then interpolating between the grid points as in the $2\times 2$ case. So this mapping from grids of pixels to functions provides a uniform representation for all images as functions defined on $[0,1]\times[0,1]$.
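For what it's worth, the $2\times 2$ mapping described above is easy to write down directly; a minimal sketch (larger grids would interpolate between neighbouring grid points in the same way):

import numpy as np

def image_to_function(img):
    # img[0, 0] is the top-left pixel (x_00) and img[1, 1] the bottom-right (x_11),
    # matching the labelling above.
    x00, x01 = img[0, 0], img[0, 1]
    x10, x11 = img[1, 0], img[1, 1]

    def f(a, b):
        return (1 - b) * ((1 - a) * x10 + a * x11) + b * ((1 - a) * x00 + a * x01)

    return f

f = image_to_function(np.array([[1.0, 2.0], [3.0, 4.0]]))
print(f(0.0, 1.0), f(1.0, 0.0))   # recovers x_00 = 1.0 and x_11 = 4.0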

Now my question is the following: is there any work that tries to use this kind of representation of images and, if there isn't, does anyone know what exactly the obstructions to doing so are? It's possible I'm missing something that makes this approach non-viable, but I wasn't able to find anything that explained one way or the other why the usual tensor representation is preferable to a functional one as above that reduces all images to functions on $[0,1]\times[0,1]$.

",49880,,,,,9/19/2021 23:39,Uniform representation of images for machine learning,,0,5,,,,CC BY-SA 4.0 31757,2,,31703,9/20/2021 0:20,,1,,"

Yes, typically, a fully connected layer is an affine transformation, which may or may not be followed by a non-linear activation function. In many (if not most) cases, it is followed by a non-linearity, such as ReLU, sigmoid, or tanh (an exception is the output layer when you do regression), and this is what makes the neural network able to approximate non-linear/complicated functions.
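As a minimal Keras illustration (layer sizes are arbitrary), the same Dense layer type can be used either as a purely affine map, e.g. for a regression output, or followed by a non-linearity, as in a typical hidden layer:

import tensorflow as tf

inputs = tf.keras.Input(shape=(10,))

# Affine transformation only: y = Wx + b (e.g. a regression output layer).
linear_out = tf.keras.layers.Dense(1)(inputs)

# Affine transformation followed by a non-linearity (a typical hidden layer).
hidden = tf.keras.layers.Dense(32, activation="relu")(inputs)

model = tf.keras.Model(inputs, [linear_out, hidden])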

",2444,,,,,9/20/2021 0:20,,,,0,,,,CC BY-SA 4.0 31758,1,31774,,9/20/2021 2:42,,0,43,"

In the book Learning from Data written (by Abu Mostafa), we have the following exercise:

Let $\rho$ be the minimum attainable value of $y_n(W^{*T}X_n)$, where $W^*$ is the vector that separates the data. Show that $\rho > 0$. Also assume the Perceptron Learning Algorithm is initialized with the zero vector.

How to prove the above statement?

I thought that it could be negative since a Perceptron function returns either +/-1?

I even wonder whether I am understanding this proof question correctly.

",49794,,2444,,9/21/2021 13:49,9/21/2021 13:49,"How to show $\rho > 0$ when $\rho$ be minimum attainable from $y_n(W^{*T}X_n)$, where $W^*$ the vector that separates the data?",,1,0,,,,CC BY-SA 4.0 31760,1,,,9/20/2021 3:40,,2,85,"
User: What is the tallest mountain?
Agent: Everest
User: Where is it located? # Agent hears: "Where is Everest located?"
Agent: Nepal

I want to be able to generate a response using the user's current query as well as the past conversation.

More specifically, I am using Google's T5 for closed-book question answering, but, instead of trivial questions, we use the user's frequently asked queries.

I want to be able to encode their past questions and the agent's past answers, then use them to generate the agent's next answer. How can I do that?

",49061,,2444,,9/20/2021 12:56,9/20/2021 12:56,How to generate a response while considering past questions as well?,,1,1,,,,CC BY-SA 4.0 31762,1,,,9/20/2021 8:16,,0,47,"

For image datasets, there may be a bounding box for each image in the dataset. It is an annotation for an image: a rectangular box intended to focus on something inside the image.

I read about the following two types of representations for a bounding box.

  1. using two points $(x_1, y_1)$ and $(x_2, y_2)$.

$$<x_1><y_1><x_2><y_2>$$

  2. Using a point $(x_1, y_1)$, width, and height. $$<x_1><y_1><width><height>$$

How should I understand both representations? Specifically, is the point $(x_1, y_1)$ used to denote the top-right, top-left, bottom-right, or bottom-left corner in both cases?

",18758,,18758,,9/21/2021 0:59,11/16/2022 6:07,"How to understand the common practices followed for writing a ""bounding box"" for an image in datasets?",,2,1,,,,CC BY-SA 4.0 31763,2,,31760,9/20/2021 8:22,,1,,"

What you want to look for is called anaphora resolution. You basically keep a record of the past conversation and try to find an antecedent for any occurrence of pronouns such as it, he/she, her/his, etc. You probably want to have a pre-processing step where you substitute the antecedent before passing the input sentence on to the agent.
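A minimal sketch of such a pre-processing step; resolve_pronouns is a hypothetical stand-in for whatever coreference/anaphora resolver you choose:

def resolve_pronouns(question, history):
    # Hypothetical, naive resolver: replace "it" with the most recent answer
    # entity from the history. A real implementation would use a proper
    # coreference resolution library instead of string replacement.
    last_entity = history[-1][1] if history else ""
    return question.replace(" it ", f" {last_entity} ")

history = [("What is the tallest mountain?", "Everest")]
user_question = "Where is it located?"

resolved = resolve_pronouns(user_question, history)  # "Where is Everest located?"
# Pass `resolved` (not the raw question) to the QA model, then append
# (user_question, agent_answer) to `history` for the next turn.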

",2193,,,,,9/20/2021 8:22,,,,0,,,,CC BY-SA 4.0 31764,2,,24788,9/20/2021 10:56,,1,,"

You could handwrite different templates and choose probabilistically, according to writing style or pragmatic effects like irony and so on, but that very much depends on the domain. If you have tabular data from which you want to generate text, you should probably forget about GPT and the like. You only have limited control (despite copy mechanisms) over the generation process; you actually predict the next most probable word sequence for a given length. GPTs don't author coherent text across paragraphs, especially not when the text length is more than a few hundred words.

Check the linguistic counterpart (https://arxiv.org/abs/1703.09902) to end-to-end generation systems: breaking up the networks, pipelining again, and using networks for controllable sub-tasks, e.g. building a module that selects which attributes to produce. Create RDF triples from the column head words and values in your database. Take a text-to-text transfer model (Google's T5) and transform them into surface text. You should also have a look at the WebNLG challenge (https://webnlg-challenge.loria.fr/challenge_2020/). This might help. Much of this is still open research. I am quite active in this topic, so feel free to ask.

",49892,,,,,9/20/2021 10:56,,,,3,,,,CC BY-SA 4.0 31765,2,,30157,9/20/2021 11:31,,1,,"

This is rather something for V&L models, trained on texts associated with circuit images, data which should be hard to come by. I doubt these models are yet capable of catching enough detail in pictures to produce the desired results, i.e. results that don't dissolve in smoke when you solder the circuit.

Mapping from natural language to some formal description language of circuits (let's call it fdloc) is definitely possible, but you need a lot of text-based training data, let's say more than 1k pairs of human-written texts and the corresponding fdloc expressions. Do you have this data? Then machine translation networks are all you need. Fine-tuning a large system might be enough, depending on how your fdloc is syntactically built, e.g. does it contain many characters that a normal GPT model would just not recognize as tokens?

Alternatively, if you don't have this data and don't want to get rendered circuit images, you could try to learn the soldering paths to the respective electric components by natural language understanding:

Just a draft:

  1. Use NER (Named Entity Recognition) to spot all electric components in the text.
  2. The text between these components most probably deals with how to connect the different capacitors etc., so you need to map these relations onto "paths" that you register as connections between your components, e.g. "now solder the negative connector of the switch S1 to the blabla of the light bulb socket B2" or something comparable.
  3. Watch out! It's natural language, so even if it's written in a dry and technical style, you will have to resolve co-reference, something like "now connect THIS to the output jack ...". I think you could extract some sort of a directed graph, which contains the components and the path descriptions of connections to all other components.
  4. From this you then need to build a schematic of the circuit. Alternatively, If you have something like a fdloc some program can read in and produce the circuit for you, then you don't need to do step 4 on your own.
",49892,,49892,,9/20/2021 11:39,9/20/2021 11:39,,,,0,,,,CC BY-SA 4.0 31766,2,,28570,9/20/2021 11:46,,0,,"

There are other possible metrics, e.g. METEOR and BLEURT. They compensate for some of the basic problems because of which most researchers would like to avoid BLEU. The downside of not using well-known metrics is that your model is even harder to evaluate against other candidates. If you compare against a human gold-standard corpus, you should not count on BLEURT too much, since it is actually intended to evaluate two systems against a gold corpus and tell you which is better.

",49892,,,,,9/20/2021 11:46,,,,0,,,,CC BY-SA 4.0 31767,2,,23809,9/20/2021 11:58,,0,,"

This is probably an issue of complete underfitting. How much training data do you use? What is your vocabulary size? What is your batch size, and for how many epochs did you train? Transformers always need more data than RNNs to reach good text quality.

",49892,,,,,9/20/2021 11:58,,,,0,,,,CC BY-SA 4.0 31768,2,,23786,9/20/2021 12:00,,0,,"

This is the task of so-called V&L (vision and language) models, which effectively encode information from both worlds. There are also many training corpora covering this field already. Here is a quite recent paper on this: https://www.researchgate.net/publication/354617904_What_Vision-Language_Models_See'_when_they_See_Scenes

",49892,,,,,9/20/2021 12:00,,,,0,,,,CC BY-SA 4.0 31769,2,,31753,9/20/2021 12:56,,0,,"

Note: This is always going to be an estimate until you actually run the experiment. ML is not always predictable.

If order truly does not matter, then I think it will be better to design a network architecture that automatically ignores order, instead of using one that cares about order and then training it to ignore order. If nothing else, less training data will be needed since you don't need to train it on permutations of the input - similar to why CNNs are useful for image recognition.

One network architecture could be to have an "input processing block" (a group of layers) and an "output processing block". First apply the input processing block to each input. Then add together all the outputs of the input processing blocks. Because addition is insensitive to order, this step completely discards any information about the order. Finally, apply the output processing block to the sum of those and the output from that block is your final output.

           output
             ^
             |
           +--+
           |NN| (different NN to the one below)
           +--+
             ^
             |
+---------------------------+
| add all together          |
+---------------------------+
 ^    ^    ^    ^    ^    ^
 |    |    |    |    |    |
+--+ +--+ +--+ +--+ +--+ +--+
|NN| |NN| |NN| |NN| |NN| |NN| (same NN 6 times)
+--+ +--+ +--+ +--+ +--+ +--+
 ^    ^    ^    ^    ^    ^
 |    |    |    |    |    |
 C    E    D    B    F    A

This is just one idea. You are not limited to addition; you can use any commutative operation in the middle. Even something like an attention layer could be used.
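A minimal Keras sketch of this idea for the scenario in the question, assuming the 500 names are encoded as integer ids and 6 are supplied per example (all layer sizes are arbitrary):

import tensorflow as tf

NUM_NAMES, NUM_SLOTS, EMBED_DIM = 500, 6, 32   # hypothetical sizes

inputs = tf.keras.Input(shape=(NUM_SLOTS,), dtype="int32")      # 6 name ids
embedded = tf.keras.layers.Embedding(NUM_NAMES, EMBED_DIM)(inputs)

# Shared "input processing block" applied to every slot independently.
per_item = tf.keras.layers.Dense(64, activation="relu")
processed = tf.keras.layers.TimeDistributed(per_item)(embedded)

# Order-insensitive aggregation: summing discards the input order.
pooled = tf.keras.layers.Lambda(lambda x: tf.reduce_sum(x, axis=1))(processed)

# "Output processing block" mapping the pooled vector to a score in [0, 1].
hidden = tf.keras.layers.Dense(32, activation="relu")(pooled)
output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)

model = tf.keras.Model(inputs, output)
model.compile(optimizer="adam", loss="mse")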

",28406,,,,,9/20/2021 12:56,,,,0,,,,CC BY-SA 4.0 31771,2,,29906,9/20/2021 16:25,,2,,"

This should be possible but I've never seen it done in practice. Whether or not this will even actually work is unclear to me and will be highly dependent both on your training data and choice of loss. I'd take a step back and look into the literature to see if you can't find a more established approach to your problem, perhaps with RNNs. That being said, I believe the following should do what you're asking.

Consider network $N$ to be a dense neural net with $k$ layers, $N_i$ to be the $i$th layer of $N$, $L$ to be the max length of the input, and $V$ to be the number of terms in the input (the number of $v$s). To accomplish what you want in the above scenario, you can add three additional layers to $N$, $N_{k+1}$ $N_{k+2}$, and $N_{k+3}$:

$N_{k+1}$ is a simple dense layer that has $L$ neurons and takes as input the output of $N_k$. This layer can be skipped if layer $N_k$ already has $L$ neurons.

$N_{k+2}$ takes as input the output $N_{k+1}$ and takes the Hadamard product (elementwise multiplication) of it with a second input, a binary vector of length $L$ with a prefix of $V$ $1$s and suffix of $L-V$ $0$s. For example if $L=5$ and $V=2$, you would supply as the second input the vector $[1, 1, 0, 0, 0]$, which effectively "zeroes out" the third, fourth, and fifth positions.

$N_{k+3}$ is your new output layer, which also has $L$ neurons. $N$ can now be trained and, given the target data is in the proper format to achieve this, it should output results like in your question.
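A minimal Keras sketch of the three extra layers described above, with a stand-in input for the output of the existing network $N$ (all sizes are hypothetical):

import tensorflow as tf

L = 5   # hypothetical maximum output length

# Stand-in for the output of layer N_k of the existing network N.
n_k_out = tf.keras.Input(shape=(16,), name="n_k_output")

# N_{k+1}: a dense layer with L neurons.
x = tf.keras.layers.Dense(L, activation="relu")(n_k_out)

# N_{k+2}: Hadamard product with a binary mask of V ones followed by L - V zeros,
# e.g. [1, 1, 0, 0, 0] for V = 2, which zeroes out the unused positions.
mask = tf.keras.Input(shape=(L,), name="mask")
x = tf.keras.layers.Multiply()([x, mask])

# N_{k+3}: the new output layer, also with L neurons.
outputs = tf.keras.layers.Dense(L)(x)

model = tf.keras.Model(inputs=[n_k_out, mask], outputs=outputs)
model.compile(optimizer="adam", loss="mse")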

",29873,,,,,9/20/2021 16:25,,,,1,,,,CC BY-SA 4.0 31772,2,,31762,9/21/2021 0:49,,0,,"

By "bounding box", people usually literally mean the box which bounds something. So the more intuitive way to use the points is, in the first case, as the bottom-left and top-right points and, in the second case, as the bottom-left point plus the sizes. This corresponds to the logic of points on a Cartesian grid. Of course, you may sometimes have to flip the Y axis in case your task requires it. For example, some software has its origin at the top-left corner of the screen; in such cases, the meaning of the y coordinates can be reversed.
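Whichever corner convention a given dataset uses, converting between the two representations is simple arithmetic. A minimal sketch, assuming $(x_1, y_1)$ is the same reference corner in both formats and that $x_2 > x_1$ and $y_2 > y_1$:

def xyxy_to_xywh(x1, y1, x2, y2):
    # Corner-pair representation -> corner plus width/height.
    return x1, y1, x2 - x1, y2 - y1

def xywh_to_xyxy(x1, y1, w, h):
    # Corner plus width/height -> corner-pair representation.
    return x1, y1, x1 + w, y1 + h

print(xyxy_to_xywh(10, 20, 50, 80))   # -> (10, 20, 40, 60)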

",49564,,,,,9/21/2021 0:49,,,,0,,,,CC BY-SA 4.0 31773,1,,,9/21/2021 3:53,,0,111,"

I fed the same set of 1.4 million data to two different models:

  1. MLP
  2. CNN model

In both cases, I used the same parameters and hyperparameters.

The CNN is showing comparatively lower accuracy (80%) than that of the MLP (82%).

Why?

And, also, what does this experiment tell us?

Edit:

Is the data images, video or audio, or other grid-based signals (same signal repeated at multiple locations with a meaningful distance metric between them) in your case?

It is protein C-alpha distances data that has 3 classes (helix, strand, coil) and n-number of features, where n is an even number.

In fact, what is it, what was your expectation of the relative performance of the two models, and why?

I thought CNN would be more efficient and thereby would demonstrate better test/validation accuracy.

",20721,,20721,,9/23/2021 20:53,9/23/2021 20:54,Why is the validation accuracy lower in case of CNN?,,1,6,,,,CC BY-SA 4.0 31774,2,,31758,9/21/2021 3:59,,0,,"

We have a vector $w^*$ that separates all of the data points. This implies that, if the correct classification of a point is $-1$ (i.e. $y_n = -1$), then $w^{*T}x_n$ is also negative, and, if $y_n$ is positive, then $w^{*T}x_n$ is positive. This is because we've stated that every point is correctly classified. In either case, the product $y_n(w^{*T}x_n)$ is strictly positive, and the minimum of finitely many strictly positive values is itself positive. Thus $\rho > 0$.

",49794,,,,,9/21/2021 3:59,,,,0,,,,CC BY-SA 4.0 31776,2,,31773,9/21/2021 7:12,,2,,"

To get a full understanding of your problem, one would like to know approximately what the $n$ features are.

If it is about the geometrical structure, the protein can be described by a graph, where vertices correspond to atoms and edges to the bonds between them. In that case, I would consider using a graph neural network (GNN); there is some research that has demonstrated the success of GNNs for protein prediction.

In case your data is general tabular data with features of a different nature and form (some continuous features, binary features, categorical features, whatever), there is no notion of proximity between different features.

The efficiency of CNNs is based a lot on their ability to exploit a notion of locality: they aggregate information from a neighborhood and construct a hierarchy of low-level features (over several neighboring pixels) and global features (that understand the image or signal as a whole).

Two neighboring pixels from a cat's ear have a notion of proximity and relatedness, whereas the red color and a square shape do not.

For unstructured tabular data, I would recommend starting from some tree ensembling approach (a minimal sketch follows the list below):

  • Random forest
  • Gradient boosting
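As a minimal scikit-learn sketch of trying both baselines (with synthetic stand-in data, since I do not know the exact shape of yours):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for tabular features with 3 classes.
X, y = make_classification(n_samples=1000, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)

for model in (RandomForestClassifier(n_estimators=200, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    scores = cross_val_score(model, X, y, cv=5)
    print(type(model).__name__, scores.mean())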

EDIT

Seems like what you receive as input is a matrix of pairwise distances $\rho(i, j)$ between all possible amino acid residues - a protein contact map.

I would think about it as a weighted adjacency matrix. However, it has a rather special structure.

Therefore, there is most likely no need for a generic GNN.

In the literature, recent research solves this problem with the help of CNNs.

Probably, you should tune some hyperparameters, or introduce residual connections, if there are none at the moment.

",38846,,38846,,9/22/2021 4:35,9/22/2021 4:35,,,,2,,,,CC BY-SA 4.0 31778,1,,,9/21/2021 11:51,,2,369,"

I am using the model in this colab https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_generation.ipynb#scrollTo=AM2Uma_-yVIq for Shakespeare like text generation.

It looks like this

class MyModel(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim, rnn_units):
    super().__init__(self)
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    self.gru = tf.keras.layers.GRU(rnn_units,
                                   return_sequences=True,
                                   return_state=True)
    self.dense = tf.keras.layers.Dense(vocab_size)

  def call(self, inputs, states=None, return_state=False, training=False):
    x = inputs
    x = self.embedding(x, training=training)
    if states is None:
      states = self.gru.get_initial_state(x)
    x, states = self.gru(x, initial_state=states, training=training)
    x = self.dense(x, training=training)

    if return_state:
      return x, states
    else:
      return x

I am looking for strategies to improve the model, under the hypothesis that there is a reliable way to assess the goodness of my model. How could adding one or more GRU layers improve the model, and how should I choose, e.g., the number of rnn_units and the number of layers for such a stacked model?

For instance, this gives extremely bad results

class MyModel(tf.keras.Model):
  def __init__(self, vocab_size, embedding_dim, rnn_units):
    super().__init__(self)
    self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
    self.gru = tf.keras.layers.GRU(rnn_units,
                                   return_sequences=True,
                                   return_state=True)
    self.gru_2 = tf.keras.layers.GRU(rnn_units,
                                   return_sequences=True,
                                   return_state=True)
    self.dense = tf.keras.layers.Dense(vocab_size)

  def call(self, inputs, states=None, return_state=False, training=False):
    x = inputs
    x = self.embedding(x, training=training)
    if states is None:
      states = self.gru.get_initial_state(x)
    x, states = self.gru(x, initial_state=states, training=training)
    if states is None:
      states = self.gru_2.get_initial_state(x)
    x, states = self.gru_2(x, initial_state=states, training=training)
    x = self.dense(x, training=training)

    if return_state:
      return x, states
    else:
        return x
",49793,,49793,,9/21/2021 20:09,9/5/2022 22:20,Multiple GRU layers to improve a text generation,,1,3,,,,CC BY-SA 4.0 31783,2,,31744,9/21/2021 15:06,,4,,"

Ultimately, a policy must be such that it is possible for an agent to execute it.

If the policy depends on the state, the implicit assumption is that the agent has knowledge of the state and can therefore choose its actions accordingly. This is the common case of a MDP as an underlying framework for RL.

If the state is not known to the agent, it may instead perceive observations that have some relation to the state (although they might not fully reveal the state). Then, one can condition policies on the last received observation.

However, it is useful to note that conditioning on the last observation in a partially observable setting is in general not sufficient for acting optimally. There are cases where one may need to remember a longer history of observations to decide which action is best. In general, acting optimally in such a partially observable setting requires that the policy is a function of the complete history of past actions taken and observations perceived. The underlying framework is then the partially observable MDP, or POMDP.

",45529,,,,,9/21/2021 15:06,,,,2,,,,CC BY-SA 4.0 31784,1,,,9/21/2021 17:30,,-1,88,"

TL;DR
Would providing AI with the capability of experiencing something as close as possible to the subjective human experience, and thereby acquiring empathy in the process, be a solution, or contribute to a solution, that seeks to prevent AI from causing the same kind of harm we ourselves as a species have caused to each other in the past, and continue to do so, both to us humans and also to itself and/or to other versions of AI?

LONG VERSION:
I was thinking about a question I heard being asked on a podcast where the topic was AI, which was something like: does AI need to have a body in order to understand what it is or how it feels to have a body? The person being interviewed said no, AI can understand it without having a body.

My perception is that a lot of what goes on in AI development at the moment is about AI emulating and simulating human behavior in a way, at the external level, that is by observing human behavior and processing it, looking for patterns etc: speech, motion, the human DNA etc.

I was thinking also that in order for a human being to relate and engage with society and with himself in a constructive, healthy way, and with minimal damage, it will need to possess a certain level of empathy. That is to relate to others and possess the ability to somehow feel what the other is feeling.

Now feeling is not an objective state or something you can observe. Maybe you can measure by observing the human behavior that is a result of feeling but the thing in itself can't be measured or captured and transmitted to AI in a way it can learn from human feeling. One can argue it can be done by studying the brain activity at the moment feeling is occurring. But the subjective experience of experiencing feeling and then being able to recognize that experience in others, understanding what the other is feeling because you have felt the same, that is a subjective occurrence.

It is one thing to understand the human body by observation and study of its patterns; it's something different to be locked inside one and experience its pains and pleasures.

I am left wondering how can AI understand the human condition if it will not experience what it is to be a human because it is built around the material world, built around what can be objectively observed and measured.

The closest thing there is in human experience to an AI in this regard could be a human who suffers from some condition which makes him/her unable to experience empathy. I was about to use the word psychopathy, but, being a loaded word, it might not be the best choice here. Suffice it to say that, for humans who suffer from some kind of psychiatric condition, or a trauma-based experience, which causes them to lack the capability to feel empathy towards others, it might, or might not, be easier to later engage in actions which can cause harm to others and/or themselves.

We have cases in human history that might be used as an example. In the past, countless human beings engaged in actions of such a nature that I can only conclude these people lacked any empathy, or were placed in a situation where empathy was drained out of them. For example, in war, soldiers and the whole country will usually go through a phase of propaganda which consists of dehumanizing the enemy. I believe this is done in order to eliminate any sense of empathy and humanity in the population and to build up the necessary will, energy and determination for the killing of the enemy.

The solution to this question might be to create in AI the capability of experiencing subjectively what it is to be human, or what it is to have a body, or come up with something that makes it possible for AI to experience something similar to human empathy. One thing comes to mind which is the merge of AI and the human mind. But this might be another topic altogether.

It has been argued in some places I remember, even in some science fiction, that the main problem behind destructive human behavior is actually the ability to feel emotion, and that the solution might just be to eliminate or diminish that part of ourselves. I really don't agree with this view, and it seems to me that at the moment our society does engage in and promote solutions of this type (drugs such as some anti-depressants, for example) instead of going to the root of the problem, which in many situations is trauma-based. I really hope we don't go this way. I believe that, if we have a future, the solution is to find ways of developing and cultivating empathy.

",49912,,11539,,8/26/2022 6:28,8/26/2022 6:28,"Would empathy in AI be a reliable tool/capacity, or contribute to a solution to avoid harm done to humans or to other versions of AI?",,1,10,,,,CC BY-SA 4.0 31787,1,,,9/22/2021 1:07,,5,2198,"

The depth of a neural network is equal to the total number of layers in the neural network (excluding the input layer by convention). A neural network with "many layers" is called a deep neural network.

On the other hand, the width is the name of a property of a layer in a neural network: it is equal to the number of neurons in that particular layer. So, it may be apt to use the phrase "the width of a layer in a neural network".

But, is it valid to use the phrase "width of a neural network"?

I got this doubt because the phrase "wide neural network" is widely used. The phrase gives the impression that the width is a property of a neural network. So, I am wondering whether the width of a neural network has a definition. For example, say, the width of a neural network is the number of neurons in the widest layer of that neural network.

",18758,,2444,,5/22/2022 21:33,5/24/2022 16:27,Is there a widely accepted definition of the width of a neural network?,,1,1,,,,CC BY-SA 4.0 31791,1,,,9/22/2021 7:05,,0,66,"

I have used the Markov Clustering Algorithm (MCL) to cluster tweets based on their similarity. However, I got too many clusters, and most of the clusters contain only one tweet. Any suggestions on how to reduce the number of clusters?

",43279,,2444,,9/22/2021 12:28,11/18/2022 7:07,How to reduce the number of clusters produced by the Markov Clustering Algorithm?,,2,0,,,,CC BY-SA 4.0 31792,2,,31791,9/22/2021 7:40,,0,,"

Depending on the implementation you're using, you can adjust the granularity, which will influence how many clusters you will get.

See this description of MCL.

",2193,,,,,9/22/2021 7:40,,,,2,,,,CC BY-SA 4.0 31795,2,,25747,9/22/2021 10:52,,1,,"

We have deployed one project in the real world that uses offline RL algorithms. Evaluating the performance of a policy is indeed a very tricky problem. Unfortunately, most existing OPE methods are not really mature enough for many practical problems, especially when evaluating relatively complex tasks and policies. The final solution we used in the end is actually a combined approach:

  • Train multiple policies with different seeds and initial hyperparameters. Compared with online RL algorithms, most existing offline RL policy learning methods, even the most performant ones, such as CQL or Fisher-BRC, have a very large policy variance during training. The reasons could be an inability to generalize well to unseen data (there is still a severe distributional shift during evaluation) as well as training instability. Training multiple policies is a must-do step.
  • The policies should be trained until the Q-values have converged/reached a plateau. Training offline RL for too many steps usually does not lead to good performance, so training until the Q-values converge is the best practice one can follow in most cases. Do not rely too much on OPE, as most existing methods do not perform well right now.
  • For OPE, the only method that works for our project is actually simple fitted Q evaluation (FQE), which produces relatively reliable policy evaluations. This method is not perfect, but it is relatively stable and can help rule out policies with low and falsely learned Q-values. It is helpful for filtering out some of the bad policies, but it is not guaranteed to find the best policy. Among the other OPE methods, importance-sampling-based methods are completely unusable due to their large variance; doubly robust methods still involve importance weights and hence still suffer from the high-variance and inaccuracy issues; marginalized importance sampling (MIS) based methods theoretically have lower variance (e.g. DualDICE and other DICE-family methods), but in our experiments they were not very stable and also hard to train. The only method we found reasonable was FQE, but it can only help you filter out some of the truly bad policies.
  • The final approach we use for policy selection is actually as follows. Train multiple policies using offline RL. Fit an ensemble dynamics/reward model using the offline data, and roll out a few steps with the trained policies. If a policy leads to unreasonable states (based on domain knowledge) or a reward drop in most of the dynamics models, this policy is removed from the candidate set. Run FQE for all the policies in the candidate set, and filter out policies with very low Q-values. The above two steps will help you remove 50-80% of the not-so-good policies. The resulting policies are then deployed and tested in a real-world environment. Although the above process cannot guarantee the best policy, it helps narrow down the set of policies that need to be tested in the real world (unfortunately, this is unavoidable given the current capability of offline RL algorithms and off-policy evaluation methods).
  • The following paper actually conducted a series of empirical studies on multiple OPE methods, which might be helpful: "Voloshin C, Le HM, Jiang N, Yue Y. Empirical study of off-policy policy evaluation for reinforcement learning. 2019."
",49925,,,,,9/22/2021 10:52,,,,0,,,,CC BY-SA 4.0 31796,1,,,9/22/2021 10:54,,1,110,"

I applied for a Ph.D. in AI. My advisor told me that my thesis is about safe applications of deep RL algorithms in healthcare, so I decided that my first paper would be a comparison of deep RL algorithms in terms of their inherent safety. However, after lots of research, I could not find an answer to my question: how can deep RL algorithms be measured in terms of safety?

",49798,,,,,9/22/2021 13:23,How to measure Deep RL algorithms in terms of safety?,,2,0,,,,CC BY-SA 4.0 31797,2,,31796,9/22/2021 11:32,,1,,"

I think you should first start with the definition of software safety in the health domain. For example, you could start with the Therac-25 accident. Then look at the current scientific articles and standards about software safety in the medical domain. Then think about how your algorithm will be tested.

You are thinking of Deep RL algorithms as a black box, but they are software in the end. If Deep RL algorithms are to be used in hospitals, they will have to be tested. The benchmarks, conditions and restrictions that apply to normal software must apply to RL algorithms too.

",4300,,,,,9/22/2021 11:32,,,,2,,,,CC BY-SA 4.0 31798,1,31805,,9/22/2021 12:04,,8,1147,"

Since I haven't found any good training data for my university project, I want to use pictures and videos from public Instagram profiles. Am I allowed to do that?

",43632,,2444,,9/23/2021 0:29,9/23/2021 11:30,Is it okay to use publicly available Instagram videos to train an AI?,,3,2,,,,CC BY-SA 4.0 31800,2,,31796,9/22/2021 13:23,,1,,"

There is this paper talking specifically about the safety of ML/DL algorithms, but in industrial applications. Since the algorithm is purely a learned function, you have to define safety in terms of the application you are using it for, for example diagnosis or surgery.

",40072,,,,,9/22/2021 13:23,,,,0,,,,CC BY-SA 4.0 31801,2,,31784,9/22/2021 13:25,,0,,"

SHORT ANSWER: Yes. Empathy could be a good emotion for an AI, though not the best one for avoiding harm to people. Could it contribute? Yes.

ANSWER ABOUT PROVIDING HUMAN EXPERIENCE TO AN AI:

For humans, empathy is not the most important thing for preventing people from hurting each other. We may need more, namely a good legal system that people believe in (politically, religiously or culturally): a system based on laws that makes people think twice before committing a crime, a system of laws based on punishment if a crime is committed, so that people are afraid of committing crimes, genocides and damage to society and other people. Under such a system, an AI would have more to lose than to gain.

I think empathy is not enough: the AI needs laws, just like people do. If I do this, then that happens to me. A good example: if an AI harms people, it will eventually be turned off, just as we do with criminals through the death penalty or life imprisonment. Asimov's laws are a good starting point, but we need more than that.

Providing AI with "human emotions": we may get more problems. For example, the AI may develop strong emotions such as hatred, resentment, envy, fear and a desire for revenge, and eventually start a war; these are the main causes of the Holocaust, the Rwandan genocide, and a lot of conflicts in the world. We are a violent species; if we have to create an AI, it will have to be better than us, and these emotions are useless, so you have to eliminate any emotion that leads to conflict.

Here is some further reading:

The amygdala — a part of the brain involved in fear, aggression and social interactions — is implicated in crime.

",30751,,27229,,9/23/2021 2:13,9/23/2021 2:13,,,,3,,,,CC BY-SA 4.0 31802,1,,,9/22/2021 18:39,,1,65,"

I notice that most Deep Reinforcement Learning (DRL) works focus on Markov Decision Process (MDP) with an infinite time horizon.

Are there any algorithms that work well on finite MDP and non-trivial terminal reward?

My definition of a non-trivial terminal reward is that the reward function at the terminal time is different from the one at non-terminal timesteps. Many environments or games fall into this category. For example, many games run on $[0, T]$, and the total reward is the sum of the accumulated rewards plus a final bonus.

",45021,,2444,,12/22/2021 12:33,12/22/2021 12:33,Are there any deep RL algorithms that work well on finite MDPs and non-trivial terminal rewards?,,0,0,,,,CC BY-SA 4.0 31804,2,,31798,9/22/2021 23:59,,3,,"

Disclaimer that every attorney will give unless formally engaged: This does not constitute formal legal advice.

  • This data was published with the expectation of public view

Viewing that data is a form of utilization—taking in information which may be used to make a decision, or simply as diversion (here, consumption of entertainment media by humans).

If any observer can input that data, you can surely use that data for analysis by your learning algorithm.

  1. It's research
  2. It's not for profit
  3. There is no infringement

Selling that dataset might constitute infringement, in that copyright is inherent to natural persons when they create novel content in any memory medium (paper, film, clay tablets, high-level representations of machine instructions, etc.)¹

Re-publishing that data without permission is definitely infringement, except in cases of fair use.

https://en.wikipedia.org/wiki/Fair_use#File_sharing

We in the AI community are on much firmer ground than file sharers, documentarians, internet publishers, and other professional communities because all we are doing is accessing a public dataset for the purpose of scientific research.

The internet itself constitutes a database,² freely available except in the case of firewalls.

My feeling is that these public databases (instagram, et al) are meant to be processed, analyzed, and utilized by any networked users, just as a mathematical formula might.

The internet and all servers and nodes are part of a computational process. Compiling into a dataset is just an extension of that computational process. That data is reproduced and mirrored locally when it is put into packets and transmitted to any system requesting the data.

  • This content is all in the public domain

They may not have a creative commons statement, but, my feeling is, in this case, so long as you don't re-publish for profit, you're not infringing—there is no financial harm, thus no damages³

Even if an Instagram user believes you are infringing, they must first engage an attorney to send you a cease-and-desist letter. So must Instagram.

In that likely instance, simply comply or challenge by refusing.

  • Sociologists use datasets like this for research, and I haven't heard of any sociologists being sued for studying Facebook!

The analysis of those datasets, even by sociologists, requires some form of automated analytics, but which we can mean intelligence.


[1] I'd have to research further to determine this b/c many are surely doing it or have tried. But not your question, so will leave unanswered.

[2] Web pages and websites are a form of database, typically with open read access to any user, human or algorithmic. The internet is a structured database of structured data elements (pages and sites).

[3] I'm taking some graduate IT courses now and my sense is the legal advice schools are being given is to require citation even where citation is not necessary because it's fuzzy. It creates a lot of unnecessary citations of rewordings of common knowledge, but they are choosing strict minimax (paranoia). But this may be b/c non-traditional programs are teaching students to google then reword summaries to avoid detection by current gen plagiarism checkers, and thereby successfully simulate research, even where all the content is generic.

The NFL says specifically "not reproduce without express consent of the NFL", but that's broadcast/satellite/cable, which is robustly regulated.

Damages are very hard to prove, and it's a very expensive proposition, with no guarantee of success. It's a losing proposition for Instagram no matter how you slice it. (These are the mechanics of law.)

",1671,,1671,,9/23/2021 1:24,9/23/2021 1:24,,,,2,,,,CC BY-SA 4.0 31805,2,,31798,9/23/2021 0:49,,8,,"

Under US copyright law, this is probably fair use

...but beware of memorization. You may run into more trouble if the AI outputs things very similar to the original work.

Also, consult a lawyer to help you apply the law to your specific situation. This is just information on general legal principles, not any specific situation, and also I'm not a lawyer.


First of all, the vast majority of images on Instagram are almost certainly subject to copyright protection, as any creative work is automatically subject to such protection (with exceptions for things like government works). You'd either need a license to use it, or an exception such as fair use.

Fair use is a four-factor balancing test

It requires consideration of four factors by a judge:

  1. the purpose and character of your use
  2. the nature of the copyrighted work
  3. the amount and substantiality of the portion taken
  4. the effect of the use upon the potential market.

Let's go through them for this.

The purpose and character of your use

This is probably best known as whether the work is transformative. Exactly how transformative it is depends on exact what type of AI you're training (an AI to do object recognition is probably more transformative than an AI that outputs Instagram-esque photos, for instance), but I'd expect that this would generally lean in favor of the AI trainer.

The nature of the copyrighted work

It's a creative work, not a factual work. Probably leans against the AI trainer. It would lean further against the AI trainer if the works were unpublished rather than publicly viewable on Instagram.

The amount and substantiality of the portion taken

This one's complicated. In some sense, you've used the whole work to train the AI. In another sense, you've only taken some general information about what's in the work. This may also be affected by how much is reproduced from the work by the AI. Probably leans in favor of the AI trainer.

The effect of the use upon the potential market

If you're using the AI to copy someone's artistic style and compete with them in that manner, that might lean against you. Otherwise, it seems unlikely that training an AI with Instagram data would affect the market for the original images. Probably leans in favor of the AI trainer.

Overall, these four factors lean in favor of using such data for training AI/ML algorithms.


Postscript: there is also a relevant legal case, but as it involves my employer, I won't be commenting on it beyond providing that link.

",45340,,45340,,9/23/2021 1:05,9/23/2021 1:05,,,,1,,,,CC BY-SA 4.0 31806,1,,,9/23/2021 1:28,,0,52,"

Consider the following two excerpts from the research paper titled Densely Connected Convolutional Networks by Gao Huang et al.

#1: From abstract

Recent work has shown that convolutional networks can be substantially deeper, more accurate, and efficient to train if they contain shorter connections between layers close to the input and those close to the output.

#2: From discussion

One explanation for the improved accuracy of dense convolutional networks may be that individual layers receive additional supervision from the loss function through the shorter connections. One can interpret DenseNets to perform a kind of “deep supervision”.

Both excerpts mention a type of connection called shorter connections, especially to the layers that are close to the input and output layers of the deep convolutional neural network. What is meant by shorter connections here?

",18758,,18758,,9/23/2021 23:49,10/19/2022 2:57,"What is meant by ""shorter connections"" in the case of deep convolutional neural networks?",,2,0,,,,CC BY-SA 4.0 31807,2,,31806,9/23/2021 4:27,,0,,"

Shorter connections are nothing but concatenations of output tensors along the channel axis. Having shorter connections between layers helps gradients flow through the network more easily and hence leads to better training.
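A minimal tf.keras sketch of this idea (a DenseNet-style block with hypothetical sizes), where each layer is concatenated with all previous outputs along the channel axis, giving every layer a direct, shorter path to the loss:

import tensorflow as tf

def dense_block(x, num_layers=3, growth_rate=16):
    # Each new layer sees the concatenation of all earlier outputs,
    # so gradients can reach early layers through short, direct connections.
    for _ in range(num_layers):
        y = tf.keras.layers.Conv2D(growth_rate, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.Concatenate(axis=-1)([x, y])
    return x

inputs = tf.keras.Input(shape=(32, 32, 3))
outputs = dense_block(inputs)
model = tf.keras.Model(inputs, outputs)
model.summary()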

",46243,,,,,9/23/2021 4:27,,,,0,,,,CC BY-SA 4.0 31812,2,,31798,9/23/2021 11:30,,2,,"

I'd argue that the legality depends strongly on how you access the data, and that automating this may put you in breach of facebook/instagram's terms of use for their data.

It's not that accessing a set of images wouldn't count as fair use, it's that automatically scraping them probably breaches the terms of service.

The most likely consequence, though, is that something clever on instagram's servers blocks you, and, in fact, this does happen when you try things like this.

",49946,,,,,9/23/2021 11:30,,,,0,,,,CC BY-SA 4.0 31813,1,,,9/23/2021 13:18,,0,121,"

Consider the following decision making problem. We have a controller that selects locations from a grid of coordinates and captures an image (observation $o_t$) with a camera at each location (action $a_t$). We try to find an optimal sequence of locations for a specific goal. This decision making problem can be formalized as a Partially Observable Markov Decision Process (POMDP). Here, we seek an optimal stochastic policy $\pi^{*}_{\theta}(a_t|h_t)$ that maps the history $h_t= \langle o_1, a_1, ..., o_{t-1},a_{t-1},o_t \rangle$ of actions and observations up to the current time $t$ to action probabilities. The history $h_t$ can be summarized by the hidden state of a RNN and we can use a policy gradient method, e.g. REINFORCE, to update the policy parameters $\theta$.

Suppose now that we want to select multiple locations, i.e. actions, simultaneously. According to my understanding, we could formalize the problem as a Mutliagent POMDP (MPOMDP) [1]. In this formalism, we would replace the single action of the previous problem by joint actions $\vec{a}_t = \langle a^1_t, ..., a^N_t \rangle$, the single observation by joint observations $\vec{o}_t = \langle o^1_t, ..., o^N_t \rangle$ and the history by $h_t= \langle \vec{o}_1, \vec{a}_1, ..., \vec{o}_{t-1},\vec{a}_{t-1},\vec{o}_t \rangle$, where $N$ is the number of agents. We would now try to find an optimal joint policy $\vec{\pi}^{*} = \langle \pi^{1*}, ...,\pi^{N*} \rangle$ consisting of sub-policies $\pi_{\theta_n}(a^n_t|h_t)$ that map the history $h_t$ to the action probability of each agent $n$. This would mean that the RNN would have $N$ output nodes and each sub-policy $\pi^n$ would be parametrized by $\theta_n$, a sub-set of weights of the output layer [2]. Would it be correct to assume that an optimal or near-optimal joint policy $\vec{\pi}^{*}$ can be obtained by simply applying the policy gradient method used above to each sub-policy $\pi^n$?

I would be curious to hear what you think about the MPOMDP formalism applied to the latter decision making problem or whether you would suggest something else.

[1] Oliehoek, Frans A., et al. "A concise introduction to decentralized POMDPs." Springer, 2016.

[2] Gupta, Jayesh K., et al. "Cooperative multi-agent control using deep reinforcement learning." International Conference on Autonomous Agents and Multiagent Systems. Springer, Cham, 2017.

",49948,,,,,9/24/2021 11:04,Can a Reinforcement Learning problem with multiple simultaneous actions be formalized as a Multiagent Partially Observable Markov Decision Process?,,1,0,,,,CC BY-SA 4.0 31815,2,,31787,9/23/2021 14:54,,5,,"

The width of a neural network layer is an agreed upon term.

According to Lou et al., in the paper The Expressive Power of Neural Networks: A View from the Width (page 4), the width of a neural network is the width of the widest layer of the neural network.

The architecture of neural networks often specified by the width and the depth of the networks. The depth $h$ of a network is defined as its number of layers (including output layer but excluding input layer); while the width $d_m$ of a network is defined to be the maximal number of nodes in a layer
(emphasis added)

So, I would caution you to be careful with how you use the phrase "the width of a neural network", due to interpretability and scale, and the fact that neural networks often contain layers with varying numbers of neurons, depending on the layer.

From this Wikipedia page on "Large width limits of neural networks":

The number of neurons in a layer is called the layer width.

From a nice machine learning resource page

Finally, there are terms used to describe the shape and capability of a neural network; for example:

  • Size: The number of nodes in the model.
  • Width: The number of nodes in a specific layer.

A node [is] also called a neuron or Perceptron

",49894,,145,,5/24/2022 16:27,5/24/2022 16:27,,,,1,,,,CC BY-SA 4.0 31819,1,31844,,9/23/2021 18:17,,2,278,"

I am implementing an RL application in an environment with illegal moves. For handling the illegal moves, I am currently just picking an action as the maximum Q-value from the set of legal Q-values.

So, it is clear that when deciding on actions we only pick from a subset of valid Q-values, but, when using the Q-learning algorithm, do we also want to consider the subset of invalid actions for the $\max\limits_{a}Q(s_{t+1},a)$?

My gut tells me that we consider all actions for the max function, purely based on the lack of documentation on the subject, but only considering the subset of legal actions makes more sense to me. I'm having a hard time finding any reliable sources addressing this topic. Any advice/direction would be greatly appreciated.

",43651,,43651,,9/27/2021 6:12,9/27/2021 14:40,How to handle invalid actions for next state in Q-learning loss,,1,2,,,,CC BY-SA 4.0 31821,1,32451,,9/24/2021 0:52,,1,129,"

I am asking this question from the mathematical perspective of the vanishing and exploding gradient problems that we face generally during training deep neural networks.

The chain rule of differentiation for a composite function can be expressed roughly as follows:

$$\dfrac{d}{dx} (f_1(f_2(f_3(\cdots f_n(x))))) = \dfrac{df_n}{dx} \dfrac{df_{n-1}}{dy_1} \dfrac{df_{n-2}}{dy_2} \cdots \dfrac{df_1}{dy_{n-1}}$$

We know that multilayer perceptrons are composite functions of layer functions. So, if layers are increasing, then the gradient terms to multiply will increase on the right-hand side.

If all the gradient terms on the right-hand side are between 0 and 1, then the product will become smaller and smaller as the number of layers increases. This phenomenon is called the vanishing gradient problem. Similarly, if all the gradient terms on the right-hand side are greater than 1, then the product will become larger and larger. This phenomenon is called the exploding gradient problem.
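As a tiny numeric illustration of this (the per-layer factors below are made up):

import numpy as np

factors_small = np.full(50, 0.9)   # 50 layers, each gradient factor < 1
factors_large = np.full(50, 1.1)   # 50 layers, each gradient factor > 1

print(np.prod(factors_small))      # ~5.2e-3: the product vanishes
print(np.prod(factors_large))      # ~1.2e+2: the product explodes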

Since it is customary to use the same activation function across all the layers in deep neural networks, all the gradients on the right-hand side behave in a similar manner, i.e. either most of the gradient terms fall between 0 and 1 or most are greater than 1, which causes either the vanishing gradient or the exploding gradient problem.

Is my mathematical interpretation of the vanishing and exploding gradient problems correct? Am I missing anything?

",18758,,18758,,4/1/2022 5:58,4/1/2022 5:58,"Mathematically speaking, Is it only the product operation used in the chain rule causing the vanishing or exploding gradient?",,1,1,,,,CC BY-SA 4.0 31825,2,,31813,9/24/2021 11:04,,1,,"

I guess it depends on what the goal is. If the goal is a general reward function, this formulation as an MPOMDP could make sense. One way to think about this, is as a way of modeling a general (centralized) POMDP with factored actions and observation spaces.

However, it seems that what you are describing might be an active perception problem, where the goal is to minimize uncertainty? In that case, there could be better formulations. I personally did some work in this direction with Yash Satsangi and others. E.g., these papers might be of interest:

Yash Satsangi, Shimon Whiteson, Frans A. Oliehoek, and Henri Bouma. Real-Time Resource Allocation for Tracking Systems. In Proceedings of the Thirty-Third Conference on Uncertainty in Artificial Intelligence (UAI), August 2017.

Yash Satsangi, Shimon Whiteson, Frans A. Oliehoek, and Matthijs T. J. Spaan. Exploiting submodular value functions for scaling up active perception. Autonomous Robots, 42(2):209–233, February 2018.

(and there is a large literature on this topic as well - look at the references in the papers).

Hope that is useful.

",49962,,,,,9/24/2021 11:04,,,,0,,,,CC BY-SA 4.0 31826,1,,,9/24/2021 13:16,,0,30,"

Take a RNN network fed with Shakespeare and generating Shakespeare-like text.

Once a model seems mathematically fine, as can be assessed by observing its loss and accuracy over training epochs, how can one assess and refine the goodness of the result ?

Only human eyes can judge the readability of a text, its creativity, its grammatical correctness, etc.

QUESTION : Which systematic approach can be used to refine a generative model (text) ?

",49793,,49793,,9/25/2021 3:18,9/25/2021 3:18,How to assess the goodness of a text generation algorithm,,0,3,,,,CC BY-SA 4.0 31828,2,,30469,9/24/2021 18:50,,1,,"

Should the constraints reward also be normalized for all 10 constraints?

You should choose a "natural" balance between rewards where possible.

If you have many separate goals to take account of, ideally you should convert them all into some comparable metric that is meaningful to the success of the agent, such as a financial gain/loss, an energy gain/loss, or similar. You can normalise them after this, but the ratios between the values should be kept the same.

This is not always possible with constraints.

For strict constraints, you should ideally ensure that breaking any constraint will score worse than not breaking the constraint but scoring very badly at everything else. If your system is gaining positive rewards from operating within bounds (and it seems that it is from the description), then one simple way to achieve this is to terminate the episode early and score $0$ for breaking the constraint. If the constraint relates to an ongoing state measurement this may be the best option. That is because the agent learning how to "escape" from an unreachable state may not be useful.

For soft constraints, you need to decide a relative cost. For example your constraint:

the agent should not start and stop heating up frequently (like around four starts of the device daily is desirable)

Looks very much like a soft constraint - using words like "around" and "desirable". For something like that, I would probably allow four starts, then add a largish penalty for each start beyond that. What that value is should be related back to the natural balance between rewards and to why you want this constraint.

As an aside, in order for the agent to learn about this constraint, you must add the number of starts so far for each device to the state. This is true for all constraints - there must be data inside the current state that the agent could use to predict that the constraint will come into play. It doesn't need to know the limit you are applying, but does need to know the current value of any variables used to decide whether a limit should be enforced.

I think you will also want to store what the last action was, or which device is currently on, so that the agent knows to keep it on in order not to waste one of its four uses of the device per day.

Is it also possible to tell the Reinforcement Learning agent some rules directly without any constraints?

Yes for absolute rules that should prevent the agent taking a specific action in the first place, and that will also be enforced in any production system. For example this rule:

I have two storage systems, and the agent is only allowed to heat up 1 for every time-slot.

Can be easily expressed by having three actions:

  • $a_0$ no heating
  • $a_1$ heat device 1
  • $a_2$ heat device 2

Although you may want:

  • $a_0$ do nothing
  • $a_1$ start device 1 (and stop device 2 if running)
  • $a_2$ start device 2 (and stop device 1 if running)
  • $a_3$ stop whichever device is running

This second list may help prevent the agent from flip-flopping between devices during the early stages of learning, and speed things up a little.

Other action sets could work too. The important thing is that there is nothing to gain by presenting the agent with action choices that you are ruling out as not possible immediately. It would mean adding more negative rewards and constraints to what already looks like a complex problem.
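
As an illustration of the second action set and of tracking starts in the state (a minimal sketch with made-up state fields and names, not something taken from your setup), the environment-side logic could look like this:

def apply_action(state, action):
    # state["running"] is None, 1 or 2; state["starts_today"] maps device -> starts so far
    if action == 1 and state["running"] != 1:
        state["running"] = 1
        state["starts_today"][1] += 1   # the agent must see this counter in its observation
    elif action == 2 and state["running"] != 2:
        state["running"] = 2
        state["starts_today"][2] += 1
    elif action == 3:
        state["running"] = None
    # action 0: do nothing
    return state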

",1847,,1847,,9/24/2021 18:56,9/24/2021 18:56,,,,11,,,,CC BY-SA 4.0 31829,2,,23117,9/24/2021 19:41,,0,,"

You are kind of right, but not necessarily. Domain randomization: when you widen the range of your training data parameters to make your model more generalized. This can be done for any purpose (even if you are not doing domain adaptation).

Domain adaptation: when you train your model on data from a certain domain (let's say to detect cars) and then test it on data from another domain (let's say for detecting trucks).

  • The data you used earlier helped "pre-train" your model.
  • You may "fine-tune" this model with whatever small amounts of data you have available from the test domain (trucks in our case).
  • You may also use the concepts of domain randomization (make sure you include different varieties of car images) during your training.

These things are optional, and do not always HAVE to go together. Hope this helps!

",49974,,,,,9/24/2021 19:41,,,,0,,,,CC BY-SA 4.0 31830,1,,,9/25/2021 0:47,,2,234,"

Convolutional neural networks are widely used in image-related tasks in artificial intelligence.

The input of a convolutional neural network is generally an image. The output of a convolutional neural network can also be an image. But the output of the hidden/intermediate layers of a convolutional neural network is generally a set of feature maps of the input image.

In general, the channels of an image represent the colors used. If the input and output of a convolutional neural network are RGB images, then the three channels of the input and output images are representations of the red, green, and blue colors. Is this true for feature maps also? Are the channels in feature maps also representations of colours?

I got this doubt because I remember channels as colours, and feature maps may contain an arbitrary number of channels. If those channels are also supposed to represent colours, then it is difficult for me to understand.

",18758,,,,,9/25/2021 20:09,Is it true that channels always represent colours of an image?,,1,0,,,,CC BY-SA 4.0 31831,1,,,9/25/2021 6:27,,2,76,"

I am sorry but I have to explain my question using an example, I do not know how to ask it in proper scientific terms.

Let's assume I have trained a deep learning model on classifying hand gestures, but the training and testing datasets' images were shot in only one lighting condition, and I achieved a certain accuracy, let's assume 85%. As far as I understand, adding more data of the same hand gestures but shot under different lighting should increase my model's "generalization" capabilities, right?

So the question is: if I take this model, trained in two lighting conditions, and test it only on the dataset of the first lighting condition, would that increase its accuracy (the 85%), or would this "generalization" only mean that it can now also correctly classify images with different lighting, but not increase the accuracy on the first set?

",22659,,,,,9/28/2021 8:37,How general is generalization?,,2,0,,,,CC BY-SA 4.0 31832,1,31835,,9/25/2021 10:38,,2,155,"

I have an image data set on which I am training a CNN. The data set is slightly unbalanced. So, my solution up till now was to delete some images of the majority class.

But I now realize that there are cleaner ways to deal with this. However, I haven't been able to find ways to fix unbalanced image datasets, only structured datasets.

I would like someone to guide me to fix the unbalance, other than deleting data from the majority class.

",44598,,2444,,9/26/2021 3:38,9/26/2021 3:38,How do you handle unbalanced image datasets?,,1,0,,,,CC BY-SA 4.0 31833,1,,,9/25/2021 11:05,,0,518,"

I am currently running a program with a batch size of 17 instead of batch size 32. The benchmark results are obtained at a batch size of 32 with the number of epochs 700.

Now I am running with batch size 17 with an unchanged number of epochs. So I am interested to know whether there is any relationship between the batch size and the number of epochs in general.

Do I need to increase the number of epochs? Or is it entirely dependent on the program?

",18758,,2444,,9/26/2021 3:36,10/21/2022 7:00,Is there any relationship between the batch size and the number of epochs?,,2,0,,,,CC BY-SA 4.0 31834,2,,31833,9/25/2021 11:58,,0,,"

A smaller batch size means the model is updated more often, so it takes longer to complete each epoch. Also, if the batch size is too small, each update is done without "seeing" all the data - the batch itself might not be a good representative of the dataset. So, there might be too much "wiggling", which makes it harder to reach the real minimum. Larger batches will get you "near" the minimum more quickly, as they take larger step sizes.

As you only asked about the relation between batch size and epochs, the above is the answer. But in practice, one hardly uses very large batches, as they don't get very close to the minimum for the same reason they reach its neighbourhood faster - once you are there, you want smaller steps to get very close to the minimum. Smaller batches might take longer, but once they are there, their wiggly nature becomes a strength and gets them closer to the minimum.

For a more in depth discussion and some references, see this post.

And batch sizes are usually picked to be a power of 2, so I would go for 16 instead of 17 if I were you. Check this discussion for the reason for this.

",22301,,,,,9/25/2021 11:58,,,,3,,,,CC BY-SA 4.0 31835,2,,31832,9/25/2021 13:01,,1,,"

You can always adjust class weights accordingly. I know the reference is not for image data but it shouldn't matter if you are doing classification. Here is another answer more direct to the point.
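
For example (a minimal Keras sketch, assuming integer labels y_train and an already-compiled model; the names are my own), you can compute weights from the label frequencies and pass them to fit:

import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# y_train: integer class labels of the training images
weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train), y=y_train)
class_weight = dict(enumerate(weights))   # e.g. {0: 0.7, 1: 1.8, ...}

model.fit(x_train, y_train, epochs=10, class_weight=class_weight)

The loss of each sample is then scaled by the weight of its class, so errors on the minority class count more, without deleting any images from the majority class.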

",22301,,,,,9/25/2021 13:01,,,,0,,,,CC BY-SA 4.0 31836,2,,31830,9/25/2021 15:41,,2,,"

No, channels do not have to only represent colours. It is common for them to represent other things, even without considering feature maps. For instance RGBD images, where D is a depth measurement or distance from a sensor. Or when CNNs are applied to grid-based games, such as chess or go with AlphaZero, where the input channels are information about game pieces on a board.

Mathematically, there is little to differentiate between a channel or a feature map. Both are numerical values stored in some muti-dimensional array, most often with the following assumptions:

  • All values in a single channel or single feature map represent a measurement of the same concept. That might be how much blue light was detected at a sensor at a point in space, or it may be the degree to which pixels close to that point match patterns associated with the centre of a certain type of cat's nose.

  • The values are considered co-located with other feature maps or channels within the same system, such that values at index $i,j$ (or just $i$ for 1D, or $i,j,k$ for 3D, etc.) in one channel are considered to be at the same location as values in related channels or feature maps, at least within the same layer.

You will tend to find channels used to describe inputs and outputs that can be directly visualised, whilst feature maps tend to be used to describe the more abstract pattern matching that occurs in the outputs of a CNNs hidden convolutional layers. However, the two terms can be used loosely, and sometimes interchangeably.

The feature maps within a CNN typically do not carry separate colour channels. Although it is possible to design architectures that keep colour information separate, this is very rarely used - normal CNN architectures allow mixing of all layer channels/features with each new layer, through the mechanism of having weights that connect every input channel/feature to every output channel/feature between layers.

You will sometimes see colour channel information extracted from the neural network weights of the first convolutional layer, in order to visualise what that layer is matching to. That is because the first layer's weights (and only the first layer's weights) can be interpreted as template matching to the input channel for each output feature map. This is not the same as visualising the output feature maps - whilst those maps are influenced by the input colour channels, and thus do in a general sense carry colour information, they do not measure colour intensity in the same sense as an image channel used for input.
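
For example (a minimal sketch, assuming a Keras model named model whose first layer is a Conv2D over RGB input), those first-layer templates can be pulled out and displayed directly:

import matplotlib.pyplot as plt

w = model.layers[0].get_weights()[0]        # shape (kh, kw, in_channels, n_filters)
w = (w - w.min()) / (w.max() - w.min())     # rescale to [0, 1] so it can be shown as an image
for i in range(min(16, w.shape[-1])):
    plt.subplot(4, 4, i + 1)
    plt.imshow(w[:, :, :, i])               # the RGB template matched by feature map i
    plt.axis("off")
plt.show()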

More generally, because human perception is strongly tied to RGB colour channels, and because computer displays and image formats are designed around this, whenever you see any representation of what a CNN layer is doing, you will see one of:

  • A greyscale representation of feature map values. This is the closest to "true" representation, but sometimes it is not very informative.

  • A heat map of feature map values. Using colour may help with visualisation, but it is false colour in the sense that the same colours are not actually in the feature map.

  • A representative input that would cause the feature map to activate. This can be informative about the feature map, but it is not showing what the feature map is doing directly, and the channels defined in the input are used for colour.

",1847,,1847,,9/25/2021 20:09,9/25/2021 20:09,,,,0,,,,CC BY-SA 4.0 31838,2,,31833,9/25/2021 20:46,,2,,"

The smaller the batch_size is, the larger the number of batches processed per epoch.

On one hand, since one makes more steps per epoch, one could think that fewer epochs are required to achieve the same level of accuracy.

On the other hand, a smaller batch size leads to noisier and more stochastic estimates of the gradient; therefore, convergence will most likely not be as steady.

I think it is difficult to give a definite answer about the exact relation to the number of epochs. Say, to achieve a certain level of accuracy, the use of a small batch may be more beneficial, since it allows for more exploration and is more likely to escape from local minima and saddle points; but when one reaches the approximation limit of the network and is in the vicinity of a good optimum, a large batch would descend better to the extremum.

A good strategy usually is to start with smaller batches to find wide and flat minima, which are better from the generalization point of view, and then increase the batch size for steadier convergence.

",38846,,,,,9/25/2021 20:46,,,,0,,,,CC BY-SA 4.0 31839,1,,,9/25/2021 23:23,,1,31,"

For object detection tasks I have a few minutes of video footage from a surveillance camera, converted to a sequence of images and ground truth bounding boxes for all people walking by.

Now what's the best way to split this into training, validation and test sets (80/10/10)?

  1. I could randomly select 10% for testing and 10% for validation and rest into training.
  2. The first 80% go into training, next 10% to validation, rest to testing.

The first way has the advantage of having a good distribution of different people walking by and also more varying densities and locations of people in the test set. But the disadvantage would be that for each testing image a very similar image exists in the training set.

The second way would have the advantage of the test set being more truly "never seen" during training, but at the cost of less variety.

",48641,,40434,,9/29/2021 7:10,9/29/2021 7:10,Recommended way to spilt image sequence for training/validation/testing,,0,0,,,,CC BY-SA 4.0 31842,1,,,9/26/2021 15:23,,1,117,"

I am rather new to deep learning and have some questions on performing a multi-label image classification task with Keras convolutional neural networks. Those mainly refer to evaluating Keras models performing multi-label classification tasks. I will structure this a bit to get a better overview first.

Problem Description

The underlying dataset consists of album cover images from different genres. In my case, those are electronic, rock, jazz, pop, hip-hop. So we have 5 possible classes that are not mutually exclusive. The task is to predict possible genres for a given album cover. Each album cover is of size 300px x 300px. The images are loaded into TensorFlow datasets and resized to 150px x 150px.

Model Architecture

The architecture for the model is the following.

import tensorflow as tf

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential

data_augmentation = keras.Sequential(
  [
    layers.experimental.preprocessing.RandomFlip("horizontal",
                                                 input_shape=(img_height,
                                                              img_width,
                                                              3)),
    layers.experimental.preprocessing.RandomFlip("vertical"),
    layers.experimental.preprocessing.RandomRotation(0.4),
    layers.experimental.preprocessing.RandomZoom(height_factor=(0.2, 0.6), width_factor=(0.2, 0.6))
  ]
)

def create_model(num_classes=5, augmentation_layers=None):
  model = Sequential()

  # We can pass a list of layers performing data augmentation here
  if augmentation_layers:
    # The first layer of the augmentation layers must define the input shape
    model.add(augmentation_layers)
    model.add(layers.experimental.preprocessing.Rescaling(1./255))
  else:
    model.add(layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)))

  model.add(layers.Conv2D(32, (3, 3), activation='relu'))
  model.add(layers.MaxPooling2D((2, 2)))
  model.add(layers.Conv2D(64, (3, 3), activation='relu'))
  model.add(layers.MaxPooling2D((2, 2)))
  model.add(layers.Conv2D(128, (3, 3), activation='relu'))
  model.add(layers.MaxPooling2D((2, 2)))
  model.add(layers.Conv2D(128, (3, 3), activation='relu'))
  model.add(layers.MaxPooling2D((2, 2)))
  model.add(layers.Flatten())
  model.add(layers.Dense(512, activation='relu'))

  # Use sigmoid activation function. Basically we train binary classifiers for each class by specifying binary crossentropy loss and sigmoid activation on the output layer.
  model.add(layers.Dense(num_classes, activation='sigmoid'))
  model.summary()

  return model

I'm not using the usual metrics here, like standard accuracy. In this paper I read that you cannot evaluate multi-label classification models with the usual methods. In chapter 7 (evaluation metrics), the Hamming loss and an adjusted accuracy (a variant of exact match) are presented, which I use for this model.

The hamming loss is already provided by tensorflow-addons (see here) and an implementation of the subset accuracy I found here (see here).

from tensorflow_addons.metrics import HammingLoss

hamming_loss = HammingLoss(mode="multilabel", threshold=0.5)

def subset_accuracy(y_true, y_pred):
    # From https://stackoverflow.com/questions/56739708/how-to-implement-exact-match-subset-accuracy-as-a-metric-for-keras

    threshold = tf.constant(.5, tf.float32)
    gtt_pred = tf.math.greater(y_pred, threshold)
    gtt_true = tf.math.greater(y_true, threshold)
    accuracy = tf.reduce_mean(tf.cast(tf.equal(gtt_pred, gtt_true), tf.float32), axis=-1)
    return accuracy

# Create model
model = create_model(num_classes=5, augmentation_layers=data_augmentation)

# Compile model
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=[subset_accuracy, hamming_loss])

# Fit the model
history = model.fit(training_dataset, epochs=epochs, validation_data=validation_dataset, callbacks=callbacks)

Problem with this model

When training the model, subset_accuracy and hamming_loss get stuck at some point, which looks like the following:

What could cause this behaviour? I am honestly a little bit lost right now. Could this be a case of the dying ReLU problem? Or am I using the mentioned metrics incorrectly, or is their implementation maybe wrong?

So far, I have tried different optimizers and lowering the learning rate (e.g. from 0.01 to 0.001, 0.0001, etc.), but that didn't help either.

",49997,,2444,,4/8/2022 16:44,1/3/2023 21:01,What could cause the hamming loss and subset accuracy to get stuck in a multi-label image classification problem?,,1,0,,,,CC BY-SA 4.0 31843,2,,31842,9/27/2021 2:17,,0,,"

Generally when you see metrics stagnate like this it's because the model has converged incorrectly (ex: always predicting 0, or gradients/weights have dropped to 0, etc.). Can you see if this is what's happening for your problem?

I'd expect that perhaps your model has converged to predict the majority class for each label.

",50004,,,,,9/27/2021 2:17,,,,0,,,,CC BY-SA 4.0 31844,2,,31819,9/27/2021 7:26,,1,,"

do we also want to consider the subset of invalid actions for the $\max\limits_{a}Q(s_{t+1},a)$

No.

Doing so would go against the theory behind the Bellman equation from which the update derives. The value of $r_{t+1} + \gamma \max\limits_{a'}Q(s_{t+1},a')$ needs to match to a realisable trajectory, otherwise the eventual expected values may be estimates for a different MDP than the one being learned.

For intuition, you can construct a valid MDP which would clearly give the wrong update if the update maximised over non-valid actions. For instance, in a maze game where as well as the classic N,E,S,W moves, the agent can see and pick up treasure in each location. The "pick up" (P) action is only allowed in a location that has treasure and scores $+10$ reward when successfully used, all other outcomes grant $-1$ reward to encourage the agent to act efficiently and escape the maze.

It is worth considering two broad types of Q-learning here - an approximate learner that uses a neural network (e.g. DQN), and a tabular learner that stores action values in a table.

  • The approximate learner will associate successful uses of the P action with a higher expected return, and generalise this to other states. If the P action is included in updates where there is no treasure to pick up in state $s_{t+1}$, it will incorrectly increase the expected return from all actions. Actions that move away from treasure would score similarly to actions that move towards it.

  • The tabular learner's behaviour will depend on how Q values are initialised. The table entries for non-valid actions would never be updated, so they would remain at the initialised value. If that was e.g. $0$, then it may sometimes be better than valid actions that lead to longer stretches without treasure. Which in turn means that in some locations, multiple valid actions going in different directions would look the same to the agent - future expected returns would be capped at minimum $-1$ (the immediate reward) irrespective of whether they go towards the exit or not. Some actions, that head towards real treasure, may score higher. However, when there is no treasure or exit nearby, all actions would seem the same.

In both cases, the behaviour of the agent will be compromised. It may ignore nearby treasure, or travel into a dead end or loop.

I have deliberately constructed an MDP where maximising over all actions, including non-valid ones, will cause a problem. Some MDPs will not cause a problem - for instance, the same maze MDP with one difference, where the P action is allowed even when there is no treasure, and scores -1 reward for wasting time. However, having a valid environment where the agent fails demonstrates that an agent built to maximise over non-valid actions will not be reliable in all MDPs; it would have a designed-in "bug" and fail for some use cases.
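
In practice, a simple way to respect this in code (a minimal tabular sketch; q_table, valid_mask and the hyper-parameters are my own assumed names, not from the question) is to mask out invalid actions before taking the max in the target:

import numpy as np

def q_update(q_table, s, a, r, s_next, valid_mask, alpha=0.1, gamma=0.99):
    # valid_mask(s_next) returns a boolean array with one entry per action
    q_next = np.where(valid_mask(s_next), q_table[s_next], -np.inf)
    target = r + gamma * q_next.max()            # max only over realisable actions
    q_table[s, a] += alpha * (target - q_table[s, a])

The same idea carries over to DQN-style learners by setting the values of invalid actions to a very large negative number before taking the max.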

",1847,,1847,,9/27/2021 14:40,9/27/2021 14:40,,,,1,,,,CC BY-SA 4.0 31845,1,31847,,9/27/2021 7:57,,1,219,"

In the stochastic gradient descent algorithm, the weight update happens for every training sample.

In the mini-batch gradient descent algorithm, the weight update happens for every batch of training samples.

In the batch gradient descent algorithm, the weight update happens for all samples in the training dataset.

I am confused with the procedure of training that happens in the mini-batch gradient descent algorithm. I am guessing one of the following two must be correct

  1. Passing each input individually through each layer and calculating the output. This happens for a number of training samples equal to the batch size.

  2. Passing a batch of inputs at once at each layer and collecting the batch output at each layer.

Which of the above is true in general implementations of mini-batch gradient descent algorithms to train your neural networks?

",18758,,2444,,9/30/2021 12:52,5/25/2022 0:48,"In mini-batch gradient descent, do we pass each input in the batch individually or all inputs at the same time through the layer?",,2,0,,,,CC BY-SA 4.0 31846,2,,31845,9/27/2021 8:54,,0,,"

What happens in mini-batches is not very different from the way updates are made in batch gradient descent, only the number of samples is different. In mini-batch, you process all the data in the batch, and the update happens after that. It is detailed in this video after 6:11.

",22301,,,,,9/27/2021 8:54,,,,2,,,,CC BY-SA 4.0 31847,2,,31845,9/27/2021 9:55,,1,,"

In the usual scenario, case 2 occurs. In deep learning frameworks, tensors have a special dimension (usually corresponding to axis 0) which enumerates the examples in the batch. Look, for example, at the PyTorch documentation of Conv2d or the TensorFlow documentation of Conv2d. The same is true for any layer - Linear, MultiheadAttention, RNN.

All samples from the batch are processed at once, as a single entity. Most operations process each sample from the batch independently, without combining features from the $i^{th}$ and $j^{th}$ samples: a Linear layer or a Convolution doesn't construct linear combinations of inputs corresponding to different samples. However, there is an exception - the BatchNormalization layer, which subtracts the mean and divides by the standard deviation computed over the batch.
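
For example (a minimal PyTorch sketch), a convolution receives the whole batch as one tensor and produces one output per sample in a single call:

import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
batch = torch.randn(32, 3, 64, 64)   # axis 0 enumerates the 32 samples in the batch
out = conv(batch)                    # shape (32, 16, 62, 62), each sample processed independently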

However, one may want to work with large batches in order to have a less noisy and more precise estimate of the gradient at each step, but the memory usage increases linearly with the batch size, since one has to allocate batch size times the memory required for propagating a single sample through the network. In case the computational resources do not allow for storing such a large batch in memory, one can pass parts of the large batch, or even single examples, and only then aggregate the results.

This situation corresponds to case 1 and is usually called gradient accumulation. Such functionality is implemented in the PyTorch Lightning library.

",38846,,18758,,5/25/2022 0:48,5/25/2022 0:48,,,,0,,,,CC BY-SA 4.0 31849,1,,,9/27/2021 14:29,,0,63,"

I understand that in order to add additional inputs to a CNN, e.g. in self driving, I can append the data to a flattened layer after the convolutions and before the fully connected layers.

However, a few things confuse me. In a paper the authors want to feed speed measurements into the driving network. Instead of just appending a normalized speed value, they first feed it into several FC layers. Why would they do that? What kind of features could you extract from a single real value? Could there be another reason?

(p. 3, paper 1)

Part two of my question is: In another paper, information about the turn signal is appended to a layer as one-hot encoding. The authors talk about how that didn’t work due to vanishing weights (not gradients). So they scaled weights by a constant factor. What do they mean by vanishing weights and how do I scale weights (e.g. in PyTorch)?

(p. 6, paper 2)

",50015,,50015,,9/27/2021 21:36,9/29/2021 14:02,Correctly input additional values into CNN,,2,4,,,,CC BY-SA 4.0 31850,1,31851,,9/27/2021 15:43,,1,173,"

I understand the idea of mini-batch gradient descent for neural networks in that we calculate the gradient of the loss function using one mini-batch at a time and use this gradient to adjust the parameters.

My question is: how many times do we adjust the parameters per mini-batch, i.e. how many optimisation iterations are performed on a mini-batch?

The fact that I can't find anything in the TensorFlow documentation about this to me implies the answer is just 1 iteration per mini-batch. If this assumption is correct, then how does an optimisation algorithm, like adam, work which uses past gradient information? It seems strange, since then gradients from past mini-batches are being used to minimise the loss of the current mini-batch?

",50018,,2444,,9/30/2021 12:47,9/30/2021 12:47,How many iterations of the optimisation algorithm are performed on each mini-batch in mini-batch gradient descent?,,1,0,,,,CC BY-SA 4.0 31851,2,,31850,9/27/2021 16:23,,0,,"

How many optimisation iterations are performed on a mini-batch?

Just one, as you suspected.

then how does an optimisation algorithm like adam work which uses past gradient information?

It uses the gradient estimates from each mini-batch as its input sequence.

It seems strange since then gradients from past mini-batches are being used to minimise the loss of the current mini-batch?

Each mini-batch generates a gradient which is an estimate of the true gradient of the loss function over the whole dataset. There can be a lot of randomness in this, depending on the problem and the size of the minibatch, but the expected value of the gradient for any fairly sampled minibatch is the mean gradient for the whole dataset.

As there is an update between each minibatch, the expected gradient will change, because the new parameters will generate a new loss value from a new location in the parameter space that is being searched for optimal values.

Adam is not intended to calculate a single true gradient for the whole dataset or population. If you wanted that outcome, then you could use the entire dataset for each step instead of minibatches. Instead, Adam applies some second-order effects to gradient step updates:

  • Adam does not take a direct deepest descent step, but normalises steps in each parameter against a rolling average of gradients seen so far.

  • Adam applies a form of momentum to update steps.

These manipulations of gradient data appear to work well in practice when optimising parameters over complex loss function spaces with many ridges, valleys and saddle points.
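
For intuition, here is a minimal sketch of a single Adam step written out in NumPy (the standard formulation; the variable names are my own):

import numpy as np

def adam_step(w, g, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g             # rolling average of gradients (momentum)
    v = b2 * v + (1 - b2) * g ** 2        # rolling average of squared gradients
    m_hat = m / (1 - b1 ** t)             # bias correction for the early steps
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)   # per-parameter normalised step
    return w, m, v

Each call consumes the gradient g estimated from the current mini-batch, so the rolling averages are built from a sequence of noisy per-mini-batch gradients, exactly as described above.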

",1847,,,,,9/27/2021 16:23,,,,0,,,,CC BY-SA 4.0 31852,2,,31831,9/27/2021 17:25,,0,,"
Generalization

In machine learning, generalization describes a model's ability to adapt properly to new data drawn from the same distribution as the data used to train the model.

By providing additional training for your model (on data with varying lighting conditions), you are correct that you would be increasing the capabilities of your model.

Does better generalization equal better performance?

Consider two models and assume they have the same number of coefficients/ layers & nodes:

  • Model 1, which was trained solely on a single lighting condition
  • Model 2, which was trained on multiple lighting conditions

Let's say we have two test sets as well:

  • Test Set 1 : data with the same lighting condition as the data used to train Model 1
  • Test Set 2 : data with the multiple lighting conditions

Model 1 would be expected to have an equal or better performance than Model 2 on Test Set 1, due to overfitting on that lighting condition, but, as you noted, Model 1 would not perform as well on Test Set 2 as on Test Set 1. We also would not expect Model 2 to perform better on Test Set 1 than Model 1, due to the generalizability achieved during training.

Simply put, you're probably sacrificing some lighting condition-specific accuracy for better accuracy across multiple lighting conditions.

However

By allowing your Model 2 to increase the layers & nodes, or coefficients (including interactions), Model 2 may well be capable of performing just as well. All this depends on the size of training sets as well. For instance, if Model 1 is trained on 1,000 data points from the single lighting condition, and Model 2 is trained on 500 data points, Model 1 is generally expected to perform better on Test Set 1.

",49894,,,,,9/27/2021 17:25,,,,0,,,,CC BY-SA 4.0 31853,1,,,9/27/2021 22:19,,1,42,"

I'm looking for any introductory/accessible reading on AI that can play games which involve social intelligence.

Games like poker, where you might bait someone into overcommitting their hand or threaten them into not betting a lot when they should.

Or werewolf, where there is a group of villagers and a werewolf. Every night the werewolf "eats" a villager, and in the morning all the villagers get together to kick out a person who they think is the wolf. The wolf wins if he's the last one alive; the villagers win if they manage to kick out the wolf before everyone dies.

I know there's some online content on the game werewolf, but how does AI tackle such concepts in general? How are the rules represented, and how does the AI "persuade" and get persuaded?

",4335,,2444,,4/1/2022 9:28,4/2/2022 17:35,"AI for games which involve social intelligence. Games like warewolf where players must persuade, charm, threaten etc",,0,9,,,,CC BY-SA 4.0 31854,1,,,9/27/2021 22:29,,0,25,"

I have been looking into detecting a specific rhythm/pattern in the temporal domain of a time-series signal. For this purpose, how "wake up" words work for devices like Alexa has gained my interest!

I have 2 questions with regard to this matter:

  1. Is this just, say, training a model (e.g. a CNN) on a specific word (I know audio processing is not this simple, I mean it in its simplest form) and running it continually until a hot word is detected? If so, is the name just a formalization and not a different method of implementing an ML model?

  2. If this type of model can be used for a time-series signal, namely audio, is there any restriction on applying it to a different set of temporal signals? For example, detecting the onset of an earthquake?

Also, I would really appreciate it if someone could reference a good paper to read on this matter; I have looked online but have not been able to find anything interesting.

",50024,,,,,9/27/2021 22:29,"""Hot Word Detection"", bur for different applications",,0,2,,,,CC BY-SA 4.0 31856,1,,,9/28/2021 3:12,,1,110,"

Deep learning researchers have to work with a lot of models. The models may include different types of layers: convolutional neural network layers, recurrent neural network layers, batch normalization layers, pooling layers, and many others.

Along with visualizing the models for their own use, it is also necessary to present a model in enough visual detail to teach about it.

Although there are widely used visualization methods available in several packages, such as the model summary and others, I want to know about the availability of animation tools that can simulate deep learning models in a more visually intuitive way.

Are there any contemporary animation packages available to draw and simulate deep learning models?

",18758,,1671,,9/30/2021 0:55,9/30/2021 0:55,Are there any animation tools available to visualise and simulate deep neural networks?,,1,1,,9/30/2021 12:44,,CC BY-SA 4.0 31857,1,,,9/28/2021 3:45,,0,71,"

Imagine that I am training a model to classify handwritten digits. Suppose there are some bad-quality images that could be classified by a human as either 0 or 8, 1 or 7, or another commonly confused pair of digits. My question is: should I simply remove such ambiguous samples? Should I annotate each one as the most similar digit, even though there are other plausible answers? Should I repeat the sample, presenting it once for each 'acceptable' answer?

",48816,,,,,10/23/2022 5:05,Should I train my network for classification on samples whose ground truth label is ambiguous?,,1,0,,,,CC BY-SA 4.0 31858,2,,31857,9/28/2021 3:54,,1,,"

This depends on the behaviour you want. If the ambiguous sample's ground truth is decided by a range of people, your network will learn an average based on that group. If it's labelled by only one person, your network will be biased towards how that one person classifies these samples.

Alternatively, depending on your loss function, you could train the network to classify ambiguous samples with an ambiguous label. For example, if 40 people label the digit as 1, and 60 as 7, you could have the desired output for the network be 0.4 for 1 and 0.6 for 7 (assuming it's a probability).
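
For instance (a minimal sketch; the annotator counts are made up), such a soft target for a 10-class digit problem could be built like this and used with a categorical cross-entropy loss:

import numpy as np

soft_label = np.zeros(10)
soft_label[1] = 0.4   # 40 of 100 annotators read the digit as a 1
soft_label[7] = 0.6   # 60 of 100 annotators read it as a 7
# use soft_label as the training target instead of a one-hot vector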

This doesn't have an exact answer; it's whatever behaviour you deem best for your scenario. If you want to keep it simple, you can remove the samples and then see how the network performs on these ambiguous samples at testing time. Assuming your dataset is good, you'll probably find the network performs about the same on these samples anyway.

",26726,,,,,9/28/2021 3:54,,,,0,,,,CC BY-SA 4.0 31859,1,,,9/28/2021 7:21,,1,262,"

Let's say I have two channels that I wish to feed into a CNN. One of the channels contains 4 traces and has a width of 512. Stacking them on top of each other therefore yields an image with dimensions (4, 512). The other channel is just 1 trace, so its dimensions would be (1, 512).

I then have convolutional filters that are of dimension (1, 5) as an example. That means that the filters run over each trace separately. The first channel (containing the 4 traces) will then have a set of filter weights, shared among the 4 traces. The second channel (containing the 1 trace) will have a completely different set of weights (as per this SE question).

TLDR: Can convolutional layers in a CNN have different dimensions? Putting this in the context of images: Could we have a CNN that takes an image that has dimensions (100, 100) for the red channel, (100, 100) for the green channel, and (50, 100) for the blue channel?

",50031,,,,,10/25/2022 14:05,Is it possible to have different channel dimensions in a CNN?,,1,1,,,,CC BY-SA 4.0 31860,1,,,9/28/2021 7:25,,0,40,"

I want to implement a single perceptron for linear regression using the following formulas:

The input data for the first case is one column (x(392, 1); y(392, 1)) and for the second case is (x(392, 7); y(392, 1)). The NaN values have been removed and the x values have been standardized as (x - x.mean()) / x.std().

This is my Python implementation:

import numpy as np

class LinearRegression(object):

    def __init__(self, x, y, n_iter):
        self.x = x
        self.y = y
        self.n_iter = n_iter
        self.cost_iteration = []
        # Initializing model parameters (w, b) to zeros
        self.weights = np.zeros((1, self.x.shape[1]))   # w: weights
        self.biases = np.zeros((1, 1))                  # b: bias

    def feedforward(self):
        # return the feedforward value for x
        # self.weights, self.biases = self.update_params()
        z = self.x @ self.weights.T + self.biases
        return z

    def loss(self):
        # return the loss value for given x and y
        z = self.feedforward()
        loss = self.y - z
        cost = np.sum(loss**2) / self.y.shape[0]
        return loss, cost

    def backpropagation(self):
        # return the derivatives with respect to the weight matrix and biases
        loss, cost = self.loss()
        db = -2 * np.sum(loss) / self.y.shape[0]            # dJ/db
        dw = -2 * np.dot(self.x.T, loss) / self.y.shape[0]  # dJ/dw
        return dw, db

    def update_params(self):
        # update weights and biases based on the gradients
        dw, db = self.backpropagation()
        self.weights -= dw.T
        self.biases -= db
        return self.weights, self.biases

    def fit(self):
        # fit method for the training data
        for it in range(self.n_iter):
            self.update_params()
            print(self.biases)
            l, c = self.loss()
            self.cost_iteration.append(c)
        return self.cost_iteration

The final cost should be approximately 23.9 and 11.6 for the two models, respectively. But I can't figure out why it's not the case when I use my code.

",37540,,2444,,9/28/2021 13:58,9/28/2021 13:58,Why isn't my perceptron having the expected costs?,,0,4,,,,CC BY-SA 4.0 31861,2,,31831,9/28/2021 8:37,,1,,"

I think there's a crucial point missed in the question, touched by jros answer but without further elaboration.

If you train a model on domain A: single lightning condition and test it on domain B: two lightning condition then you're not evaluating generalization but transfer learning capabilities. Or to phrase it differently you're evaluating how close domain A and B are for the model you trained.

The test set as you said is truly made of instances never seen by the model during training, but it should nevertheless be representative, i.e. correctly sampled, from the training domain, or from the same distribution as jros wrote. So the generalization of your model, trained on single lightning condition, should be evaluated on single lighting condition as well.

A final remark about the rest that have been said:

  • everything holds only under the assumptions that the initial training dataset is not only unbiased but also balanced. In a real case scenario changing the training distribution from something specific (single light condition) to another distribution (multiple light conditions) might well be lead to a worse model, simply cause the problem is now inherently harder to solve.

So the answer to your question (regarding both, true generalization on same distribution and what you describe, transfer learning) is actually just empirical.

",34098,,,,,9/28/2021 8:37,,,,3,,,,CC BY-SA 4.0 31862,1,31864,,9/28/2021 9:36,,2,86,"

Assume we have a neural network and we want to train it on a classification problem. The hidden layers of the neural network are kind of feature representations of the input data.

If the neural network is big and the amount of data isn't enough for the complexity of the model, does it usually learn worse representations than a smaller neural network that is well suited to the amount of data we have?

",48700,,2444,,9/28/2021 14:01,9/28/2021 14:01,"Does a bigger neural network learn ""worse"" representations than a small neural network when the amount of data isn't enough?",,1,0,,,,CC BY-SA 4.0 31863,2,,31856,9/28/2021 10:21,,3,,"

I suggest you take a look at Chris Olah's blog. Has several interesting post including ones on visualizing weights and interpretability. Most of his papers also have Google Colab links so you can reproduce the results.

If you want something more similar to the model.summary() method you mention, TensorBoard Graph Dashboard might help.

",22301,,22301,,9/28/2021 10:26,9/28/2021 10:26,,,,0,,,,CC BY-SA 4.0 31864,2,,31862,9/28/2021 10:29,,0,,"

Yes, it is a well-known problem called the Curse of Dimensionality. It happens when a finite number of data samples is used to train a network with a high-dimensional feature space (a very deep network).

With regard to your question: yes, smaller networks (representational spaces with lower dimensions) describe smaller datasets better.

Why?

Because of data sparsity. As the number of features used to represent a single sample increases, the space between samples in the network's representation also increases. When the samples are represented in a space that is too sparse (all samples are very far away from each other in this high-dimensional space), then you cannot draw any conclusions or find relationships between them.

There is a balance between network dimensionality and dataset size; they must be in accordance with each other. You can think of it this way: if the data is represented in a very low-dimensional space (e.g. 1D), you cannot tell the samples apart; if the data is represented in a very high-dimensional space, then you cannot find relationships within it. The dimensionality must be just right.

Image from this post

",26882,,,,,9/28/2021 10:29,,,,2,,,,CC BY-SA 4.0 31865,1,,,9/28/2021 11:39,,-1,44,"

When we take a look at the literature, there are so many opinions. I was wondering what some generally good practices for designing an architecture are, e.g. how much depth and how much width you would prefer. Should the amount of training data influence your architecture design decisions? What should the number of parameters be? Etc.

",44529,,44529,,9/28/2021 12:20,9/28/2021 12:20,What practically makes a good architecture of ANN?,,1,3,,,,CC BY-SA 4.0 31867,2,,31865,9/28/2021 11:56,,1,,"

I am not sure if there really are contradicting opinions on this matter. CNNs, RNNs, LSTMs all have specific types of data they are good at predicting. Depth and width, or in general the size of the neural network, mostly depend on the size of your dataset. You don't want to build too large a network that will overfit the available data, which can usually be detected after running a few epochs; but in general, the number of trainable parameters should be smaller than the total number of independent data points you have, to avoid memorizing the data outright.

The width and depth of the network depend on the size of the data available (as mentioned before), but can simply be thought of as just another hyper-parameter to be decided by training itself. They can be tuned as part of the cross-validation process of building a neural network.

",22301,,,,,9/28/2021 11:56,,,,0,,,,CC BY-SA 4.0 31870,1,,,9/28/2021 22:53,,1,42,"

Is there any great game theory book or course that discusses the application of game theory to modern reinforcement learning or multi-agent systems? Or a classic reference book that can help me get a full understanding of papers like $\alpha$-rank.

",8689,,2444,,1/14/2022 18:50,1/14/2022 18:50,Book/course recommendation on game theory application to multi-agent system (reinforcement learning),,0,0,,,,CC BY-SA 4.0 31872,1,,,9/29/2021 8:23,,1,135,"

I am studying Q-learning in reinforcement learning. My question is about the Bellman equation.

In Q-learning, the Bellman equation is often introduced as follows.

\begin{align} Q_{new}(s,a) &= Q_{old}(s,a) + \text{learning rate} \times \text{error}\\ &= Q_{old}(s,a) + \alpha(\text{target} - \text{actual})\\ &= Q_{old}(s,a) + \alpha((\text{reward} + \text{discount factor} \times \text{max next } Q) - Q_{old}(s,a)) \\ &=Q_{old}(s,a) + \alpha[r(s,a)+\gamma \max_{a'} Q(s',a')-Q_{old}(s,a)] \end{align}

The update equation of gradient descent (which is used in the context of neural networks and other fields) is as follows.

$$ w_{new} = w_{old} - \eta\frac{dE}{dw} $$ So, why does the Bellman equation depend on the error after the learning rate, while gradient descent depends on the gradient of the error? I feel confused.

",50053,,2444,,9/29/2021 14:22,9/29/2021 14:22,What is the difference between gradient decent in neural networks and temporal difference in reinforcement learning?,,1,4,,,,CC BY-SA 4.0 31873,1,,,9/29/2021 9:17,,1,20,"

I am running a code on generative adversarial networks. The code is designed in such a way that it outputs a fake image after every 5 epochs. The total number of epochs is 800 in number.

After the completion of the program, when I check the images generated by the generator of the generative adversarial network while training, I am so confused about the results.

The phenomenon is as follows:

The image after epoch n is very realistic, while the images at epochs n-5 and n+5 are not highly realistic. I can see many such n's. And sometimes vice-versa.

Although I am interested to know why this happens, my question here is not directly about why it happens. My doubt is about deciding which generator I should evaluate my metrics on.

It is a general practice to evaluate the metric on the generator after the last epoch, that is, the 800th generator. But if the phenomenon I described holds, then it may be that the last generator is not capable of generating realistic images, while a generator from an earlier epoch, say 795, may be good.

So, if I am correct, I need to check the fake samples generated by each saved generator and then apply my metric to the generator which generated high-quality and realistic images. Am I correct?

",18758,,18758,,9/29/2021 9:24,9/29/2021 9:24,"How to understand the results of a generator that switches, for metric evaluation?",,0,4,,,,CC BY-SA 4.0 31875,2,,31849,9/29/2021 12:09,,1,,"

I would expect the dense layers to be able to detect certain speed ranges. This neuron activates for 0-10, this one for 10-20, this one for 20-30, this one for 20-50, this one for 47.6-89.2...

Of course a later layer could also do that, but it looks like there aren't many layers after this one.

",28406,,,,,9/29/2021 12:09,,,,0,,,,CC BY-SA 4.0 31877,1,,,9/29/2021 13:36,,0,31,"

State values are always presented as a central concept in RL, notably in the bible, Sutton & Barto's book.

I have done some exercises trying to improve my understanding, but it is clear that I am missing some important point(s), at least in the tabular case.

I do not understand, in the case where we have a complete MDP model of the environment, why we should bother doing value iteration and/or policy iteration if we can directly find the optimal policy by calculating each state-action value. I mean that, having the state-action values, we can obtain the optimal policy for any starting point with the argmax function.

In my learning/example, link, with an observation space of one million states and three different actions, using a desktop notebook it took 21 iterations in 19 minutes to find every state-action value and, consequently, the optimal policy.

Perhaps it could be done quicker by selecting only the interesting states. For instance, if there is a unique starting state and the simulation has a small, finite number of steps per episode, probably most of the states are uninteresting because they are never visited. In that case, we can select the target states using some test simulations, but is it worth it?

",33566,,33566,,9/29/2021 13:56,9/29/2021 13:56,Usefulness of the state_values calculation in Dynamic Programming,,0,9,,,,CC BY-SA 4.0 31878,2,,31849,9/29/2021 14:02,,1,,"

Paper 1-

If I'm understanding the paper correctly, the "Measurements" just represent a collection of auxiliary information. It's not necessarily a single speed measurement, but any auxiliary information, so perhaps gas level, engine temperature, etc. I think their choice to show a speedometer as a measurement might not be the best one, considering they are predicting the speed value.

Paper 2-

They said in the paper

The information of both turn indicators (on or off) is mapped to the values 1.0 and 0.0, respectively. However, the turn indicators do not contain any structural information convolutional layers can make use of. For this reason, this additional information is directly fed into the second fully connected layer after the feature extraction layer (which then contains 102 hidden units).

So perhaps feeding a value of 1 into the network at that level is just too small of a value to have any effect at the layer where they are injecting the information. Perhaps the layer they are injecting into has an average weight magnitude that is high enough that injecting a single 1 would be close to injecting a 0. This is my best guess. I would expect the network to be able to adjust the weights by itself, but there are many things that could go wrong here.

",43651,,,,,9/29/2021 14:02,,,,0,,,,CC BY-SA 4.0 31879,2,,31872,9/29/2021 14:03,,2,,"

They have a few similarities, but they are quite different. Let me first give you a general description of both approaches/algorithms, so that you start to get a sense of their differences and similarities.

Description

Gradient descent (GD) can be applied to solving any optimization problem where your loss (aka cost or objective) function is differentiable with respect to the parameters that you want to update. For example, if you're training a neural network with gradient descent to solve a classification problem, you could be using a cross-entropy function, which should be differentiable with respect to the weights of the neural network (so you should make sure that all the operations in the neural network, in particular, the activation functions, are differentiable or, at least, you define their derivatives). So, the only restriction to use GD is that your loss function is differentiable, so you could use GD to solve classification, regression, or even reinforcement learning problems (and this has actually be done).

Temporal-difference (TD) learning is a specific approach to reinforcement learning, where you update your current estimate of a value function (in your case, the action, aka state-action, value function) with a value that is the difference between estimates at different time steps (hence the name temporal difference).
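
To make the contrast concrete, here is a minimal toy sketch (the shapes, hyper-parameters and quadratic loss are my own assumptions) of one gradient descent step next to one tabular TD (Q-learning) update:

import numpy as np

# one gradient descent step on a differentiable loss (here a toy quadratic ||w||^2)
w = np.array([1.0, -2.0])
grad = 2 * w                                  # gradient of the loss w.r.t. the parameters
w = w - 0.1 * grad                            # move against the gradient

# one tabular TD (Q-learning) update for a transition (s, a, r, s_next)
Q = np.zeros((5, 2))                          # toy table: 5 states, 2 actions
s, a, r, s_next, alpha, gamma = 0, 1, 1.0, 2, 0.1, 0.99
td_target = r + gamma * np.max(Q[s_next])     # bootstrap from the next state's estimate
Q[s, a] += alpha * (td_target - Q[s, a])      # move the estimate towards the TD target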

How are they different/similar?

  • They can both be seen as learning algorithms/approaches, although people in other areas other than machine learning may view gradient descent "just" as an optimization algorithm.

  • TD learning is applied in the specific context of reinforcement learning, while GD is applied to any optimization problem where your cost function is differentiable.

  • In TD learning (or, more generally, in RL), you want to find a value function (or policy), which could be seen as the parameters that we want to find. (So, here, the parameters are the variables that we want to find). On the other hand, in GD, we want to find the parameters of a model (that define some function or distribution, which could be a value function, but not necessarily).

  • TD learning can be combined with neural networks (for example, see this paper), which leads to a new field often known as deep RL. In this case, you may use GD to update the parameters of this neural network, which represents the value function or policy. So, GD can be used to solve RL problems.

  • In both cases, we have a learning rate, which determines the magnitude of the changes to the current estimate of the parameters.

  • You can estimate/approximate gradients (or derivatives) with finite-differences, or finite-differences could be seen as the discrete version of derivatives. In fact, derivatives can be defined as limits of differences. Moreover, if you read e.g. this paper, you will see a lot of gradient symbols. Given that TD uses these "differences", this could be the reason why you're confused.

",2444,,,,,9/29/2021 14:03,,,,1,,,,CC BY-SA 4.0 31880,2,,30450,9/29/2021 19:45,,1,,"

You could use the mutual information between the model's prediction and that particular feature as a regularization term. This will minimize the dependence of the output on that particular feature. Note that simply removing the feature from the dataset might not work if other features are correlated with the feature you don't want your model to depend on.

",32621,,32621,,9/29/2021 19:55,9/29/2021 19:55,,,,2,,,,CC BY-SA 4.0 31881,1,,,9/29/2021 20:58,,1,203,"

I am trying to find an accurate and fast multi-person human pose estimation model that I can train with custom data. I have been searching for a little while, and I may not be up to date on the newest techniques. I will start by posting what I have found and looked into (a little):

  1. Openpose: This is supposedly real-time (I assume on a GPU, 24fps?) and they provide training code
  2. Lightweight OpenPose: Runs in realtime >20fps confirmed, training code is provided
  3. mediapipe: runs in realtime > 20fps confirmed, training code is NOT provided
  4. posenet: No training code, can one even train tfjs models?
  5. movenet: Very fast but no way to train?
  6. hrnet & lightweight-hrnet: seem to be slow? Can anyone confirm? training code provided
  7. blazepose: haven't tried it yet, looks like a TF implementation, but no discussion about speed. Training code included
  8. alphapose: haven't tried it, but looks to run at 16fps (maybe faster?) but is only intended for research not commercial. Training script available.
  9. MoVnect: Looks new, and fast (haven't tested it) but looks like it uses student-teacher training.

What are some other human poses estimation models out there?

I care more about speed and training. I am thinking #2 is my best bet, but it's a few years old and not the most friendly to train. Anything newer? Can anyone confirm or reject my findings?

",42384,,2444,,10/3/2021 23:33,10/3/2021 23:33,What is the fastest multi-human pose estimation model?,,0,1,,,,CC BY-SA 4.0 31884,1,31886,,9/30/2021 7:30,,0,95,"

I have a rectangular area where I need to place some 2-dimensional geometrical shapes - like squares, circles, or slightly more complicated shapes. After the arrangement, these shapes should be cut out.

Requirements for the placement of the shapes:

  • The shapes are not allowed to intersect
  • They must be placed within the rectangular area
  • They must keep at least a minimum distance from each other
  • The waste should be minimized
  • When more than one shape is arranged on the area, it is desirable that the shapes appear in certain proportions (e.g. shape A: 50 %, shape B: 30 %, shape C: 20 %)

After the arrangement, I get the coordinates of the individual shapes so that I can cut them out...

To solve this I thought of (deep) reinforcement learning but because I'm new to ML I'm not sure if there is a more appropriate method to solve this problem.

I hope that you can give me some hints or simply confirm my assumption that (deep) reinforcement learning is appropriate. And perhaps you can also offer me some useful links...

Many thanks in advance for your help!

And lastly, a little picture showing a possible bad result, because shape A and shape E intersect, and there is probably too much waste.

",50072,,50072,,9/30/2021 8:19,9/30/2021 9:22,Appropriate ML algorithm to solve a cutting pattern problem,,1,2,,,,CC BY-SA 4.0 31886,2,,31884,9/30/2021 9:22,,0,,"

You might want to start by googling the "Irregular Cutting Stock Problem"; I think your problem formulation is similar to it. Some cool papers come up in the results, such as this heuristic method, which is tested on real-world problem instances.

By browsing the existing heuristic/metaheuristic methods, you may get inspiration on how to represent the solution, how to evaluate intermediate shape placements, and which local search operators exist. From there, you can try an ML-based adaptive operator selection (AOS) method so that, given the current state of the sheet and the existing shapes, you choose the "best" local search operator to improve the placement. On the other hand, if you can embed the current state of the sheet as well as the currently considered shape, you can predict the action of placing that shape (x, y, rotation degree) and train your model with RL methods, assuming you have defined an appropriate reward function for the action taken.

",44920,,,,,9/30/2021 9:22,,,,2,,,,CC BY-SA 4.0 31888,2,,31859,9/30/2021 11:20,,0,,"

You can do whatever the heck you want.

Of course you will have to design the data flow through the network so that it can make whatever inferences you intend it to make.

The first channel (containing the 4 traces) will then have a set of filter weights, shared among the 4 traces. The second channel (containing the 1 trace) will have a completely different set of weights (as per this SE question).

Sure, you can do that. No reason why you couldn't. Will it work well? Who knows. Have to try it and see.

The best way to combine them will depend on what the NN is supposed to actually do. With the architecture you've described, at least this layer is unable to relate traces to each other. This would be bad when processing, for example, colour images - you don't want to treat red, green and blue the same way as each other, and you want to detect certain combinations of red, green and blue. If you also want the network to have this ability, then maybe you should treat each trace as a channel so the network can see all of them at once.

At some point you will obviously have to combine the results together.

Could we have a CNN that takes an image that has dimensions (100, 100) for the red channel, (100, 100) for the green channel, and (50, 100) for the blue channel?

As I said, it depends on what the network is supposed to do. Are these channels totally separate? Then you can process them with different sized CNNs - or even the same CNN until the dense layers - and combine the results in the dense layers at the end. But if these are the RGB components of one image, you'd be better off just stretching the blue channel so the CNN can recognize colours like yellow.
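
A minimal sketch of the "separate branches merged at the dense layers" idea, using the Keras functional API (all shapes and layer sizes here are made up):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Two separately-shaped inputs, each with its own convolutional branch
in_a = layers.Input(shape=(100, 100, 2))   # e.g. the two same-sized channels
in_b = layers.Input(shape=(50, 100, 1))    # e.g. the smaller channel

branch_a = layers.Flatten()(layers.Conv2D(8, 3, activation="relu")(in_a))
branch_b = layers.Flatten()(layers.Conv2D(8, 3, activation="relu")(in_b))

# Combine the results in the dense layers at the end
merged = layers.Concatenate()([branch_a, branch_b])
out = layers.Dense(10, activation="softmax")(layers.Dense(64, activation="relu")(merged))

model = Model([in_a, in_b], out)
```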

",28406,,,,,9/30/2021 11:20,,,,0,,,,CC BY-SA 4.0 31890,1,,,9/30/2021 13:27,,0,50,"

In supervised machine learning, it is common to say that we learn a function of the form

$$y=g(x) + \epsilon.$$

Generally, $\epsilon$ is used to denote noise or, more precisely, any influence by latent variables such as measurement inaccuracies (right?).

Is it, therefore, correct to say that we use $\epsilon$ to denote the model's imperfection to the real world (caused by anything unknown)?

",50077,,2444,,9/30/2021 14:32,9/30/2021 16:22,Is the noise term $\epsilon$ in $y=g(x) + \epsilon$ used to denote the model's imperfection to the real world?,,1,0,,,,CC BY-SA 4.0 31891,2,,31890,9/30/2021 14:09,,2,,"

Yes, precisely.

What you mention is known in the literature as the Bayes error. See the top of page 114 of https://www.deeplearningbook.org/contents/ml.html.

The ideal model is an oracle that simply knows the true probability distribution that generates the data. Even such a model will still incur some error on many problems, because there may still be some noise in the distribution. In the case of supervised learning, the mapping from $\mathbf{x}$ to $y$ may be inherently stochastic, or $y$ may be a deterministic function that involves other variables besides those included in $\mathbf{x}$. The error incurred by an oracle making predictions from the true distribution $p(\mathbf{x}, y)$ is called the Bayes error.

Regardless of how clever your model is, the best error you can achieve for predictions on the data distribution is $\varepsilon$. Note that this holds for the whole data distribution, not for a sample of data.

Say you would like to fit something like $\sin(x) + \varepsilon$. There are 10 points, and one can fit them perfectly with a 9th-degree polynomial, but this is the error on the training data; if one samples more data points, the error will likely exceed the optimal $\varepsilon$.
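
A quick numerical sketch of this (assuming, for illustration, Gaussian noise with standard deviation 0.1):

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 2 * np.pi, 10)
y_train = np.sin(x_train) + rng.normal(0, 0.1, size=10)        # sin(x) + epsilon

coeffs = np.polyfit(x_train, y_train, deg=9)                    # interpolates the 10 points
print(np.abs(np.polyval(coeffs, x_train) - y_train).max())      # ~0 error on the training data

x_test = rng.uniform(0, 2 * np.pi, 1000)
y_test = np.sin(x_test) + rng.normal(0, 0.1, size=1000)
print(np.abs(np.polyval(coeffs, x_test) - y_test).mean())       # typically far above the optimal level
```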

",38846,,2444,,9/30/2021 16:22,9/30/2021 16:22,,,,0,,,,CC BY-SA 4.0 31892,1,,,9/30/2021 14:32,,0,408,"

I'm building an object detection model with convolutional neural networks (CNN) and I started to wonder when one should use either a multi-class CNN or a single-class CNN. That is, if I'm making e.g. a human detector from image data and a cat detector also from image data, then when should I have a specific model for each task, and when should I just combine all the data into one and use just one general multi-class CNN?

I've understood from the No-Free-Lunch Theorem, and generally from estimation theory, that there does not, in theory, exist a model which is simultaneously optimal for every problem. In other words, case-specific models should, in general, beat the "all-purpose" models in the same task.

I have a difference of opinion with a colleague of mine on whether to use a one-class or a multi-class CNN, and I would like to hear the community's opinion on this.

",27971,,,,,9/30/2021 15:15,When to use Multi-class CNN vs. one-class CNN,,1,0,,,,CC BY-SA 4.0 31893,2,,31892,9/30/2021 15:15,,1,,"

I am not really a fan of the One vs All approach.

From my experience it is never convenient to transform a multi-class classification problem with, say, $N$ possible classes to a bunch of binary classification problems.

Reason #1

The number of binary classifiers you need to train scales linearly with the number of classes. Hence, you can easily find yourself training lots of binary classifiers. What if each one of them has a huge number of neurons? As you can understand, the computational burden here is quite a problem.

Reason #2

With a small $N$, the computation is less of a problem, but still... why would you do that? By doing things like this, you can easily end up in awkward situations, such as two or more of your binary classifiers giving a positive outcome, or none activating. How do you handle these issues?


However, there exists a very specific setup where you might want to use a set of binary classifiers, and this is when you're facing a Continual Learning (CL) problem. In a Continual Learning setting, you don't have access to all the classes at training time; therefore, sometimes you might want to act at an architectural level to control catastrophic forgetting, by adding new classifiers to train. However, even in CL there exist other methods that work better.

To conclude, I wouldn't recommend anyone go for this option. You can train a multi-class classifier much more easily and avoid all the aforementioned issues.

",37576,,,,,9/30/2021 15:15,,,,4,,,,CC BY-SA 4.0 31895,1,,,10/1/2021 7:34,,0,92,"

In practical applications, we generally talk about three types of convolution layers: 1-dimensional convolution, 2-dimensional convolution, and 3-dimensional convolution. Most popular packages like PyTorch, Keras, etc., provide Conv1d, Conv2d, and Conv3d.

What is the deciding factor for the dimensionality of the convolution layer mentioned in the packages?

",18758,,2444,,10/9/2021 0:00,10/9/2021 0:00,What do people refer to when they use the word 'dimensionality' in the context of convolutional layer?,<1d-convolution><2d-convolution><3d-convolution>,2,7,,,,CC BY-SA 4.0 31896,1,,,10/1/2021 8:08,,1,69,"

I am reading the paper about the fully convolutional network (FCN).

I had some questions about the part where the authors discuss the filter rarefaction technique (I guess this is roughly equivalent to dilated convolution) as a trick to compensate for the cost of implementing a shift-and-stitch method.

Consider a layer (convolution or pooling) with input stride $s$, and a subsequent convolution layer with filter weights $f_{i,j}$ (eliding the irrelevant feature dimensions). Setting the lower layer’s input stride to 1 upsamples its output by a factor of s.

  1. How does setting the input stride of the lower layer to 1 lead to upsampling (and not to a reduction of the output dimension)? I am confused about what the terminologies lower/higher layer and input/output stride refer to here.

To reproduce the trick, rarefy the filter by enlarging it as
$$f'_{i,j} = \begin{cases} f_{i/s,j/s} & \text{if $s$ divides both $i$ and $j$} \\ 0 & \text{otherwise} \end{cases}$$
(with $i$ and $j$ zero-based). Reproducing the full net output of the trick involves repeating this filter enlargement layer-by-layer until all subsampling is removed.

  2. I was wondering how the rarefaction defined here leads to the filter enlargement. Based on the equation, it seems that $f$ and $f'$ have the same size, with $f'$ having different filter weights based on $s$.
",50089,,,,,5/18/2022 11:21,"FCNs: Questions about the filter rarefaction in the CVPR paper [Long et al., 2015]",,0,1,,,,CC BY-SA 4.0 31900,2,,31895,10/1/2021 11:00,,1,,"

Kernel dimensionality and the presence of filters decide the dimension of the convolution operator. N-dimensional convolutions have N-dimensional kernels. For example, from the Keras documentation on 2-dimensional convolutions:

kernel_size: An integer or tuple/list of 2 integers, specifying the height and width of the 2D convolution window. Can be a single integer to specify the same value for all spatial dimensions.

If you have more than one filter in the layer, that also adds another dimension to the layer. So, we can say that a 2D convolutional layer is in general 3-dimensional, where the 3rd dimension is the number of filters: (k, k, F). For the special case of a single filter, F = 1 and we can treat it as 2-dimensional.
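
A quick way to see these shapes in Keras (a sketch; strictly, Keras also keeps an input-channel axis in the kernel, which is 1 here):

```python
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(28, 28, 1))              # a single input channel
out = layers.Conv2D(filters=16, kernel_size=3)(inp)
model = tf.keras.Model(inp, out)

# Kernel weights have shape (k, k, input_channels, F) = (3, 3, 1, 16),
# i.e. the (k, k, F) picture above plus the input-channel axis.
print(model.layers[1].weights[0].shape)
```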

",22301,,22301,,10/1/2021 13:03,10/1/2021 13:03,,,,3,,,,CC BY-SA 4.0 31902,1,,,10/1/2021 17:15,,0,82,"

I'm working with very weird data that is apparently very hard to fit. I've noticed a very strange phenomenon where it can go from roughly 0.0176 validation MSE to 1534863.6250 validation MSE in only 1 epoch! It will then usually return to a very low number after a few epochs. Also, no such fluctuation is seen in the training data.

This unstable behavior is consistent across shuffling, repartitioning & retraining, even though I have 16,000 samples and a highly regularized network (dropout + residual layers + batch normalization + gradient clipping).

I mean I realize I could have more data, but, still, this behavior is really surprising. What could be causing it?

P.S. Model is feedforward with 10 layers of size [32,64,128,256,512,256,128,64,32,1], using Adam optimizer. Also, this question may be related (my experience is also periodic validation loss), but I don't think they experienced the same massive instability I am seeing.

",42996,,2444,,10/2/2021 14:41,10/2/2021 14:41,What can cause massive instability in validation loss?,,0,2,,,,CC BY-SA 4.0 31903,1,,,10/2/2021 4:08,,1,199,"

I just got my agent training, and I'm wondering if the terminal flags are necessary when sampling from the replay buffer. The game I'm implementing the agent in has two different ways the game can end, and so far my agent seems to be learning without terminal flags. I was wondering how important this feature is, as it's in all the pseudocode but doesn't seem to be necessary in my implementation.

",49602,,2444,,10/3/2021 12:56,3/2/2022 14:05,Do you need a terminal state when using double deep q networks?,,1,0,,,,CC BY-SA 4.0 31904,1,31923,,10/2/2021 7:36,,1,206,"

In various neural network detection pipelines, the detection works as follows:

  1. One processes the input image through the pretrained backbone
  2. Some additional convolutional layers
  3. The detection head, where each pixel on the given feature map predicts the following:
    • Offset from the center of the cell ($\sigma(t_x), \sigma(t_y)$ on the image)
    • Height and width of the bounding boxes $b_w, b_h$
    • Objectness scores (probability of object presence)
    • Class probabilities

Usually, detection heads produce not a single box, but multiple.

  • The first version of YOLO - outputs 2 boxes per location on the feature map of size $7 \times 7$
  • Faster R-CNN outputs 9 boxes per location
  • YOLO v3 - outputs 9 boxes per pixel from the predefined anchors : (10×13),(16×30),(33×23),(30×61),(62×45),(59× 119), (116 × 90), (156 × 198), (373 × 326)

These anchors give the priors for the bounding boxes, but with the help of $\sigma(t_x), \sigma(t_y), b_w, b_h$ one can get any possible bounding box on the image for some pixel on the feature map.

Therefore, the network will produce plenty of redundant boxes, and a certain procedure - NMS (non-maximum suppression) - has to be run over the bounding box predictions to select only the best.

Or is the purpose of these anchors to start from a prior, slightly reshape and shift the bounding box, and then compare with the ground truth?

Is it the case that, if one used only a single bounding box for detection, it would be hard to train the network to rescale the initial bounding box by, say, 10 times and produce some specific aspect ratio?

",38846,,,,,10/4/2021 13:07,Why do the object detection networks produce multiple anchor boxes per location?,,1,0,,,,CC BY-SA 4.0 31905,2,,31903,10/2/2021 9:47,,1,,"

It's an important feature, and you drop it at the risk of the agent failing to learn successfully.

The difference between the TD target without the terminal flag

$$G_t = R_{t+1} + \gamma \text{max}_{a'} Q(S_{t+1}, a')$$

and with the terminal flag applied to $S_{t+1} = S_T$

$$G_t = R_{t+1}$$

is important whenever $Q(S_{T}, a')$ might be evaluated as non-zero. The true action value of the terminal state is always defined as zero.
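
In code, this is usually implemented by masking the bootstrap term with the stored terminal flag, e.g. (a sketch with made-up numbers):

```python
import numpy as np

def td_targets(rewards, next_q_values, dones, gamma=0.99):
    """TD targets for a sampled batch: bootstrap only from non-terminal next states."""
    return rewards + gamma * np.max(next_q_values, axis=1) * (1.0 - dones)

# Hypothetical batch of 3 transitions; the last one ends the episode (done = 1)
rewards       = np.array([0.0, 0.0, 1.0])
next_q_values = np.array([[0.2, 0.5], [0.1, 0.3], [5.0, -2.0]])  # bogus values for a terminal state
dones         = np.array([0.0, 0.0, 1.0])

print(td_targets(rewards, next_q_values, dones))  # [0.495, 0.297, 1.0]
```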

In some circumstances, values based on a function approximator estimate could be far from zero - if at any point the estimator moves significantly away from zero, then this could lead to value estimates diverging. This can be a problem at any time, because the estimator is usually not trained on actions taken from the terminal state - due to them never being observed. In principle, you could add training data to keep the function approximator in line with this estimate, but as it is an absolute value, it is far more common (when using approximators such as neural networks) to change the TD target as above.

Double Q learning improves estimates of bootstrap values by removing some of the maximisation bias from the next action value. This may help in your case, if the expected future rewards from the terminal state remain close to zero.

I would expect Double Q learning to work OK without using terminal state information, and without adding "mitigating" training to keep terminal state estimates close to zero, if rewards are not sparse and do not increase close to the terminal states. That would mean the estimator should recognise that expected future reward becomes closer to zero as the agent approaches a terminal state, and thus it would not take much extrapolation to predict close to zero for the terminal state - close enough that it does not create runaway feedback or influence the optimal policy much.

With sparse rewards, or with significantly high rewards close to the terminal state (e.g. a goal state which the agent must reach to gain all the reward), then using the terminal flag in the normal way becomes more important. It is unlikely that double Q learning would help much in that case. However, it may still be possible to find the optimal policy but with highly inaccurate action values (e.g. action values could all be double what they should be).

",1847,,1847,,10/2/2021 9:55,10/2/2021 9:55,,,,0,,,,CC BY-SA 4.0 31910,1,,,10/2/2021 22:20,,0,37,"

Suppose a certain task T is solved by a non-learning-based method A (let's say, an optimization-based approach), and we now train a machine learning model B (let's say a neural network) on the same task.

What are some metrics that we can use to compare their efficiency in terms of finding the solution (Assuming the quality of both solutions is comparable)?

",50099,,2444,,10/3/2021 12:59,10/5/2021 7:44,Compare the efficiency of a trained ML model with a non-learning-based method for solving the same problem,,1,0,,,,CC BY-SA 4.0 31912,1,,,10/3/2021 18:00,,0,74,"

Following the standard setup/notation for a VAE, let $z$ denote the latent variables, $q$ as the encoder, $p$ as the decoder, and $x$ as the label. Let the objective be to maximize the ELBO, where a single sample monte carlo estimate of the ELBO is given by \begin{align*} \log p(x \, | \, z) + \log p(z) - \log q(z \, | \, x) \end{align*}

Now I want to focus only on the $\log p(x \, | \, z)$ term, where $x$ is an image, and the decoder $p$ outputs the mean/variance of a normal distribution for each pixel.

My understanding is that pixel values should be integers 0-255. Now consider a single pixel: suppose that the ground truth for that pixel is the value 10, and the decoder predicts the mean and variance $\mu, \sigma^2$ respectively. Now, when computing the ELBO, we have this term \begin{align*} \log p(x \, | \, z) = \log f(10, \mu, \sigma^2) \tag{1} \end{align*} where $f$ is the probability density of the normal distribution. My question is why it is justified to compute $\log p(x \, | \, z)$ using the density, considering that image data should be discrete-valued. It seems to me that any sampled output between 9.5-10.5 would all get mapped to the correct value of 10. Then it seems that you should take the term in the ELBO as \begin{align*} \log p(x \, | \, z) = \log \big(F(10.5, \mu, \sigma^2) - F(9.5, \mu, \sigma^2)\big) \tag{2} \end{align*} where $F$ is the CDF of the normal distribution.

It seems that all references calculate the ELBO as (1) and none as (2). Why is this justified?

",47080,,47080,,10/6/2021 2:23,10/6/2021 2:23,variational autoencoder - decoder output for images,,0,5,,,,CC BY-SA 4.0 31913,1,,,10/3/2021 18:15,,2,55,"

Is there a method to detect shapes like these accurately and efficiently? I have tried the OpenCV Haar Cascade Classifier, which does not work well. These shapes should all be of the same object class and can be of different sizes and slightly different shapes (more circular or more angular). In the attached picture, there are 4 separate shapes, of which 2 overlap each other.

",50109,,,,,10/3/2021 18:15,Which method can accurately detect circular/angular shapes? (attached example),,0,3,,,,CC BY-SA 4.0 31917,1,31925,,10/4/2021 7:41,,0,151,"

I am studying GNNs. I am interested in the Weisfeiler-Lehman Isomorphism Test (WL-Test).

I was looking for information about whether the test always ends or not, but I didn't find a definitive answer.

I know that we can choose how many iterations are done, or that the test finishes if an iteration produces the same result as the previous one.

My question is: what if we don't fix the number of iterations and the iterations don't produce the same result for the two graphs? Will the iterations keep going on (i.e. infinitely)?

",50119,,2444,,10/7/2021 15:02,10/22/2021 9:25,Does the Weisfeiler-Lehman Isomorphism Test end?,,1,0,,,,CC BY-SA 4.0 31919,1,31921,,10/4/2021 11:06,,6,8117,"

I've got an array of integers ranging from -3 to +3.

Example: [1, 3, -2, 0, 0, 1]

The array has no obvious pattern since it represents bipolar disorder mood swings.

What is the most suitable approach to predict the next number in the series? The length of the array is about 700 entries.

From where can I start the investigation? (provided that I've got some experience in Python and Node.js, but only a hello-worldish acquaintance with TensorFlow). Which training model might be suitable in this case? How can I chunk the data set properly?

",35881,,16229,,10/5/2021 12:32,10/5/2021 12:32,How can I predict the next number in a non-obvious sequence?,,4,4,,,,CC BY-SA 4.0 31920,2,,30364,10/4/2021 11:50,,1,,"

Simply, it is just a design choice.

An isotropic Gaussian is one of the easiest densities to work with. It has an easy-to-compute likelihood and is easily reparameterizable.

You are free to use other distributions, but you might face computational or implementation hurdles.
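
For instance, the reparameterization trick for an isotropic Gaussian is just a couple of lines (a PyTorch sketch):

```python
import torch

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2 I) in a differentiable way: z = mu + sigma * eps."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

mu = torch.zeros(4, requires_grad=True)
log_var = torch.zeros(4, requires_grad=True)
z = reparameterize(mu, log_var)   # gradients flow back to mu and log_var
```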

",11030,,,,,10/4/2021 11:50,,,,0,,,,CC BY-SA 4.0 31921,2,,31919,10/4/2021 11:56,,9,,"

As all you have is a series of numbers, you should try using a sequence model. I suggest you look into RNNs and, in particular, LSTMs. Of course, this is assuming that, despite the lack of "obvious patterns", there is some kind of hidden pattern in your data. If not, what you have is not very different from a random walk in 3 dimensions - which makes the case unpredictable in the first place.
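
A minimal Keras sketch of this idea, treating it as a regression on the next value (the random series and the window size of 14 are just placeholders for your data and your own choices):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

series = np.random.randint(-3, 4, size=700).astype("float32")  # stand-in for the 700 mood values

window = 14
X = np.array([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    layers.LSTM(16, input_shape=(window, 1)),
    layers.Dense(1),                      # predict the next value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)

next_value = model.predict(series[-window:].reshape(1, window, 1))
```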

",22301,,,,,10/4/2021 11:56,,,,0,,,,CC BY-SA 4.0 31922,2,,31919,10/4/2021 12:50,,6,,"

I guess the most "suitable" approach is to look up research papers on ML/AI/stats-based methods for bipolar disorder mood swing prediction/regression, etc. Focus on the abstract, intro/related works and conclusion. Find out why the method is proposed, what the well-known approaches are, and what the intuition for the proposed methods is. Find out the fundamental resources cited in the intro/related works. From the intro and related works, look up the references and skim them.

As for the theoretical basis, the math and the proposed method, just skim them quickly; next time you have the time/the feels, you can go deeper. Utilize sci-hub or lib-gen or similar sites if you/your institution is not subscribed to the publishers. Bonus points: some papers also include GitHub links to their implementation source code.

A quick search on Google Scholar with the query "bipolar mood swing prediction machine learning" returned some cool (at least judging by the title) research papers. For example, The impact of machine learning techniques in the study of bipolar disorder: a systematic review, and Review on Machine Learning Techniques to predict Bipolar Disorder.

Why do we go with this approach? Because your domain is specific, vast and complex in its own way. Most of the time, someone has already tried the "basic" prediction/regression/classification approaches in your domain and published the methods as well as the results, so you can start from there and gain even more from the additional knowledge/references in the papers.

",44920,,,,,10/4/2021 12:50,,,,0,,,,CC BY-SA 4.0 31923,2,,31904,10/4/2021 13:07,,1,,"

Yes, theoretically it is possible to learn the offsets to get any possible bounding box from only one anchor box. However, it is hard to learn such dramatic shifts and changes. Learning only small offsets from the prior is easier and tends to converge better.

In specific applications however, one might already know the typical size and ratio of objects, and that this is very similar for all of them. In such cases, one box per anchor can be enough to learn well.

Note that many redundant boxes are predicted anyway, even if only one anchor box is used per location, because there are usually many anchor locations distributed in a grid based fashion over the image. Therefore, NMS is a necessary step anyway and does not depend on having multiple boxes per anchor.

",42911,,,,,10/4/2021 13:07,,,,0,,,,CC BY-SA 4.0 31924,1,,,10/4/2021 13:40,,0,15,"

In Quantum-Chemical Insights from Deep Tensor Neural Networks, I would like to ask a question about how to initialize the coefficient vector of the network, because I could not understand it even after reading the paper. In the paper, it says

All presented models use atomic descriptors with 30 coefficients. We initialize each coefficient randomly

If it is initialized randomly like this, why do we initialize it randomly since we cannot use the information of nuclear charge as mentioned in the paper?

",32303,,,,,10/4/2021 13:40,How to initialize the coefficient vector of Deep Tensor Neural Network,,0,2,,,,CC BY-SA 4.0 31925,2,,31917,10/4/2021 16:29,,1,,"

Notice that a partition (set of nodes with the same label) can never get combined with another partition during an iteration. If two nodes are in different partitions, they stay in different partitions. If two nodes are in the same partition, they might stay in the same partition or get split up into different partitions. Therefore, the number of partitions increases with every iteration (except for the last one which signals the end). There can't be more partitions than nodes, therefore the algorithm eventually doesn't split any more partitions, and stops.

",28406,,28406,,10/22/2021 9:25,10/22/2021 9:25,,,,1,,,,CC BY-SA 4.0 31928,1,,,10/5/2021 1:25,,0,52,"

Upsampling and downsampling are highly used in deep learning algorithms that involve convolutional neural networks.

Upsampling increases the size of tensors, while downsampling decreases it.

What is the role of the word sampling in the words upsampling and downsampling?

Does it always have a connection with the sampling techniques that are generally used in statistics? Or is it true that sometimes there is no such connection, and the only task of upsampling and downsampling is simply to increase and decrease the size, respectively, in any way?

",18758,,18758,,10/6/2021 0:35,10/6/2021 0:35,What is the role of the word sampling in upsampling and downsampling?,,1,2,,,,CC BY-SA 4.0 31930,2,,31919,10/5/2021 5:13,,4,,"

Since you have only 700 observations, I would not try a deep learning approach. I think it is very unlikely that any deep learning approach will learn a non-obvious relationship with that little data.

What you could try is to create a set of features based on lags. Create a feature that is lagged by 1, by 2, by 3, and so on. Moving averages of lagged variables could also be useful, with windows of 2, 3, 5. The standard deviation could also be interesting, though with a bit larger windows. Then train a regular ML model.

I would try this simple approach even if I had 10 million observations, and planned on using Deep Learning, so I could use it as a benchmark.
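
A minimal pandas sketch of such lag/rolling features (the random values just stand in for your series; the lags and windows are examples):

```python
import numpy as np
import pandas as pd

values = np.random.randint(-3, 4, size=700)   # stand-in for the 700 mood observations
df = pd.DataFrame({"mood": values})

# Lagged copies of the series
for lag in (1, 2, 3):
    df[f"lag_{lag}"] = df["mood"].shift(lag)

# Rolling mean / std over past values only (shift first to avoid leaking the target)
for w in (2, 3, 5):
    df[f"roll_mean_{w}"] = df["mood"].shift(1).rolling(w).mean()
df["roll_std_10"] = df["mood"].shift(1).rolling(10).std()

df = df.dropna()   # "mood" is the target, the other columns feed a regular ML model
```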

",25248,,,,,10/5/2021 5:13,,,,1,,,,CC BY-SA 4.0 31931,2,,31910,10/5/2021 7:44,,1,,"

The most generic answer to this question is:

the same metrics you use to evaluate the quality of your model during training or in the test phase (plus the timing of inference, if you're referring to computational efficiency).

I'm not referring to any specific metric yet because that's really task dependent. But, in general, if you have a model that performs a task and another algorithm that performs the same task, then you should be able to apply both to the same set of data, compute whatever metric is suitable to evaluate performance on the task, and compare the two scores. Let me stress that the test instances should be the same for a scientifically relevant comparison, and I mean literally the same.

For examples of some metrics, I would refer to the web, since there are plenty of blog posts listing and comparing metrics. Just to link a few:

The list is not exhaustive but I think it illustrates the point.
Also, as a side note: almost all machine learning algorithms are optimization-based; if you want to refer to approaches that don't fall into machine learning, I think a better term is analytic methods/approaches.

",34098,,,,,10/5/2021 7:44,,,,1,,,,CC BY-SA 4.0 31932,1,,,10/5/2021 8:31,,1,224,"

Consider the actor-critic reinforcement learning setting (actor and critic parameterized by a neural network). The reward is given only at the end of the episode (or when there is a timeout there is no reward).

How could we learn the value function? Do you recommend computing intermediate rewards?

",50133,,2444,,10/5/2021 17:23,10/5/2021 17:23,How do I compute the value function when the reward is only at the end in the context of actor-critic algorithms?,,1,0,,,,CC BY-SA 4.0 31933,2,,31928,10/5/2021 9:05,,2,,"

I think that this terminology originates from digital signal processing:

Given a signal sampled at a given frequency, one would like to get an approximation of the signal that would have been obtained by sampling at a frequency $n$ times higher ($n$ times lower) than for the original signal. The resulting signal should be close to the original in a certain sense - in the Fourier domain, for instance.

",38846,,2444,,10/5/2021 17:05,10/5/2021 17:05,,,,0,,,,CC BY-SA 4.0 31934,2,,31932,10/5/2021 9:07,,1,,"

The reward is given only at the end of the episode (or when there is timeout there is no reward)

This is a common case. E.g. winning a board game, or reaching a goal state.

How could we learn the value function?

All RL algorithms are designed to cope with this scenario. Actor-Critic is not an exception. Value-based algorithms (including the critic in Actor-Critic) learn through time step backup updates. The simplest backup is to copy data about experience at time $t+1$ into an update for the state or state/action experienced at time $t$. That is what single-step temporal difference algorithms do. Other value-based algorithms can be more sophisticated and more efficient with assigning the update signal back in time.
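
For concreteness, a one-step TD(0) backup for a state-value table looks roughly like this (a sketch of my own, not tied to any library):

```python
def td0_update(V, s, r, s_next, done, alpha=0.1, gamma=0.99):
    """Move V[s] towards the one-step target r + gamma * V[s_next] (no bootstrap past a terminal state)."""
    target = r + (0.0 if done else gamma * V[s_next])
    V[s] += alpha * (target - V[s])

# With a reward only at the end, r is 0 on every step except the final transition,
# and that final reward is propagated backwards over repeated episodes.
```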

A very sparse reward can be difficult for an agent to find or learn from. So some methods may work better than others. Without knowing the environment, and what the specific difficulties might be, it is not possible to recommend an approach, or to even suggest whether your algorithm needs any help.

Do you recommend computing intermediate rewards?

In general, no. However, these can help when the "natural" rewards in a problem are both sparse and hard for the agent to discover through exploration. Constructing extra reward signals to guide a learning agent is called reward shaping.

Reward shaping needs to be done with care because it can inadvertently change what the optimal solution is. But if done well, it can make a problem much easier to solve for an agent.

A starting rule of thumb for whether to add some kind of reward shaping is based on how often a random agent might accidentally obtain a difference in end rewards. Once the agent has experienced a difference between rewards, it can start to refine its predictions and begin to prefer the higher reward, which then usually leads to exploration nearer more valuable states. It may do this even if the higher reward only occurs one time in a thousand initially, say. However, if initial random actions gain no useful signal even for millions of trial-and-error episodes, then you will need to do something to assist the agent.

",1847,,1847,,10/5/2021 16:31,10/5/2021 16:31,,,,0,,,,CC BY-SA 4.0 31935,2,,31919,10/5/2021 10:26,,13,,"

This is a question of time series forecasting, since your numbers form a sequence. You may want to take a look at the "forecasting" tag at CrossValidated.

If you have only 700 data points, ML/AI methods will likely not be very useful. Whatever you do, I would recommend you benchmark your chosen method against very simple approaches, like the overall mean, or the last observation (a "random walk forecast"), or a simple Exponential Smoothing method. These very simple benchmarks can often be surprisingly hard to beat, and they are trivially easy to set up.
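
For instance, a sketch of such benchmarks with statsmodels (the random integers just stand in for your series; with real data you would also use a proper backtest):

```python
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

y = np.random.randint(-3, 4, size=700).astype(float)   # stand-in for the mood series
train, test = y[:-50], y[-50:]

benchmarks = {
    "overall mean": np.full(len(test), train.mean()),
    "last value (random walk)": np.full(len(test), train[-1]),
    "simple exp. smoothing": np.full(len(test),
        SimpleExpSmoothing(train).fit().forecast(1)[0]),
}
for name, pred in benchmarks.items():
    print(name, np.mean((pred - test) ** 2))   # MSE of each simple benchmark
```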

Your next step should be to include domain knowledge, as Sanyou recommends. This can be as simple as observing that bipolar mood swings follow a day/night cycle and modeling this seasonality, e.g., in a seasonal Exponential Smoothing method. (I'm not saying this disorder does exhibit this kind of seasonality, only that if it does, this can easily be modeled.) Or model any other kinds of drivers you know.

In my experience, understanding your data and your context always beats building more fancy models, or collecting more data.

As free time series forecasting textbooks, I very much recommend Forecasting: Principles and Practice (2nd ed.) by Athanasopoulos & Hyndman and Forecasting: Principles and Practice (3rd ed.) by Athanasopoulos & Hyndman.

",28428,,,,,10/5/2021 10:26,,,,0,,,,CC BY-SA 4.0 31936,1,31946,,10/5/2021 12:55,,0,37,"

I was designing an Artificial Neural Network a while back, but hit a bump when I got to the backpropagation. I was having trouble making the script choose whether to add or subtract from the weights, when I had a thought.

Does the ANN's training data include the proper output of all the neurons, or just the input and output layers? If it's the latter, could somebody please explain how backpropagation works in a simple way?

",42652,,2444,,10/5/2021 17:15,10/6/2021 12:43,Does the ANN's training data include the proper output for every neuron?,,1,1,,,,CC BY-SA 4.0 31937,2,,29998,10/5/2021 14:04,,2,,"

Typical metrics used with segmentation problems are Recall, Precision and the F1 Score (similar or the same as the Dice score depending on the definition used). These can be evaluated per class or for all classes together, commonly referred to as micro and macro averages.

Taking it further, you may wish to have a metric more robust to changes in the threshold. Here the Area under the Curve (AUC) metric is commonly used.

For a more sophisticated analysis, you may also be interested in perceptual losses. These quantify how similar an image looks as perceived by people. This is particularly useful if, say, the shape of the prediction is important but small shifts or scaling do not matter. Have a look at SSIM and LPIPS losses for more information on these.

TorchMetrics may be a good place to look for implementations and available metrics.

",42911,,,,,10/5/2021 14:04,,,,0,,,,CC BY-SA 4.0 31942,1,,,10/6/2021 6:26,,2,18,"

I'm trying to build an entity matching model. There are 2 kinds of features - binary (0/1) and text features. Initially I made a deep learning model that uses character level embeddings of some of the text features, word level embeddings of the other text features and also uses the binary features as inputs.

The output is through a softmax for 3 classes, so basically a $n\times 3$ array (where $n$ is the input data length).

I've done three splits - train, val and test, and for training the DL model through Keras, I've specified train as the training split and val as the validation split. I measured the performance on the test split to get DL model metrics. The softmax outputs for all three splits were obtained using model_DL.predict.

Next, I used a Random Forest model as a second stage. Inputs: all the binary features PLUS the softmax outputs as inputs. e.g. I took the train split, removed the text features and added in the columns of the predicted array as separate features. To be even more specific, if predtrain was obtained by using model_DL.predict on the train split, then the additional features were added using train['class1prob'] = predtrain[:,0], train['class2prob'] = predtrain[:,1], train['class3prob'] = predtrain[:,2].

I did the same for the test and val splits. Now I trained the RF on the augmented train split and measured its performance on the val and test splits. The F-score for the test and val splits was around 0.85, 0.74, 0.73 for the 3 classes respectively (i.e. performance was similar on both splits).

BUT for the train split the predictions were near perfect - 0.98, 0.99, 0.98 F-scores for the three classes. My intuition is that overfitting of the 2nd stage RF is understandable for train, since the softmax outputs were predicted using the 1st stage DL model, which in turn was already trained on train. Also, there's some data leakage for the val split since val was used as a validation set to finetune the DL model by Keras, so maybe even the val metrics aren't so reliable. But there is no leakage for the test set.

My question is: in this scenario, have I made an error, or is this blatant overfitting normal for 2-stage models? If there's a glaring error, is there any way or best practice to fix it?

",38372,,,,,10/6/2021 6:26,2-stage model overfitting,,0,0,,,,CC BY-SA 4.0 31943,1,,,10/6/2021 8:38,,1,25,"

I am currently reading the paper Towards Robust Monocular Depth Estimation, and I have 2 doubts about it.

First of all, the paper states that there are 2 types of depth annotations, dense and sparse. What are they and what are their differences?

Secondly, the paper predicts the relative depth given an input. How do we calculate the loss when a relative depth map is predicted? I know we could simply use MSE if the model predicts an absolute depth map. If I were to train a model myself to predict relative depth maps, how should I calculate the loss or other evaluation metrics?

Any help would be greatly appreciated, thanks in advance!

",50155,,,,,10/6/2021 8:38,Monocular depth estimation,,0,0,,,,CC BY-SA 4.0 31945,1,35743,,10/6/2021 12:08,,2,125,"

I'm currently working on constructing a neural network from scratch (in JavaScript). I'm in the middle of working on the backpropagation, but there's something I don't understand: how does the backprop algorithm know which weights to change or which paths to take? The way I did it, it always took all of the paths/weights and changed them all. So how does the algorithm know which paths to take, which weights to change, and whether to add or subtract X amount from said weight?

",42652,,,,,8/8/2022 19:24,How does backpropagation know which weights to change?,,1,1,,,,CC BY-SA 4.0 31946,2,,31936,10/6/2021 12:43,,2,,"

Does the ANN's training data include the proper output for every neuron?

The short answer is: no (not usually or directly).

The long answer is that you can train neural networks in different ways. There's supervised, unsupervised, reinforcement learning/training, or even other ways (e.g. online learning).

The most common way of training neural networks is probably in a supervised (and offline) way. In this case, you typically have a dataset of input-output pairs of the form $D = \{(x_1, y_1), \dots, (x_N, y_N) \}$, where we assume that there's an unknown function $f$ such that $f(x_i) = y_i$, so $x_i$ and $y_i$ are the input and corresponding output of $f$. In this case, we want our neural network, which we can denote by $g_{\theta}$ (where $\theta$ are its weights/parameters), to approximate $f$, i.e. $g_\theta \approx f$, for every input and output of $f$. Unfortunately, $D$ does not usually contain all input-output pairs of $f$ (if we had access to all input-output pairs, we wouldn't even need machine learning), so that's why we can only approximate $f$ with our neural network $g_\theta$ (there's also the aspect that $D$ could be noisy, but you can ignore this for now).

So, in this supervised learning context, $y_i$ (also known as the labels or targets) are the supervisory signals to teach the neural network what it's supposed to produce for $x_i$. So, let's say the neural network produces $\hat{y}_i$ rather than $y_i$ when given $x_i$, i.e. $g_\theta(x_i) = \hat{y}_i$. In that case, we have an "error" of $\hat{y}_i - y_i$, but you could also compute the error in forms other than the difference (depending on the nature of the targets/labels), for example, $|\hat{y}_i - y_i|$ or $(\hat{y}_i - y_i)^2$.

So, once we know the error, we should change the weights of the neural network, i.e. $\theta$, so that the error is smaller the next time we feed $x_i$ as input to $g_\theta$. The way we usually do that is with back-propagation.
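
To make this concrete with the simplest possible example (a single linear "neuron" $\hat{y} = w x$ and a squared error; this is just an illustrative toy, not the general algorithm):

```python
# One linear neuron y_hat = w * x, squared error, one gradient step
w = 0.0
x, y = 2.0, 6.0            # one training pair from D
lr = 0.1

y_hat = w * x              # forward pass
error = y_hat - y          # how far off the prediction is
grad_w = 2 * error * x     # d/dw (y_hat - y)^2, the quantity backprop computes for every weight
w -= lr * grad_w           # move the weight so the error shrinks next time

print(w)                   # 2.4, closer to the ideal w = 3
```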

You can only understand back-propagation if you understand derivatives and, in particular, how to take derivatives of functions with respect to multiple parameters. Essentially, back-propagation is really just an automatic way of computing derivatives. So, to understand it, you will first need to study a little bit of calculus. This answer is already quite long, so I will not provide more details here, but there are many videos online that explain (even just intuitively) how back-propagation works, so you should watch them. If you're already familiar with derivatives, this article may be useful (I read it when I was first learning about the topic, but I already had a good knowledge of basic calculus before doing that).

",2444,,,,,10/6/2021 12:43,,,,0,,,,CC BY-SA 4.0 31951,1,31957,,10/6/2021 22:53,,2,541,"

I am trying to use reinforcement learning to solve a task and compare its performance to humans.

The task is to find a single target in a fixed number of locations. At each step, the agent will pick one location, and check whether it contains the target. If the target is at this location, the agent will get a $+10$ reward and the trial ends; otherwise, the agent will get a hint at where the target is (with some stochastic noise), get a $-0.5$ reward, and it needs to pick another location in the next step. The trial will terminate if the agent cannot find the target within 40 steps (enough for humans). The goal is to solve the task as quickly and accurately as possible.

I am now trying to solve this problem by Deep Q-Network with prioritized experience replay. With the discount factor $\gamma=0.5$, the agent can learn quickly and solve the task with an accuracy close to 1.

My current questions are:

  1. The accuracy level is already very high, but how to motivate the agent to find the target as quickly as possible?

  2. What's the effect of $\gamma$ on the agent's task solving speed?

I am considering $\gamma$ because it relates to the time horizon of the agent's policy, but I now have two opposing ideas:

  1. With $\gamma \rightarrow 0$, the agent is trying to maximize the immediate reward. Since the agent will only receive a positive reward when it finds the target, $\gamma \rightarrow 0$ motivates the agent to find the target in the immediate future, which means to solve the task quickly.

  2. With $\gamma \rightarrow 1$, the agent is trying to maximize the discounted sum of reward in the long term. This means to reduce the negative rewards as much as possible, which also means to solve the task quickly.

Which one is correct?

I have tried training the network with $\gamma=0.1, 0.5, 0.9, 0.99$, but the network can only learn with $\gamma=0.1, 0.5$.

",37482,,2444,,10/8/2021 12:58,10/8/2021 12:58,"How to encourage the reinforcement-learning agent to reach the goal as quickly as possible, and what's the effect of discount factor?",,1,1,,,,CC BY-SA 4.0 31952,1,,,10/7/2021 6:47,,1,131,"

Consider the following excerpt from the abstract of the research paper titled Squeeze-and-Excitation networks by Jie Hu et al.

Convolutional neural networks are built upon the convolution operation, which extracts informative features by fusing spatial and channel-wise information together within local receptive fields. In order to boost the representational power of a network, several recent approaches have shown the benefit of enhancing spatial encoding.

The authors used the term "spatial encoding" and the excerpt implies that enhancing spatial encoding has the benefit of increasing the representational power of a convolutional neural network.

What is meant by the term "spatial encoding" in this context related to the convolutional neural networks?

",18758,,2444,,10/8/2021 0:35,11/23/2022 23:00,"What is meant by ""spatial encoding"" in the context of convolutional neural networks?",,1,0,,,,CC BY-SA 4.0 31954,1,,,10/7/2021 7:27,,1,31,"

How do you decide that you have tested enough hyper-parameter combinations for a specific neural network architecture to discard it and move on to a new model?

Do you have a structured (generic) approach? In practice, what gives you the necessary performance (e.g. >= 80% accuracy) the fastest (w.r.t. to your work-hours) assuming there is no SOTA that easily exceeds your requirements and extensive hyper-parameter optimization is infeasible?

",50178,,2444,,10/8/2021 0:32,10/8/2021 0:32,How do you decide that you have tested enough hyper-parameter combinations for a specific neural network architecture?,,0,1,,,,CC BY-SA 4.0 31957,2,,31951,10/7/2021 12:08,,0,,"

The accuracy level is already very high, but how to motivate the agent to find the target as quickly as possible?

You already are, in two different ways:

  • A penalty (negative reward) for each time step taken.

  • A positive reward for completing a task, plus discounting.

Both of these choices are sufficient to ensure that action values will be maximised by taking the most direct route to complete a task, minimising expected time steps from any starting point.

In theory you could completely lose one of the two approaches, and the reward system would still work.

For DQN I would lose the positive reward at the end and use a relatively high discount factor, e.g. $\gamma = 0.99$ - which is only required for numerical stability, and not really part of the problem definition. If your goal is to minimise number of time steps, then a simple count of number of remaining time steps is already a good cost function, and negating it to make a reinforcement learning return is close to ideal. It often works well with Q learning too because it will explore away from repetition, even if at the start of training it cannot reach the target state.

What's the effect of $\gamma$ on the agent's task solving speed?

This can be complex, but in a scenario with the only positive reward at the end and a discount factor, the expected reward will depend on the expected number of future time steps through a geometric series. If, in state $s_1$, action choice $a_1$ would lead to the end goal in 3 time steps and action choice $a_2$ would lead to the end goal in 4 time steps, with the only reward $r_T$ for reaching the goal state, then $q(s_1, a_1) = \gamma^3 r_T \gt q(s_1, a_2) = \gamma^4 r_T$, making $a_1$ the obvious choice.
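
Plugging in numbers (say $\gamma = 0.99$ and $r_T = 10$) makes the preference obvious:

```python
gamma, r_T = 0.99, 10.0
print(gamma ** 3 * r_T)   # ~9.703 for the 3-step route
print(gamma ** 4 * r_T)   # ~9.606 for the 4-step route, so the shorter route wins
```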

A stochastic environment may muddy this a little, but in general the agent is going to prefer the lower number of timesteps to get to the goal. If the distribution of the number of timesteps can be very different, then different $\gamma$ values may cause the agent to make slightly different gambles in order to complete faster. This is the reason why I suggest dropping the positive reward at the end, if your goal is to minimise the expected number of timesteps to complete. That is because, technically, a change to $\gamma$ is a change to the problem definition - muddied slightly by usually needing $\gamma \lt 1$ when training a DQN to improve stability.

Which one is correct? I have tried training the network with $\gamma=0.1, 0.5, 0.9, 0.99$, but the network can only learn with $\gamma=0.1, 0.5$.

Both are correct, although you do have to be concerned about $\gamma$'s dual role as problem definition parameter and solution hyper-parameter when working with approximators.

I think (but without looking at your code) that you have a problem with your neural network hyperparameters. A discount factor of $0.9$ or $0.99$ should work with DQN, and is a very common choice. I suggest try a few different architecture choices, and also DQN hyperparameters such as experience replay size, time between copying learning network to target network etc.

Another thing that occurs to me, and may be specific to the environment that you are working with: If you are comparing the performance of your agent with a human, then a human may be able to apply their memory of previous attempts to this problem, and look for trends between time steps. If your state vector does not capture or summarise the history of guesses so far, and such "trend" information is actually useful for your problem, then you may need to add some kind of memory. You could modify the state to summarise attempts so far, or you could use an agent with memory, such as one based on a RNN.

Whether an agent with memory would help depends on the nature of the different "locations" that are being guessed at, and the hints. A very simple example of where memory would make a huge difference is game where the locations are all the numbers between 1 and 100, and the agent is told "higher" or "lower" when it makes an incorrect guess. Storing (or learning to store) the bounds implied by guesses so far would be critical to good performance of the agent.

",1847,,1847,,10/7/2021 15:05,10/7/2021 15:05,,,,2,,,,CC BY-SA 4.0 31959,2,,1288,10/7/2021 14:42,,7,,"

Whether Minsky knew or not, it was definitely known to Rosenblatt, as he published those results in his really pioneering report - Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, published in 1961.

A large majority of academic and industry experts are simply unaware of the "depth" of Rosenblatt's publication on perceptrons, where he not only proved that 3-layer perceptrons (which he called elementary) are universal (check theorem 1 in section 5.2), but where he also provided results on convergence (check theorem 4 in section 5.5) and a statistical mechanics analysis of their generalization capabilities (check chapter 6 for the foundational theory, and figure 13 and 14 for an application of it, following the analysis in section 7.1.2).

It is simply unfortunate that Rosenblatt died in an accident soon after Minsky and Papert's not-so-pioneering 1969 book was published. I believe its misleading influence has set AI research back by several decades.

If only Rosenblatt had lived longer and made his presence stronger in academia, we would not be handing out Turing awards in AI to those who, on critical scrutiny, are objectively undeserving of them.

",50187,,2444,,10/11/2021 8:47,10/11/2021 8:47,,,,3,,,,CC BY-SA 4.0 31962,1,,,10/7/2021 20:04,,2,100,"

I'm starting a new RL project. I'm familiar with Deep Q-Learning because of an old project where I used it, but I'm not sure I chose correctly back then.

Why should or shouldn't I choose DQN, or any other RL algorithm/method for a problem? By which criteria should I judge an RL algorithm? Is there any set of guidelines to help you choose a specific RL algorithm for a problem?

I did some research, but wasn't satisfied.

",45475,,2444,,10/13/2021 0:07,10/13/2021 0:07,How should I choose a reinforcement learning algorithm?,,0,0,,10/13/2021 2:17,,CC BY-SA 4.0 31967,1,31980,,10/8/2021 6:48,,1,37,"

I have $N$ documents, which share some common content, and each also has its own unique content.

Say I have $3$ legal documents related to the same person. Document $A$ is about land law, document $B$ is about company law and document $C$ is about marriage law. How can I extract the land, company and marriage content from each document respectively and skip the common personal information?

It sounds like text-summarization but with a very different nature. Any idea is welcome.

Edit: In my situation, $N$ varies and the nature of the unique content is unknown.

",50208,,50208,,10/10/2021 3:49,10/10/2021 3:49,"Among N documents, how to summarize the most unique content in each document?",,2,0,,,,CC BY-SA 4.0 31968,1,32014,,10/8/2021 7:16,,3,178,"

I need to build a hand detector that recognizes the chord played by a hand on a guitar.

I read this article Static Hand Gesture Recognition using Convolutional Neural Network with Data Augmentation that looks like what I need (hand gesture recognition).

I think my task is (from my point of view) a little more difficult than that in the paper, because I think it is more difficult to distinguish between two chords than between a punch and a palm.

What I don't understand clearly is how to choose the best parameters for this more complex task: is it better to have more or fewer convolutional layers? A higher or lower number of pooling layers? Max or average pooling?

The input will be more or less like this one:

There will be a first net (MobileNetV2 trained on EgoHands) that will find the bounding box, crop the image, and then pass the saturated blending between the original image and the Frei&Chen edges to the second net (unfortunately, I don't have a processed picture yet; I will post an example as soon as I get it).

",48858,,2444,,10/11/2021 10:18,10/11/2021 23:16,How do I choose the hyper-parameters for a model to detect different guitar chords?,,1,1,,,,CC BY-SA 4.0 31969,2,,17630,10/8/2021 7:49,,1,,"

It consists of organizing training into a series of learning problems, each relying on small "support" and "query" sets to mimic the few-shot circumstances encountered during evaluation (an episode is a single task).

",45599,,45599,,10/8/2021 23:12,10/8/2021 23:12,,,,0,,,,CC BY-SA 4.0 31970,1,,,10/8/2021 8:34,,2,63,"

It is known that machine learning algorithms expect feature engineering as an initial step. Now, consider the following paragraph, taken from 1.1 The deep learning revolution of the textbook named Deep learning with PyTorch by Eli Stevens, Luca Antiga, Thomas Viehmann, regarding the role of feature engineering in deep learning

Deep learning, on the other hand, deals with finding such representations automatically, from raw data, in order to successfully perform a task. In the ones versus zeros example, filters would be refined during training by iteratively looking at pairs of examples and target labels. This is not to say that feature engineering has no place with deep learning; we often need to inject some form of prior knowledge in a learning system. However, the ability of a neural network to ingest data and extract useful representations on the basis of examples is what makes deep learning so powerful. The focus of deep learning practitioners is not so much on handcrafting those representations, but on operating on a mathematical entity so that it discovers representations from the training data autonomously. Often, these automatically created features are better than those that are handcrafted! As with many disruptive technologies, this fact has led to a change in perspective.

The paragraph clearly says that we often need to inject some form of prior knowledge into the learning system. What would be a concrete example of such prior knowledge used in deep learning systems?

",18758,,2444,,10/8/2021 12:20,10/8/2021 12:20,What can be an example for the prior knowledge used in Deep Learning systems?,,1,1,,,,CC BY-SA 4.0 31971,2,,31967,10/8/2021 8:41,,0,,"

I think the best task for your purpose is name entity recognition (NER) rather than text summarization.

The logic is the following: if the three classes of documents are truly specific, there would be specific entities for each of them, but since the documents are linked by information about a single individual, all entities related to that individual and not to the specific domain would be shared.

So the most obvious shared entity in all documents, the name of the individual, could be identified and then pruned in all document, same holds for every other shared entity (can't came up with more clever examples right now).

If you work with python, SpaCy offer pretrained models that do a great job already also for NER, and in several languages as well. But you might consider to train your own model as well, maybe retrain on top of spacy models, cause for these type of tasks, the most information you can provide about which entities belongs to which class, the best the performances, and unfortunately, generic use models can account for many entities, but they can't associate them directly to specific domains of interest.

",34098,,,,,10/8/2021 8:41,,,,2,,,,CC BY-SA 4.0 31972,1,,,10/8/2021 8:46,,0,73,"

Features in machine learning are the attributes of the elements of a data set. They are considered as random variables in probability.

Consider the following excerpt from 1.1: The deep learning revolution of the textbook named Deep learning with PyTorch by Eli Stevens, Luca Antiga, Thomas Viehmann,

On the right, with deep learning, the raw data is fed to an algorithm that extracts hierarchical features automatically, guided by the optimization of its own performance on the task; the results will be as good as the ability of the practitioner to drive the algorithm toward its goal.

When can we call a feature hierarchical? Does it refer to a random variable that is a (function on) derived from some other random variables?

",18758,,2444,,1/15/2022 0:33,1/15/2022 0:33,"When can we call a feature ""hierarchical""?",,1,6,,,,CC BY-SA 4.0 31973,2,,31970,10/8/2021 9:05,,1,,"

I would distinguish at least 2 cases when it comes to a generic expression like prior knowledge:

  • generic extra information provided to a model, really close to (if not the same as) feature engineering.
  • literal prior probability distributions used to initialize or guide a model during training.

For the first case, there are plenty of examples we could provide. The most intuitive is maybe the use of masks in computer vision. Let's say we want to clean an image (e.g. haze removal) as a pre-processing step for a self-driving car system. Then we could train a model and feed it not only the image captured by the camera, but also a depth mask estimated by another model. In this case, the other model works as a prior distribution, since our model is not learning it; it just leverages that extra information that comes with the image.

For the second case, there are specific classes of models that learn, and sometimes require, prior distributions as an input; the best known to me are Bayesian Neural Networks.

Why bother providing a prior for these models? Well, there are at least 2 reasons. Sometimes we have information about the system we're trying to describe, so we can make the training more efficient: for example, we might be trying to fit a coin-toss model, but we know the coin is not fair and that the resulting probabilities returned by the model should not be 50/50. The second reason is that sometimes we also want to train models that do not return only raw probabilities, but also estimate the uncertainty of those probabilities. To do that, the model learns a posterior probability over the data, and it does so by updating an initial prior. Note that the prior for this class of models can also be randomly initialized.
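
To make the coin-toss example concrete, here is a minimal sketch (all numbers are made up) of a literal prior being updated into a posterior with a conjugate Beta distribution:

from scipy.stats import beta

# Prior belief: the coin is biased towards heads (Beta(8, 2) has mean 0.8)
a_prior, b_prior = 8, 2

# Observed data: 10 tosses with 6 heads and 4 tails
heads, tails = 6, 4

# Conjugate update: the posterior is again a Beta distribution
a_post, b_post = a_prior + heads, b_prior + tails
posterior = beta(a_post, b_post)

print("posterior mean:", posterior.mean())                 # point estimate of P(heads)
print("95% credible interval:", posterior.interval(0.95))  # uncertainty around that estimate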

",34098,,34098,,10/8/2021 10:25,10/8/2021 10:25,,,,2,,,,CC BY-SA 4.0 31976,2,,31895,10/8/2021 13:51,,1,,"

The dimensionality used to discuss convolutional layers in CNNs is based on the dimensionality of the input without considering channels.

  • 1D CNNs might process raw audio sources (mono or stereo), text sequences, IR spectrometry from a single sample point
  • 2D CNNs can process photographic images (regardless of colour/depth etc information), audio spectrograms, grid-based board games
  • 3D CNNs can process voxels from Minecraft, image sequences from videos etc

It is often possible to perform signal processing that changes dimensions of signal sources. Whether that adds "channels" or adds a dimension can be a matter of convenience to fit a particular approach. In terms of defining a n-dimensional array, then the addition of channels is just another dimension. In terms of considering signal processing performed in CNNs, we care about the distinction between channels and the rest of the space that the signal exists in.

One way to decide whether something is considered a channel or a CNN layer dimension is whether there is an ordering or metric that consistently separates measurements over that dimension. If a metric such as space, time or frequency applies, then that dimension can be considered part of the "core" dimensionality that defines the problem, whilst a more arbitrary set of features (e.g. each entry in the vector embedding of a word) is more channel-like.

As standard CNN design involves summing over all input channels to create each output feature/channel, which is mathematically the same as increasing the convolution dimension (when the kernel size in that dimension matches to the number of channels), then in practice the convolution operation implemented in a CNN layer of a particular dimensionality can be one dimension size higher. E.g. a layer class labelled "Conv1D" will perform a 2D convolution operation, with the added dimension size matching exactly to the number of input channels. However, conceptually it makes sense to view this as a sum of lower-dimension convolutions, because of the need to exactly match the dimension size. This extra dimension is seen as a convenience for calculation, and not part of the definition.
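
A small PyTorch check makes this concrete (the layer sizes are arbitrary): the weight tensor of a Conv1d layer has an extra axis of exactly in_channels, and the convolution only slides along the length axis:

import torch
from torch import nn

# A "1D" convolution over a 3-channel signal
layer = nn.Conv1d(in_channels=3, out_channels=8, kernel_size=5)
print(layer.weight.shape)   # torch.Size([8, 3, 5]) -> each output channel sums over all 3 input channels

x = torch.randn(1, 3, 100)  # (batch, channels, length)
print(layer(x).shape)       # torch.Size([1, 8, 96]) -> sliding happens only along the length dimension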

",1847,,1847,,10/8/2021 14:23,10/8/2021 14:23,,,,0,,,,CC BY-SA 4.0 31977,1,32035,,10/8/2021 13:59,,3,218,"

I'm working on a VAE model to produce synthetic data of X-Ray diffraction spectrums.

I try to figure out how I can measure the quality of the spectrums. The goal would be to produce synthetic data which is similar to the training data but also different from the training data. The spectrums should keep their characteristics, but should be different in terms of noise and intensity.

I trained models which can produce those types of spectrums (because I checked some of them visually), but I don't know how to quantify the difference/similarity to the original data (1) and the difference between the produced synthetic spectrums within one dataset (2).

Are there any methods to quantify these points?

",50214,,2444,,10/9/2021 23:42,10/12/2021 19:44,How to determine the quality of synthetic data?,,1,0,,,,CC BY-SA 4.0 31980,2,,31967,10/8/2021 18:53,,0,,"

It might be worthwhile to try TF-IDF and see if that works for you.

Score each term in each document proportional to how often it occurs in that document, but inversely proportional to how often it occurs across multiple documents. Then look at the terms that have the highest scores for each document. You can use scikit-learn's TF-IDF Vectorizer to help you with this, if you are using Python.
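
A rough sketch of that with scikit-learn (the document strings are placeholders for your three documents):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "text of the first document ...",
    "text of the second document ...",
    "text of the third document ...",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(docs)                 # shape: (n_docs, n_terms)
terms = np.array(vectorizer.get_feature_names_out())

# Print the 10 highest-scoring terms per document; these should be the domain-specific words
for i in range(tfidf.shape[0]):
    scores = tfidf[i].toarray().ravel()
    print(f"document {i}:", terms[scores.argsort()[::-1][:10]])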

Presumably, words that are highly specific and relevant to each of your three documents will stand out, and words that relate to personal information (as well as generic legal terms and non-specific English words) will be common to multiple documents and get filtered out.

Note: This will get you the specific words that are particular to each document. If the type of "content" you are seeking to extract goes beyond the word level, then you might have to take a different approach. Perhaps one way is to use the words obtained from TF-IDF to highlight the places in each document where the desired content might be found.

",50223,,,,,10/8/2021 18:53,,,,1,,,,CC BY-SA 4.0 31982,2,,31583,10/9/2021 6:37,,1,,"

On creating custom environments:

... always normalize your observation space when you can, i.e., when you know the boundaries (From stable-baselines)

You could normalize them as part of the environment's state space or before passing them as input to the policy. Depending on the agent's algorithm implementation, what works for you may vary.

(See this answer from a related question)

",40671,,,,,10/9/2021 6:37,,,,0,,,,CC BY-SA 4.0 31987,2,,31972,10/9/2021 19:11,,1,,"

You can find a brief explanation of hierarchical feature selection in the following excerpt from the paper "An Empirical Evaluation of Hierarchical Feature Selection Methods for Classification in Bioinformatics Datasets with Gene Ontology-based Features":

Hierarchical feature selection is a new research area in machine learning/data mining, which consists of performing feature selection by exploiting dependency relationships among hierarchically structured features.

Therefore, hierarchical features correspond to the dependency structure between features.

",4446,,,,,10/9/2021 19:11,,,,0,,,,CC BY-SA 4.0 31990,1,,,10/9/2021 20:29,,1,153,"

I am trying to model the following problem as a Markov decision process.

In a steel melting shop of a steel plant, iron pipes are used. These pipes generate rust over time. Adding an anti-rusting solution can delay the rusting process. If there is too much rust, we have to mechanically clean the pipe.

I have categorized the rusting states as StateA, StateB, StateC, StateD, with increasing rusting from A to D. StateA is an absolutely clean state with almost no rust.

     StateA -> StateB -> StateC -> StateD
     ∆  ∆ ∆       |        |         |           
     |  | |       |        |         |
   Mnt Mnt Mnt    |        |         |
     |  | |_clean_|        |         |
     |  |_______clean______|         |
     |_______________________________|     
                          clean

We can take 3 possible actions:

  • No Maintenance
  • Clean
  • Adding Anti Rusting Agent

The transition probabilities are mentioned below. The state degrades from StateA towards StateD; the degree of rusting is captured by the transition probabilities. Adding the anti-rusting agent decreases the probability of degradation of the state.

The transition probability from StateA to StateB is 0.6 with No Maintenance.

The transition probability from StateA to StateB is 0.5 with adding an anti-rusting agent.

The transition probability from StateB to StateC is 0.7 with No Maintenance.

The transition probability from StateB to StateC is 0.6 with adding an anti-rusting agent.

The transition probability from StateC to StateD is 0.8 with No Maintenance.

The transition probability from StateC to StateD is 0.7 with an anti-rusting agent.

The Clean action will move any state to StateA with probability 1.

Rewards: StateA is 0.6, StateB is 0.5, StateC is 0.4, StateD is 0.3. The Clean action leads to a Maintenance (Mnt) state which has a reward of 0.1. The Maintenance state leads to an increase in productivity after cleaning, which is good, but there is a shutdown during maintenance, so there is a loss of production; hence the reward is low.

I am new to MDPs. It would be helpful if anyone could help me determine, through an MDP and a Python implementation, when we should take the Clean action. Should we clean at StateB, at StateC, or at StateD?

",50242,,50242,,10/13/2021 7:53,10/13/2021 7:53,Implementation of MDP in python to determine when to take action clean,,0,10,,,,CC BY-SA 4.0 31991,1,,,10/10/2021 2:15,,2,885,"

Consider the following description regarding gradient clipping in PyTorch

torch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2.0, error_if_nonfinite=False)

Clips gradient norm of an iterable of parameters.

The norm is computed over all gradients together as if they were concatenated into a single vector. Gradients are modified in-place.

Let the weights and gradients, for loss function $L$, of the model, be given as below

\begin{align} w &= [w_1, w_2, w_3, \cdots, w_n] \\ \triangledown &= [\triangledown_1, \triangledown_2, \triangledown_3, \cdots, \triangledown_n] \text{, where } \triangledown_i = \dfrac{\partial L}{\partial w_i} \text{ and } 1 \le i \le n \end{align}

From the description, we need to compute gradient norm, i.e. $||\triangledown||$.

How to proceed after the step of finding the gradient norm? What is meant by clipping the gradient norm mathematically?

",18758,,2444,,10/10/2021 8:09,11/1/2021 11:14,What exactly happens in gradient clipping by norm?,,1,0,,,,CC BY-SA 4.0 31992,2,,31991,10/10/2021 4:58,,2,,"

Gradient clipping is a technique that tackles exploding gradients. The idea of gradient clipping is very simple: If the gradient gets too large, we rescale it to keep it small. More precisely,

$$ \text{if } \Vert \mathbf{g} \Vert \geq c, \text{then } \mathbf{g} \leftarrow c \frac{\mathbf{g}}{\Vert \mathbf{g} \Vert} $$

where $c$ is a hyperparameter, $\mathbf{g}$ is the gradient, and $\Vert \mathbf{g} \Vert$ is the norm of $\mathbf{g}$.

Since $\frac{\mathbf{g}}{\Vert \mathbf{g} \Vert}$ is a unit vector, after rescaling the new $\mathbf{g}$ will have norm $c$.

Note that if $\Vert \mathbf{g} \Vert < c$, then we don't need to do anything.
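
A minimal NumPy sketch of this rule (the threshold is arbitrary):

import numpy as np

def clip_by_norm(g, c):
    """Rescale the gradient vector g so that its L2 norm is at most c."""
    norm = np.linalg.norm(g)
    if norm >= c:
        g = c * g / norm
    return g

g = np.array([3.0, 4.0])      # norm is 5
print(clip_by_norm(g, 1.0))   # [0.6 0.8] -> norm is now exactly 1
print(clip_by_norm(g, 10.0))  # unchanged, the norm is already below the threshold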

Check this article for more information

",48391,,48391,,11/1/2021 11:14,11/1/2021 11:14,,,,1,,,,CC BY-SA 4.0 31994,1,,,10/10/2021 8:40,,1,129,"

I am looking for the best open source Python repo for facial recognition, preferably one that uses a TensorFlow backend. I know you can train a network on images to recognize people; YOLO, for example, can be used if trained on faces to name the person.

But I wonder if there is any code where you can add new faces to the database without training, or with minimal training. As new faces are added, I don't want to retrain the network repeatedly. Also, the fewer face pictures needed, the better.

If code is not available, any guide or research paper would also be helpful. For example, what approach could I take to make an app for a person who has difficulty remembering people's names? The app would take a small video or a few photos with a name, and would be able to tell the person's name in the future. The neural network should not have to be retrained when adding a new face to the database, if possible.

",50252,,,,,10/10/2021 11:52,What is the best open source python repo for facial recognition?,,1,0,,10/11/2021 10:11,,CC BY-SA 4.0 31995,2,,31994,10/10/2021 11:52,,0,,"

It seems your problem is more related to Face Identification than Face Recognition.

I understand you are looking for an implementation using an NN-based approach, but if you're open to giving other approaches a try, you could consider using Eigenfaces, which is based on PCA.

For that, you can find some references and code implementations.

Datasets you can use for testing face identification:

  • AT&T Face Database
  • Extended Yale Face Database B
  • BioID Face Database

References you can look at for additional implementation details: https://www.researchgate.net/publication/333367462_Improving_face_recognition_of_artificial_social_companions_for_smart_working_and_living_environments

A forked repository on GitHub for this approach: https://github.com/joaoquintas/facerec

Original repository for reference https://github.com/bytefish/facerec

",12352,,,,,10/10/2021 11:52,,,,0,,,,CC BY-SA 4.0 31996,1,,,10/10/2021 13:20,,2,26,"

We can say that matrix factorization of a matrix $R$, in general, is finding two matrices $P$ and $Q$ such that $R \approx P.Q^{T}$, with some constraints on $P$ and $Q$. Looking at some matrix factorization algorithms on the internet, like Scikit-Learn's Non-Negative Matrix Factorization, I have come to wonder how this works for recommendation systems. Generally, with recommendation systems, we have a user-item ratings matrix, let's denote it $R$, which is really sparse, so when we look at datasets we find missing values, $NaN$. When I look at examples of using matrix factorization for recommender systems, I find that the missing values are replaced with $0$. My question is: how do we get actual predictions for the items not rated by users when the dot product $P.Q^{T}$ is supposed to converge to $R$?

I have tried with this simple matrix that I found here

R = [
     [5,3,0,1],
     [4,0,0,1],
     [1,1,0,5],
     [1,0,0,4],
     [0,1,5,4],
    ]
R = np.array(R)

The algorithm I used is Scikit-Learn's and, no matter how I change the parameters, I can't seem to find a matrix that has actual values in place of the $0$s. It always finds a really good approximation of $R$. Maybe all the hyperparameter tuning I'm doing is leading to overfitting. And let's suppose there is a combination of parameters for which we don't get $0$s and we still minimize $||R-P.Q^{T}||$ (with regard to some norm) to a decent level; how can we be sure that the predictions are accurate? I mean, there must be many different combinations of parameters that both predict different values for the $0$s and minimize $||R-P.Q^{T}||$ to a decent level.

Thank you!

",50255,,,,,10/10/2021 13:20,How matrix factorization helps with recommendations when it converges to the initial user-items matrix?,,0,0,,,,CC BY-SA 4.0 31997,1,32269,,10/10/2021 17:39,,1,44,"

I'm asking because classification problems have very concrete metrics like accuracy that are totally transparent to understand.

Whereas regression models seem to have a very large number of possible evaluation strategies and to me at least it is not clear which (if any) of them is as reliable/interpretable as accuracy is in classification problems.

Possible Candidates:

  • Regular loss (e.g. MAE): MAE is potentially quite interpretable, but again interpretation depends upon distribution statistics which vary across regression problems.
  • MAPE/Relative loss: This is interesting and is potentially quite similar to accuracy. Yet it has obvious drawbacks, like the true value being extremely small causing an explosion of loss values, and there being no incorporation of overall distribution statistics for the output values.
  • Chi-squared test: I like the idea of this but I have not seen it used at all for NN regression for some reason. I'm not sure why and I'm curious if people think it would be a good idea to use it for that.
  • (adjusted) R^2 coefficient: Another statistic that seems great in theory, but again I see it almost never being used for NNs and I'm not sure why. This has the great advantage of being a 'bounded'/'normalized' metric like accuracy, and in theory it should be just as interpretable. Why is it not used for NNs? (A small sketch computing a few of these metrics follows this list.)
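
For reference, a minimal scikit-learn sketch (with made-up numbers) computing three of the candidates above:

import numpy as np
from sklearn.metrics import mean_absolute_error, mean_absolute_percentage_error, r2_score

y_true = np.array([3.0, -0.5, 2.0, 7.0])   # dummy targets
y_pred = np.array([2.5,  0.0, 2.0, 8.0])   # dummy predictions

print("MAE :", mean_absolute_error(y_true, y_pred))
print("MAPE:", mean_absolute_percentage_error(y_true, y_pred))  # explodes when y_true is near zero
print("R^2 :", r2_score(y_true, y_pred))                        # bounded above by 1, like accuracy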
",42996,,42996,,10/10/2021 17:48,12/3/2021 21:03,Best way to measure regression accuracy?,,1,0,,,,CC BY-SA 4.0 31998,1,,,10/10/2021 20:01,,0,163,"

In the paper Salient Region Detection and Segmentation, I have a question pertaining to section 3 on the convolution-like operation being performed. I had already asked a few questions about the paper previously, for which I received an answer here. In the answer, the author (JVGD) mentions the following:

So for each pixel in the image you overlap on top $R_{1}$ and then $R_{2}$ on top of $R_{1}$, then you compute the distance $D$ for those 2 regions to get the saliency value of that pixel, then slide the $R_{1}$ and $R_{2}$ regions in a sliding window manner (which is basically telling you to implement it with convolution operation).

Regarding the above, I had the following question: If the region $R_{2}$ moves in a sliding window manner, won't the saliency map (mentioned in section 3.1) have a smaller size than the original image (like in convolution the output image is smaller)? If this is so, wouldn't it be impossible to add the saliency maps at different scales since they each have different sizes?

The following edit re-explains the question in more detail:

In the animation above, you can see a filter running across an image. For each instance of the filter, some calculations are happening between the pixels of the filter and the image. The result of each calculation becomes one pixel in the output image (denoted by "convolved feature"). Here, the output image is smaller than the input image because there are only 9 instances of the filter. From what I understood of the salient region operation, a similar process is being followed i.e. a filter runs across an image, some calculations happen, and the result of each calculation becomes one pixel in the output image (saliency map). Hence, won't the saliency map have a smaller size than the original image? Furthermore, when the filter size is 3 x 3, the output image size is 3 x 3. However, if the filter size was 5 x 3, the output image size would only be 1 x 3. Clearly, the output image size is different for different filter sizes. This makes the output images (saliency maps) impossible to add. There is clearly something I am missing / misunderstanding here, and clarity on the same would be much appreciated.

P.S. There is no indication of padding or any operation of that sort in the research paper, so I don’t want to assume anything because the calculations would then be wrong.

",,user48670,,user48670,10/19/2021 13:03,1/23/2023 8:05,"In this paper, if region $R_{2}$ moves in a sliding window manner, won't the saliency map have a smaller size than the original image?",,1,1,0,,,CC BY-SA 4.0 32002,1,,,10/11/2021 8:40,,2,36,"

In this page, it's written (emphasis mine)

If probabilities are thought to describe orderly opinions, Bayes theorem describes how the opinions should be updated in the light of new information

What is your understanding/definition of "orderly" opinion?

Maybe something like: a probability that is not arbitrarily chosen but well-founded and explainable?

",50271,,2444,,10/11/2021 10:10,11/7/2022 14:08,"What do we mean by ""orderly opinions"" in this sentence in the context of Bayes theorem?",,1,0,,,,CC BY-SA 4.0 32004,2,,32002,10/11/2021 13:54,,0,,"

That term exactly refers to the difference between two main paradigms in probability and statistics: Frequentism vs Bayesianism. You can find many texts for explaining the difference, for example [1] and [2].

By the way, we can briefly say that, in the Bayesian view, a probability describes a degree of belief (an opinion) about an unknown quantity (like a parameter of a distribution), and that belief is updated as new observations arrive. In the frequentist view, by contrast, the unknown quantity is treated as fixed, and probabilities describe long-run frequencies of the observations.

",4446,,2444,,10/13/2021 13:49,10/13/2021 13:49,,,,0,,,,CC BY-SA 4.0 32005,1,,,10/11/2021 14:39,,2,150,"

I have used a 3D CNN architecture, for detecting the presence of a particular promoter (MGMT), by using FLAIR brain scans. (64 slices per patient). The output is supposed to be binary (0/1).

I have gone through the pre-processing properly, and used stratification after splitting the "train" dataset into train and validation sets, (80-20 ratio). My model initialisation and training kernels look like this:

def get_model(width=128, height=128, depth=64):
    """Build a 3D convolutional neural network model."""

    inputs = keras.Input((width, height, depth, 1))

    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)

    x = layers.GlobalAveragePooling3D()(x)
    x = layers.Dense(units=512, activation="relu")(x)
    x = layers.Dropout(0.3)(x)

    outputs = layers.Dense(units=1, activation="sigmoid")(x)

    # Define the model.
    model = keras.Model(inputs, outputs, name="3dcnn")
    return model


# Build model.
model = get_model(width=128, height=128, depth=64)
model.summary()

Compile model:

initial_learning_rate = 0.0001
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
model.compile(
    loss="binary_crossentropy",
    optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
    metrics=["acc"],
)

# Define callbacks.
checkpoint_cb = keras.callbacks.ModelCheckpoint(
    "Brain_3d_classification.h5", save_best_only=True,monitor = 'val_acc', 
                             mode = 'max', verbose = 1
)
early_stopping_cb = keras.callbacks.EarlyStopping(monitor="val_acc", patience=20,mode = 'max', verbose = 1,
                           restore_best_weights = True)

# Train the model, doing validation at the end of each epoch
epochs = 60
model.fit(
    train_dataset,
    validation_data=valid_dataset,
    epochs=epochs,
    shuffle=True,
    verbose=2,
    callbacks=[checkpoint_cb, early_stopping_cb],
)

This is my first time ever working with a 3D CNN, and I used this keras webpage for the format: https://keras.io/examples/vision/3D_image_classification/

The (max) validation accuracy in my case was about 54%. I tried reducing the initial learning rate, and for 0.00001 I got to a max of 66.7%. For learning rates of 0.00005 and 0.00002, I got max accuracies of about 60% and 62%.

Accuracy vs epoch plots for learning rates 0.0001, 0.00005, 0.00002 and 0.00001:

It does seem like reducing the initial learning rate has a positive effect on accuracy, although the accuracy is still very low.

What other parameters can I tune to expect a better accuracy? And is it okay to just keep reducing the initial learning rate until we achieve a targeted accuracy?

I know this is a rather broad question, but I am quite confused as to how we should approach increasing the accuracy in the case of CNNs (especially 3D ones), where there just seems to be a lot going on. Do I change something in my initialisations? Add more layers? Or change the parameters? Do I decrease or increase them? With so many things going on, I don't think trying every combination and repeatedly rerunning the training process is an efficient idea...

Full notebook (including pre-processing steps): https://www.kaggle.com/shivamee682003/3d-image-preprocessing-17cd03/edit

",49964,,,,,10/12/2021 9:34,Improving validation losses and accuracy for 3D CNN,<3d-convolution>,2,1,,,,CC BY-SA 4.0 32006,2,,32005,10/11/2021 15:21,,1,,"

What is the No Information Rate (NIR)? I.e. what are the percentages of positive and negative labels? Have you looked at the predictions of your model? If it's all 0's or all 1's then it probably learned nothing, other than predicting the majority class.
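
A quick sanity check along these lines (a minimal sketch with made-up labels) is to compare the model's accuracy against the NIR, i.e. the frequency of the majority class:

import numpy as np

y_true = np.array([0, 0, 0, 1, 0, 1, 0, 0])  # placeholder labels
y_pred = np.zeros_like(y_true)               # a model that always predicts the majority class

nir = np.bincount(y_true).max() / len(y_true)    # No Information Rate
acc = (y_true == y_pred).mean()
print(f"NIR = {nir:.2f}, accuracy = {acc:.2f}")  # equal here: the model learned nothing useful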

When it comes to architectural choices and hyperparameters, especially if you start working with NNs, then Andrej Karpathy's blog post called A Recipe for Training Neural Networks is a really good starting point. It gives a good reference on how to approach things in the beginning when you don't have much intuition. Simply reducing the learning rate will not help much if your model is way too small. You may also find it useful to add ResNet-like skip connections to improve performance for very deep models (i.e. many layers).

",37120,,,,,10/11/2021 15:21,,,,0,,,,CC BY-SA 4.0 32010,1,32018,,10/11/2021 19:36,,5,3416,"

Both value iteration and policy iteration are General Policy Iteration (GPI) algorithms. However, they differ in the mechanics of their updates. Policy iteration seeks to first find a complete value function for a policy, then derive the Q function from this and improve the policy greedily from this Q. Meanwhile, value iteration uses a truncated V function to obtain Q updates, only returning the policy once V has converged.

What are the inherent advantages of using one over the other in a practical setting?

",22424,,2444,,10/11/2021 22:20,12/26/2021 18:41,When to use Value Iteration vs. Policy Iteration,,2,1,,,,CC BY-SA 4.0 32011,1,32019,,10/11/2021 22:31,,1,37,"

Suppose I have three batches of feature maps, each of size $180 \times 100 \times 100$. I want to concatenate all these feature maps channel-wise, and then resize them into a single feature map. The batch size is equal to 10.

Consider the following code in PyTorch

import torch
from torch import nn

x1 = torch.randn(10, 180, 100, 100)
x2 = torch.randn(10, 180, 100, 100)
x3 = torch.randn(10, 180, 100, 100)


pool1 = nn.AvgPool3d(kernel_size = (361, 1, 1), stride= 1)
pool2 = nn.AvgPool3d(kernel_size = 1, stride= (3, 1, 1))

final_1_x = pool1(torch.cat((x1, x2, x3), 1))
final_2_x = pool2(torch.cat((x1, x2, x3), 1))

print(final_1_x.shape)
print(final_2_x.shape)

and its output is

torch.Size([10, 180, 100, 100])
torch.Size([10, 180, 100, 100])

You can observe that both types of pooling I did are able to give a feature map of the desired size. But the first one takes a large amount of time with unsatisfactory results, and the second one ignores many values in the input feature maps. I don't know whether it is okay to ignore them or not.

I want to know the recommended way to perform pooling in order to get the desired size of feature maps. Is there any such recommended way to perform pooling?

",18758,,2444,,10/13/2021 13:46,10/13/2021 13:46,Is there any recommended way to perform pooling in this context?,,1,2,,,,CC BY-SA 4.0 32013,1,,,10/11/2021 23:08,,1,34,"

Suppose I have a feature map with size $C_1 \times H \times W$. And I need to convert it into a feature map of size $C_2 \times H \times W$.

One way to do this is to use a convolutional layer, such as Conv2d($C_1, C_2$).

I want to know whether there are any other ways in the literature to perform the desired operation.

",18758,,18758,,10/11/2021 23:18,10/11/2021 23:18,What are the recommended ways to change shape of feature maps channel wise other than using Convolutional neural networks?,,0,2,,,,CC BY-SA 4.0 32014,2,,31968,10/11/2021 23:16,,0,,"

The ideal hyperparameters are usually dependent on your dataset and will differ on a case-by-case basis. Go for trial and error to determine the hyperparameters that work best for you.

A few research papers similar to your use case are listed below.

",48391,,,,,10/11/2021 23:16,,,,0,,,,CC BY-SA 4.0 32017,1,,,10/12/2021 0:04,,2,128,"

I've come across two types of neural networks for prediction, both from Matlab: the closed-loop structure and the net that removes one delay to find new data.

From Matlab's app generated scripts we see:

% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.

netc = closeloop(net);
netc.name = [net.name ' - Closed Loop'];
view(netc)
[xc,xic,aic,tc] = preparets(netc,{},{},T);
yc = netc(xc,xic,aic);
closedLoopPerformance = perform(net,tc,yc)

% Step-Ahead Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is
% given y(t+1). For some applications such as decision making, it would
% help to have predicted y(t+1) once y(t) is available, but before the
% actual y(t+1) occurs. The network can be made to return its output a
% timestep early by removing one delay so that its minimal tap delay is now
% 0 instead of 1. The new network returns the same outputs as the original
% network, but outputs are shifted left one timestep.

nets = removedelay(net);
nets.name = [net.name ' - Predict One Step Ahead'];
view(nets)
[xs,xis,ais,ts] = preparets(nets,{},{},T);
ys = nets(xs,xis,ais);
stepAheadPerformance = perform(nets,ts,ys)

My question is: What is the real difference between them?

Can one use them equivalently? If yes, why? I mean, even though their structure, or how they are equipped, could be very, very different, e.g. one is an apple, the other is a grape?

As far as I understand, both can return new data if one codes them for that. For example, taking the closed net, one can predict 10 new values. Taking the net that removes one delay, one can predict one new value, but if one does this recursively 9 times, one can get the 10 new data points. Is there a problem in using this last net in that way?

On another side, running both codes, as they are now (this changes depending on the code one works on), yields very different performances. Why?

Update:

I've checked this page https://www.mathworks.com/matlabcentral/answers/297187-neural-network-closed-loop-vs-open-loop, and in the answer by Greg Heath, we see

[...]

OPENLOOP: The desired output, AKA the delayed target, is used as an additional input. The OL net will produce output for the common time extent of the input and target. CLOSELOOP: The delayed target input is replaced by a direct delayed output connection. The CL net will produce output for the time extent of the input.

[...]

"The desired output, AKA the delayed target, is used as an additional input." how is this?

"The OL net will produce output for the common time extent of the input and target." and this?

"The CL net will produce output for the time extent of the input." What does this mean?

",44999,,44999,,10/14/2021 0:17,11/12/2022 23:04,Closed networks vs Networks with a removed delay to predict new data,,1,0,,,,CC BY-SA 4.0 32018,2,,32010,10/12/2021 0:38,,1,,"

Value Iteration Vs Policy Iteration

Below is the list of differences & similarities between value iteration and policy iteration

Differences

|                               | Value Iteration           | Policy Iteration          |
|-------------------------------|---------------------------|---------------------------|
| Architecture                  | VI Architecture reference | PI Architecture Reference |
| Execution starts with         | a random value function   | a random policy           |
| Algorithm                     | simpler                   | more complex              |
| Computation costs             | more expensive            | cheaper                   |
| Execution time                | slower                    | faster                    |
| No. of iterations to converge | significantly more        | fewer iterations          |
| Guaranteed to converge        | Yes                       | Yes                       |

Similarities

  • both are dynamic programming algorithms
  • both employ variations of Bellman updates
  • Both exploit one-step look-ahead
  • Both algorithms are guaranteed to converge to an optimal policy in the end

Policy iteration is reported to conclude faster than value iteration

USAGE PREFERENCE

As mentioned in the differences above, the main advantage of using policy iteration over value iteration is its ability to converge in fewer iterations, thereby reducing its computation cost and execution time.
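
As an illustration of the two update schemes, here is a minimal NumPy sketch on a randomly generated toy MDP (all numbers are arbitrary); both routines return the same optimal policy, but policy iteration typically needs far fewer sweeps:

import numpy as np

# Toy MDP: P[a, s, s'] = transition probability, R[a, s] = expected immediate reward
n_states, n_actions, gamma = 3, 2, 0.9
rng = np.random.default_rng(0)
P = rng.random((n_actions, n_states, n_states))
P /= P.sum(axis=2, keepdims=True)              # rows must sum to 1
R = rng.random((n_actions, n_states))

def value_iteration(P, R, gamma, tol=1e-8):
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * P @ V                  # Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmax(axis=0), V_new
        V = V_new

def policy_iteration(P, R, gamma):
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) V = R_pi exactly
        P_pi = P[policy, np.arange(n_states)]
        R_pi = R[policy, np.arange(n_states)]
        V = np.linalg.solve(np.eye(n_states) - gamma * P_pi, R_pi)
        # Policy improvement: act greedily with respect to V
        new_policy = (R + gamma * P @ V).argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, V
        policy = new_policy

print(value_iteration(P, R, gamma))
print(policy_iteration(P, R, gamma))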

REFERENCES

  1. Research papers
  2. Book
    • Artificial Intelligence: A Modern Approach, by Peter Norvig and Stuart J. Russell Chapter 17 Making Complex decisions
  3. Architecture
",48391,,48391,,12/26/2021 18:41,12/26/2021 18:41,,,,4,,,,CC BY-SA 4.0 32019,2,,32011,10/12/2021 7:59,,2,,"

This one is a bit crazy:

pool1 = nn.AvgPool3d(kernel_size = (361, 1, 1), stride= 1)

because it averages large numbers of the features at once. Very little information about individual features will remain after doing that.

The most obvious one you have not tried is this:

pool3 = nn.AvgPool3d(kernel_size = (3, 1, 1), stride= (3, 1, 1))

which includes all the feature data, and does not try to average over large amounts of it at a time. I expect pool3 to perform better than pool1 in terms of speed, and better than pool2 in terms of metrics for the trained CNN.

If your goal is to reduce the number of feature maps from 540 to 180, then pooling is not usually a good choice of operation. The motivation behind pooling assumes that there is some consistent metric space that it is pooling over - and this is used to decide the values of size and stride. The sequence of channels will not usually have such a metric; it is usually an arbitrary result of learning.

Instead, the usual way to reduce the number of channels between layers is to add a new convolution layer with the desired number of output channels. In this scenario, it is also common to use a kernel size of 1. This adds learnable parameters to the CNN, and those will learn an optimal compression of the channels for your problem.
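
For the shapes in the question, a minimal sketch of that alternative would be:

import torch
from torch import nn

x1 = torch.randn(10, 180, 100, 100)
x2 = torch.randn(10, 180, 100, 100)
x3 = torch.randn(10, 180, 100, 100)

x = torch.cat((x1, x2, x3), dim=1)   # (10, 540, 100, 100)

# A learnable 1x1 convolution that compresses 540 channels down to 180
reduce_channels = nn.Conv2d(in_channels=540, out_channels=180, kernel_size=1)
print(reduce_channels(x).shape)      # torch.Size([10, 180, 100, 100])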

",1847,,1847,,10/12/2021 12:31,10/12/2021 12:31,,,,0,,,,CC BY-SA 4.0 32020,1,,,10/12/2021 8:45,,2,195,"

Winter of AI definition:

periods of reduced funding and interest in artificial intelligence research, due to unmet expectations after a period of hype. There have been at least two major AI winters in 1974-1980 and 1987-1993.

I read this question: Did Minsky and Papert know that multi-layer perceptrons could solve XOR?

Minsky and Papert showed that a (single-layer) perceptron can't solve the XOR problem. This contributed to the first AI winter, resulting in funding cuts for neural networks. However, now we know that a multilayer perceptron can solve the XOR problem easily.

Does someone have evidence to support that the "unsolved XOR problem" in Minsky and Papert's book caused the first winter of the AI? Or was it a succession of unsolved problems for neural networks and loss of interest that caused the cuts on funding?

",30751,,2444,,10/19/2021 12:55,10/19/2021 12:55,"Did the unsolved XOR problem in ""Perceptrons: An Introduction to Computational Geometry"" 1969 book really cause the winter of the AI in 1974?",,0,1,,,,CC BY-SA 4.0 32021,2,,28833,10/12/2021 8:50,,3,,"

This is just an implementation issue. One reason is that the Huggingface implementation (which is not the original implementation by Google) wants to strictly separate the tokenization from the modeling. It is a convention that the input sequences are zero-padded, but in theory it does not have to be so. In the Huggingface implementation, you could use a different tokenizer that pads the sequences with different numbers and still get valid masking.

You are right that you can infer the mask from the input IDs at the very beginning (if you know the pad ID), but you need to explicitly use the mask in every single layer. Each layer returns a 3D tensor of floats from which you cannot say what the padded positions are, you need to have the explicit mask when calling the next layer. I guess that having the mask everywhere makes the API more consistent.
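
For instance (the IDs below are made up), the mask can be inferred in one line from the input IDs and the pad ID, but it still has to be passed explicitly to every layer afterwards:

import torch

pad_token_id = 0
input_ids = torch.tensor([[101, 2023, 2003, 102, 0, 0],
                          [101, 2307,  102,   0, 0, 0]])

# Infer the mask once from the IDs ...
attention_mask = (input_ids != pad_token_id).long()
print(attention_mask)
# tensor([[1, 1, 1, 1, 0, 0],
#         [1, 1, 1, 0, 0, 0]])

# ... but each layer returns a float tensor from which the padded positions
# can no longer be recovered, so the mask has to accompany every layer call.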

",33139,,,,,10/12/2021 8:50,,,,0,,,,CC BY-SA 4.0 32024,2,,32005,10/12/2021 9:34,,1,,"

Try removing the dropout before the prediction layer. I couldn't find the paper or article I read about this (will update the post once I find it); I just found a Cross Validated post which does not add much information.

If you are lowering the learning rate, you should also lower the batch size accordingly.

As for Batch Normalization layers, they probably should be applied after the convolutional layers.

",22301,,,,,10/12/2021 9:34,,,,0,,,,CC BY-SA 4.0 32025,1,32032,,10/12/2021 9:51,,3,110,"

In the article Multi-Verse Optimizer: a nature-inspired algorithm for global optimization (DOI 10.1007/s00521-015-1870-7), it's written

The results of the real case studies also demonstrate the potential of MVO in solving real problems with unknown search spaces

where MVO stands for Multi-Verse Optimizer.

What does "unknown search spaces" mean in the context of Evolutionary Algorithms and, especially, in the context of the Multi-Verse Optimizer?

",50294,,50294,,10/12/2021 16:24,10/12/2021 16:24,"What does ""unknown search spaces"" mean in the context of Evolutionary Algorithms?",,1,0,0,,,CC BY-SA 4.0 32026,1,,,10/12/2021 10:42,,1,29,"

I want to train a Neural Network (NN) using a dataset. I want to use the NN model as a prediction function in one algorithm. However, in the algorithm, any data that does not meet a specific constraint (say some parameter $\theta <10$) would not be included.

So, my question is, while generating the training data, should I include all kinds of inputs irrespective of the constraint, or should I generate only those data which meet the constraint $\theta <10$?

Currently, I am training with data that satisfies the constraint ($\theta < 10$), and I am getting an average error of around $6\%$. Ideally, I want it below $3\%$.

I am new to NN model training. Any kind of pointers would be helpful.

",50301,,2444,,10/13/2021 13:44,10/13/2021 13:44,Should I train a neural network with data with or without a constraint?,,0,1,,,,CC BY-SA 4.0 32027,1,,,10/12/2021 11:28,,1,73,"

I am confused by the equations for bounding boxes I find online. Some articles say that

box_width = anchor_width * exp(residual_value_of_box_width)

and the coordinates have a sigmoid function.

Eg: https://www.kdnuggets.com/2018/05/implement-yolo-v3-object-detector-pytorch-part-1.html

https://christopher5106.github.io/object/detectors/2017/08/10/bounding-box-object-detectors-understanding-yolo.html

But the Darknet code and GitHub repositories have equations dividing the coordinates and box width by the image width.

For example, https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/data/voc/voc_label.py

def convert(size, box):
    # size = (image_width, image_height); box = (xmin, xmax, ymin, ymax) in pixels
    dw = 1./size[0]
    dh = 1./size[1]
    x = (box[0] + box[1])/2.0   # box centre x in pixels
    y = (box[2] + box[3])/2.0   # box centre y in pixels
    w = box[1] - box[0]         # box width in pixels
    h = box[3] - box[2]         # box height in pixels
    x = x*dw                    # normalize by image width  -> value in [0, 1]
    w = w*dw
    y = y*dh                    # normalize by image height -> value in [0, 1]
    h = h*dh
    return (x,y,w,h)

If the image width is used, then what is the use of the anchor box width/height values in the yolov3.cfg file? I can't find where they are used in the source code, other than to generate the anchors file.

",46670,,2444,,10/15/2021 22:56,10/15/2021 22:56,Different equations for Yolov3 in courses/ articles and Darknet GitHub code?,,0,0,,,,CC BY-SA 4.0 32028,1,,,10/12/2021 13:11,,1,59,"

I want to train a model based on millions of fields, including text and numbers, that are stored in an SQL database, and recommend a perfect match based on some inputs. Now, which algorithm is best for this problem?

For instance, consider this database pattern:

| Title  | Content | Volume | Count |
|--------|---------|--------|-------|
| First  | row1    | 5.36   | 34    |
| Second | row2    | 36.1   | 239   |
| ...    | ...     | ...    | ...   |
",50305,,,,,7/13/2022 12:41,A recommender system based on millions of fields including text and number,,1,0,,,,CC BY-SA 4.0 32029,1,32031,,10/12/2021 14:13,,5,682,"

In almost every ML model, a train-test (or train-test-val split) is critical to assess the model's performance. However, I have always wondered what the rationale is to decide a particular train-test split. I've seen that some people like an 80-20 split, others opt for 90-10, but why? Is it simply a matter of preference? Also, why not 70-30 or 60-40, what is the best way to decide?

",,user48670,2444,,10/12/2021 15:43,10/12/2021 15:43,How to decide a train-test split?,,1,0,,,,CC BY-SA 4.0 32030,1,,,10/12/2021 14:29,,1,96,"

What kind of algorithm or approach can I use to find a specific type of object in an image?

In particular, I am interested in finding an object like a windmill in an image taken, for example, from Google Maps. The image could be something like this

",50307,,2444,,10/13/2021 16:09,11/7/2022 19:02,What kind of algorithm or approach can I use to find a specific type of object in an image?,,1,0,,,,CC BY-SA 4.0 32031,2,,32029,10/12/2021 15:01,,2,,"

I don't think there is any rationale behind choosing 80/20 over 75/25 or others. But those are the numbers for rather small datasets. If your dataset is large enough (like hundreds of thousands of samples), you can even work with a 98/1/1 percent split for train/val/test, as discussed by Andrew Ng in this video. Neural networks thrive on big data and it is always a good idea to make the most of it.
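
For what it's worth, a minimal scikit-learn sketch of an 80/20 split with a further validation split (the ratios are arbitrary):

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.arange(1000).reshape(-1, 1), np.arange(1000)   # placeholder data

# 80/20 train/test split ...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# ... then 25% of the training part as validation, giving 60/20/20 overall
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))   # 600 200 200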

",22301,,,,,10/12/2021 15:01,,,,0,,,,CC BY-SA 4.0 32032,2,,32025,10/12/2021 15:03,,2,,"

(Disclaimer: although I have never seen a formal definition of unknown search space, here is my attempt to define it based on my knowledge of search and search algorithms in machine learning and evolutionary algorithms; I am aware of a definition of unknown environment (see chapter 2, p. 44, of Norvig and Russell's AI book), but that definition is different from the one below.)

An unknown search space is a search space (i.e. a set of objects of interest with some relationship between them) where you do not know anything about

  1. where the best/worst solutions/objects are (i.e. where the local/global minima/maxima are) and/or
  2. how these objects are related.

A typical example where the search space is known is the search space associated with a path-finding problem where you know the start and goal locations, the intermediate locations that you can go through from the start to the goal locations, and the distance between locations. If you have this information, you can find the optimal path from the start to the goal location by just exploiting your knowledge of it (you can use e.g. Dijkstra's algorithm or A*). Think about finding the shortest path from one city to another.

In other (most) cases, you may not know anything about the relationship between 2 or more solutions/objects, unless you explore the search space (i.e. take random actions or some other more informed actions).

For example, let's suppose that you want to search for a function of the form $f : [0, 1]^{28 \times 28} \rightarrow \{0, 1\}$ from a set of functions $\mathbb{F}$, i.e. $f \in \mathbb{F}$. In this case, you can think of a function $y = f(\mathbf{x}) \in \mathbb{F}$ as a classifier of greyscale images, where $\mathbf{x} \in [0, 1]^{28 \times 28}$ is a $28 \times 28$ greyscale image and $y \in \{0, 1\}$ is the class (name) of the object in the image $\mathbf{x}$ (in this case, we assume there's only one main object in the image $\mathbf{x}$). In machine learning, $\mathbb{F}$ could be represented by a set of neural networks (with a specific architecture, e.g. number of layers), and $f \in \mathbb{F}$ would be one of these specific configurations. In this context, usually, we do not know anything about the relationship between these neural networks, so the search space is unknown. So, you need to explore the search space in some way, for example, by changing some of the parameters of one neural network $f \in \mathbb{F}$ to get another neural network $f' \in \mathbb{F}$ and see how that affects, for example, your accuracy at classifying images.

This definition also applies to evolutionary algorithms (and, although I have not read that paper, I assume it also applies to that context). In fact, evolutionary algorithms are usually used to solve problems with unknown search spaces. For example, test case selection or finding a policy for an agent.

",2444,,2444,,10/12/2021 15:15,10/12/2021 15:15,,,,4,,,,CC BY-SA 4.0 32034,2,,32028,10/12/2021 19:32,,1,,"

The first step

You need to decide whether you want to keep each string column or not. Then you must encode your text fields into numbers, for which you can use embedding algorithms like word2Vec. Check here.

Second step

Probably, you will have a lot of columns. Now, you need to reduce the space dimension. PCA, manifold transforms, partial least squares regression, etc., may help you in this way.
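
A minimal sketch of these two steps (placeholder rows mimicking the table in the question; TF-IDF is used here instead of word2vec just to keep the example short):

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

titles  = ["first row about something", "second row about another thing", "third row entirely different"]
volumes = np.array([[5.36], [36.1], [12.0]])
counts  = np.array([[34], [239], [87]])

# Step 1: encode the text column into numbers
text_features = TfidfVectorizer().fit_transform(titles)

# Step 2: reduce the (potentially very wide) text representation to a few dimensions
reduced_text = TruncatedSVD(n_components=2).fit_transform(text_features)

# Combine with the numeric columns into one feature matrix for the recommender
features = np.hstack([reduced_text, volumes, counts])
print(features.shape)   # (3, 4)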

Third step

Here, you will have nice and tidy tabular data which you can feed into any Recommender System you want to.

  1. I presumed that you mean millions of columns when you say "millions of fields".

  2. If you mean millions of rows, then you probably need to use methods that deal with Big Data.

",33912,,33912,,7/13/2022 12:41,7/13/2022 12:41,,,,0,,,,CC BY-SA 4.0 32035,2,,31977,10/12/2021 19:44,,3,,"

Due to its subjective nature, quantitative evaluation of synthetic images is difficult in general. However, there are metrics like the Inception Score or the FID score that are used for the evaluation of generative models like GANs or VAEs. Technically, they consider two aspects of the generated data:

  1. Similarity with training data
  2. Diversity within itself

Even though such metrics do not assess new images as we humans do, they are widely accepted in the community.

",11030,,,,,10/12/2021 19:44,,,,0,,,,CC BY-SA 4.0 32036,1,,,10/12/2021 23:15,,0,43,"

Are multi-head attention matrices weighted adjacency matrices?

The job of the multi-head-attention mechanism in transformer models is to determine how likely a word is to appear after another word. In a sense this makes the resulting matrix a big graph with nodes and edges, where a node represents a word and an edge the likelihood to appear after that. So basically it is an adjacency matrix that is created.

",48548,,48548,,10/14/2021 22:27,10/14/2021 22:27,Is the multi-head attention in the transformer a weighted adjacency matrix?,,0,2,,,,CC BY-SA 4.0 32037,1,36159,,10/13/2021 1:48,,1,169,"

The logical arguments are the basis for Artificial Intelligence. That is why I picked AI community to ask my question.

Reading from Wikipedia,

A syllogism is a kind of logical argument that applies deductive reasoning to arrive at a conclusion based on two propositions that are asserted or assumed to be true.

Again from Wikipedia, deductive reasoning,

is the process of reasoning from one or more statements (premises) to reach a logical conclusion.

As part of deductive reasoning, it lists Modus Ponens, Modus Tollens, and Law of syllogism where Law of syllogism is defined as

In term logic the law of syllogism takes two conditional statements and forms a conclusion by combining the hypothesis of one statement with the conclusion of another.

Based on these articles, is it safe to assume that Syllogism uses Modus ponens, Modus tollens, and Law of syllogism to arrive at conclusion? Does that mean "Law of syllogism" is part of "syllogism" or is it something different?

P.S.

If this community is not appropriate for this kind of question, kindly guide me to the correct StackExchange or other communities.

",50262,,2444,,10/13/2021 11:56,7/12/2022 9:07,"What is the difference between ""Syllogism"" and ""Law of Syllogism""?",,1,0,,,,CC BY-SA 4.0 32038,1,,,10/13/2021 1:58,,1,202,"

It is recommended to apply gradient clipping by normalization in case of exploding gradients. The following quote is taken from here answer

One way to assure it is exploding gradients is if the loss is unstable and not improving, or if loss shows NaN value during training.

Apart from the usual gradient clipping and weights regularization that are recommended...

But I want to know the effect of gradient clipping by norm on the performance of the model in normal or general cases.

Suppose I have a model and I run it for up to 800 epochs without gradient clipping, because there are no exploding gradients. If I run the same model with gradient clipping by norm, even though it is not necessary, does the performance of the model decline?

",18758,,2444,,10/13/2021 11:44,10/13/2021 11:44,What is the effect of gradient clipping by norm on the performance of a model?,,0,2,,,,CC BY-SA 4.0 32039,1,,,10/13/2021 7:39,,1,66,"

I am training a neural network using a mini-batch gradient descent algorithm.

Now, consider the following loss function, which is composed of 2 terms.

$$L = L_{\text{MSE}} + L_{\text{regularization}} \label{1}\tag{1}$$

As far as I understand, usually, we update the weights of a neural network only once per mini-batch, even if the loss function is composed of 2 or more terms, like in equation \ref{1}. So, in this approach, you calculate the 2 terms, add them, and then update weights once based on the sum.

My question is: rather than summing the 2 terms of the loss function $L$ in equation \ref{1} and computing a single gradient for $L$, couldn't we separately compute the gradient for both $L_{\text{MSE}}$ and $L_{\text{regularization}}$, and then update the weights of the neural network twice? So, in this case, we would update the weights twice for each mini-batch. When would this make sense? Of course, my question also applies to the case where $L$ is composed of more than 2 terms.

",18758,,2444,,10/13/2021 12:51,10/15/2021 7:08,When would it make sense to perform a gradient descent step for each term of a loss function with multiple terms?,,2,0,,,,CC BY-SA 4.0 32040,2,,32039,10/13/2021 9:27,,1,,"

I am not sure if the process defined in the question is meaningful at all. If you mean to simply add the contribution of each $L$ without running the algorithm for the mini-batch, it makes no difference at all whether you make one update or more, as the loss functions' contributions are simply added in the update. If, on the other hand, you mean to run the algorithm on the mini-batch once for each $L$ before calculating its contribution including the constraints, it would most probably end up with a step in the wrong direction. Constraints (at least $L_1$ and $L_2$) do not depend on the input or the output of the model; they do not compare the prediction with the target value; they are functions of the weights only. There is no reason to expect that trying to minimize this function by itself will improve the model; it will actually harm the progress by taking a (small or large) step in the wrong direction.

",22301,,22301,,10/15/2021 7:08,10/15/2021 7:08,,,,4,,,,CC BY-SA 4.0 32041,1,,,10/13/2021 10:44,,1,25,"

How do I best transfer and fine-tune a Q-learning policy that was trained on small instances to large instances?

Some more details on the problem: I am currently trying to derive a decision policy for a dynamic vehicle dispatching problem. In the problem, a decision point occurs whenever a customer requests delivery. Customers expect to be delivered within a fixed amount of time and the objective is to minimize the delay. The costs of each state are the delays realized in that state (i.e., the delays of the customers that were delivered between the last two states).

Some details on the policy: I used a Q-learning policy (Dueling Deep Q Network) to estimate the discounted future delay of assigning an order to a vehicle. The policy was trained on a small-scale instance (5 vehicles, ~100 customers) using epsilon-greedy exploration and a prioritized experience replay. I did not use temporal difference learning (as the costs of a decision are only revealed later in the process) but updated the policy after simulating the entire instance.

My problem: As it is, the policy transfers well to instances of larger sizes (up to 100 vehicles and ~2000 customers) and I could do without fine-tuning. However, there certainly is room for the policy to improve. Unfortunately, when I try to fine-tune the initial model on the larger instance, the retrained model becomes worse over the training steps with regards to minimizing delay. I suspect that large gradients play a role here as the initial q-values, trained on the small instance, are obviously way off for the large instances (due to the increase in customers).

Is there a standard approach to deal with such a transfer problem or do you have any suggestions?

",50323,,50323,,10/13/2021 11:16,10/13/2021 11:16,Transferring a Q-learning policy to larger instances,,0,0,,,,CC BY-SA 4.0 32042,1,32129,,10/13/2021 11:59,,5,337,"

I am trying to apply RL to a control problem, and I intend to use either Deep Q-Learning or SARSA.

I have two heating storage systems with one heating device, and the RL agent is only allowed to heat up one of them in every time slot. How can I enforce that?

I have two continuous variables $x(t)$ and $y(t)$, where $x(t)$ quantifies the degree of maximum power for heating up storage 1 and $y(t)$ quantifies the degree of maximum power for heating up storage 2.

Now, IF $x(t) > 0$, THEN $y(t)$ has to be $0$, and vice versa, with $x(t)$ and $y(t)$ taking values in $\{0\} \cup [0.25, 1]$. How can I tell this to the agent?

One way would be to adjust the actions after the RL agent has decided about that with a separate control algorithm that overrules the actions of the RL agent. I am wondering if and how this can be also done directly? I'll appreciate every comment.

Update: Of course I could do this with a reward function. But is there not a direct way of doing this? Because this is actually a so called hard constraint. The agent is not allowed to violate this at all as this is technically not feasible. So it will be better to tell the agent directly not to do this (if that is possible).

Reminder: Can anyone tell me more about this issue? I'll highly appreciate any further comment and will be quite thankful for your help. I will also award a bounty for a good answer.

",48758,,48758,,11/5/2021 8:17,11/7/2021 13:24,Is it possible to tell the Reinforcement Learning agent some rules directly without any constraints?,,3,2,,,,CC BY-SA 4.0 32043,1,,,10/13/2021 12:29,,1,25,"

In policy gradients, is it possible to learn the policy if the chain of actions is selected and performed manually/externally (e.g. by myself or by someone else who I have no influence over)?

For example, we have four actions, and I choose in the beginning an action 2, and we end up in a given state, then I choose action 4 and we end up in another state, etc. (the actions can follow some logic or not but the question is general; some of the actions will end up with positive rewards).

Can we learn any meaningful policy network from such a chain of actions?

",50133,,50133,,10/13/2021 13:42,10/13/2021 13:42,Can we learn a policy network via a sequence of manually determined actions?,,0,3,,,,CC BY-SA 4.0 32044,1,,,10/13/2021 15:52,,0,100,"

I am training a semi-supervised GAN network using data from multiple subjects. I separated the labeled and unlabeled data based on my subjects, so there is no leakage, while having much more unlabeled data than labeled data. After a few epochs, the training accuracy hits 100%, which normally indicates overfitting; however, the performance on the validation and test sets keeps increasing for 200-300 epochs. Is this considered overfitting, and is there an explanation for this behavior?

",44792,,44792,,10/24/2021 14:16,10/30/2021 7:02,Validation set performance increasing even after seemingly overfit on training set,,2,6,,,,CC BY-SA 4.0 32045,2,,32030,10/13/2021 16:05,,0,,"

The computer vision problem that you are describing is object detection, i.e. the problem of finding the location of specific objects in an image and label them correctly with their names.

There are many resources on the web (or in books) that describe this problem more in detail and examples (which also include code) to get you started with it (e.g. this one).

In any case, to solve this problem, you will (probably) need a labelled dataset $D$. So, if you don't have it, you will need to (manually) collect many images, similar to the ones you describe, and find the objects of interest in them (i.e. their locations, which may be specified as a bounding box), and assign the corresponding name to each of them. At the end of this process, you should have a dataset of the form $D = \{(x_1, y_1), \dots, (x_N, y_N) \}$, where $N$ should be as big as possible, $x_i$ is an image, and $y_i$ are the labels (also known as targets), which, in this case, should be both the location and name of all objects of interest in $x_i$. The specific format of the labels depends also on the specific model that you will train to solve this problem. You can find different models online for this task, such as YOLO.

",2444,,,,,10/13/2021 16:05,,,,0,,,,CC BY-SA 4.0 32047,2,,32039,10/13/2021 19:54,,1,,"

Technically, nothing prevents you from doing so. When you have mulitple losses, you may call .backward() at each term separately.

However, I wonder, whether it makes sense to optimize each individual path as a separate objective, since if we have multiple of them - we would like to solve several tasks simultaneously.

Probably, it could be beneficial as some kind of regularization. One makes steps away from the gradient, but overall in the direction, which makes the model less prone to overfitting. But the choice of the batch size, learning rate - seem more straightforward way to achieve this. There is also a ~2x times additional computational overhead since we backpropagate twice.

In some sense, It is done in the training of GAN's. Instead of backpropagating for discriminator and generator simultaneously - one calculates the loss separately for each model and updates weights.

",38846,,,,,10/13/2021 19:54,,,,0,,,,CC BY-SA 4.0 32048,1,,,10/13/2021 20:07,,1,71,"

In order to get a smaller model, one often uses a larger model that performs reasonably well on the data as a teacher, and uses the information from the large model to train the smaller one.

There are several strategies to do this:

  • Soft distillation

    Given the logits of the teacher, one adds the KL-divergence between the student logits and the teacher logits to the loss: $$ \mathcal{L}_{loss} = (1 - \alpha) \mathcal{L}_{BCE} (y_{student}, y_{true}) + \lambda \mathcal{L}_{KL} (y_{student}, y_{teacher}) $$ The intuition behind this approach is clear: logits are more informative than a single target label and seemingly allow for faster training.

  • Hard distillation

    One adds the BCE between the student logits and the teacher model's outputs, as if they were true labels (a minimal code sketch of both objectives is given after this list). $$ \mathcal{L}_{loss} = (1 - \alpha) \mathcal{L}_{BCE} (y_{student}, y_{true}) + \lambda \mathcal{L}_{BCE} (y_{student}, y_{teacher}) $$
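Here is a minimal PyTorch sketch of the two objectives; the weighting $\alpha$, the temperature $\tau$ for the soft case, and the use of cross-entropy (in place of BCE, for the multi-class setting) are assumptions on my side.

import torch.nn.functional as F

# student_logits, teacher_logits: (batch, num_classes); labels: (batch,)
def soft_distillation_loss(student_logits, teacher_logits, labels, alpha=0.5, tau=1.0):
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  F.softmax(teacher_logits / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    return (1 - alpha) * ce + alpha * kl

def hard_distillation_loss(student_logits, teacher_logits, labels, alpha=0.5):
    ce_true = F.cross_entropy(student_logits, labels)
    teacher_labels = teacher_logits.argmax(dim=-1)       # teacher predictions used as "hard" targets
    ce_teacher = F.cross_entropy(student_logits, teacher_labels)
    return (1 - alpha) * ce_true + alpha * ce_teacher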

The benefit of the last approach is unclear to me. For a perfect teacher, there would be no difference from the vanilla training procedure, and in the cases where the teacher makes mistakes, we will optimize the wrong objective.

Despite these concerns, it was shown experimentally in several papers, and in DeiT, that this objective can improve performance. Moreover, it can be better than soft distillation.

Why can this be the case?

",38846,,,,,10/13/2021 20:07,What is the purpose of hard distillation?,,0,0,,,,CC BY-SA 4.0 32049,1,32059,,10/13/2021 20:29,,1,54,"

I'm looking for the name of the method (or algorithm family, or research body) used for the smart extension of an image's surroundings.

For example, the method I'm looking for would take this image:

And smartly extend it into:

So that the grass and the surrounding scenery are all generated to fill the desired area.

Generally speaking, what I'm looking for should smartly generate surroundings, including entities such as tree trunks and branches, grass patterns, mountain slopes, cloud patterns, water bodies like puddles, shrubs, stones on the ground, and so on.

Also, it would be nice to know how mature this technology is, i.e. how well different entities can be smartly extended.

Note that Seam Carving is a candidate (used in Photoshop under the name Content-Aware Scale; see this for example), but I'm looking for something smarter, I think, and I'm not really sure it can do what I'm looking for.

",50330,,2444,,10/15/2021 10:36,10/15/2021 10:36,What is the name of the method for the smart extend of image surroundings?,,1,0,,,,CC BY-SA 4.0 32050,2,,32042,10/13/2021 20:38,,4,,"

You could just tweak your reward function to include these restrictions.

In the simplest case, you could give your agent a reward of $-1$ if $x(t) > 0$ and $y(t) \neq 0$.

The scale of your negative reward depends on your general reward scaling of course.
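A minimal sketch of this idea in Python (the variable names and the penalty value are assumptions, to be adapted to your environment):

def shaped_reward(base_reward, x_t, y_t, penalty=1.0):
    # penalise states where the restricted condition x(t) > 0 and y(t) != 0 holds
    if x_t > 0 and y_t != 0:
        return base_reward - penalty
    return base_reward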

",49455,,,,,10/13/2021 20:38,,,,4,,,,CC BY-SA 4.0 32052,1,,,10/14/2021 1:06,,1,23,"

Consider the following paragraph from the chapter named pre-trained models from the textbook titled Deep Learning with PyTorch by Eli Stevens et al.

The AlexNet architecture won the 2012 ILSVRC by a large margin, with a top-5 test error rate (that is, the correct label must be in the top 5 predictions) of 15.4%. By comparison, the second-best submission, which wasn’t based on a deep network, trailed at 26.2%. This was a defining moment in the history of computer vision: the moment when the community started to realize the potential of deep learning for vision tasks. That leap was followed by constant improvement, with more modern architectures and training methods getting top-5 error rates as low as 3%.

This paragraph mentions a moment that made the community realize the potential of deep learning. Are there any other similar defining moments in the history of deep learning?

",18758,,,,,10/14/2021 1:06,What are the defining moments that make community realise the potential of deep learning?,,0,0,,,,CC BY-SA 4.0 32053,1,,,10/14/2021 6:45,,1,111,"

I am trying to use reinforcement learning to let an agent learn simultaneously how to play a game and when to end a game.

The task is to find a single target in a grid of locations. At each time step, the agent needs to make a series of decisions:

  1. It believes the target is at the currently inspected location. End the trial and see whether the result is correct.
  2. It believes the target is not at the currently inspected location. It then needs to pick another location to check at the next timestep.

If the agent chooses decision #2, the environment will give some hints on where the target is, with some stochastic noise. The noise level depends on the distance between the true target location and the currently inspected location: the shorter the distance, the lower the noise. The goal is to let the agent perform the task as fast and accurately as possible, so the agent needs to learn when to stop the trial and how to select the next inspected location given the hints. The agent also has an internal memory, so it won't select previously inspected locations. I would like to compare the agent's speed-accuracy trade-off to humans'.

In a previous simplified version of the task, the environment ended the trial once the agent hit the target location, so the agent only needed to learn how to choose the next location to inspect. I used a simple Q-network and it worked well. I also found that the network should be a fully convolutional network, because fully connected layers are not spatially shift-invariant.

Now how can I modify the existing convolutional network to satisfy the new task requirement? Or should I use a new network architecture?

",37482,,37482,,10/14/2021 8:20,11/14/2021 16:00,Training a reinforcement learning agent that can decide to continue or end the game,,1,2,,,,CC BY-SA 4.0 32055,2,,32017,10/14/2021 18:34,,0,,"
| Closed Loop Network | Step-Ahead Prediction Network |
| --- | --- |
| The function CLOSELOOP replaces the feedback input with a direct connection from the output layer. | Also known as the removedelay function; it helps to remove the delay from the neural network's response. |
| Its own predictions become the feedback inputs. | Targets with a delay are used as feedback inputs. |
| Used to do multi-step prediction. | Used to predict one step ahead. |
| Highly helpful to turn the network into the parallel configuration. | Can't be used for the parallel configuration. |
| The output is not shifted by one timestep. | The output is shifted by one timestep. |
| Continues to predict when external feedback is missing, by using internal feedback. | Open-loop and remove-delay networks use external feedback. |
| Real-time configuration. | Not a real-time configuration. |
| Will produce output for the time extent of the input. | Will produce output for the common time extent of the input and target. |

Open-Loop Network: In open-loop networks, targets are used as feedback inputs. Open loops are primarily used when future outputs are not known. Below is how an open-loop network looks. Note:

The typical workflow is to fully create the network in open loop, and only when it has been trained (which includes validation and testing steps) it is transformed to closed loop for multistep-ahead prediction

UPDATED Response to newly added questions

"The desired output, AKA the delayed target, is used as an additional input." how is this?

This is with reference to the step-ahead prediction network. Here, the delayed target is used as an additional input; kindly refer to the diagram in point #9. When the target is y(t+1), the additional input is the delayed target, i.e., y(t). In the open-loop network, by contrast, the target itself is used as the additional input; this is the main difference between open-loop and step-ahead prediction.

"The OL net will produce output for the common time extent of the input and target." and this?

As you can see, the open-loop network will produce the output y(t) as long as the open loop has the two inputs x(t) and the target y(t) for a certain time.

"The CL net will produce output for the time extent of the input." What does this mean?

In a closed loop, based on the input x(t), the output y(t) will be produced for a certain time.

Reference Link:

",48391,,48391,,10/18/2021 17:12,10/18/2021 17:12,,,,2,,,,CC BY-SA 4.0 32057,1,,,10/15/2021 1:04,,2,104,"

Consider the following excerpt taken from the chapter named Using convolutions to generalize from the textbook titled Deep Learning with PyTorch by Eli Stevens et al.

Downsampling could in principle occur in different ways. Scaling an image by half is the equivalent of taking four neighboring pixels as input and producing one pixel as output. How we compute the value of the output based on the values of the input is up to us. We could

  • Average the four pixels. This average pooling was a common approach early on but has fallen out of favor somewhat.
  • Take the maximum of the four pixels. This approach, called max pooling, is currently the most commonly used approach, but it has a downside of discarding the other three-quarters of the data.
  • Perform a strided convolution, where only every $N$-th pixel is calculated. A $3 \times 4$ convolution with stride 2 still incorporates input from all pixels from the previous layer. The literature shows promise for this approach, but it has not yet supplanted max pooling.

The paragraph mentions that the research community is biased towards max pooling over average pooling. Is there any rational basis for such a bias?

",18758,,2444,,10/18/2021 20:28,10/18/2021 20:28,Is there any reason behind bias towards max pooling over avg pooling?,,1,0,,,,CC BY-SA 4.0 32058,1,32061,,10/15/2021 10:19,,-1,140,"

How can I know if two words are likely to appear in the same sentence in (British) English (or English in general, to enhance the chance of getting a result)?

As I don't have access to a powerful machine, is there any relevant website? Or a pretrained model I can use? Or something else?

",37267,,2193,,10/15/2021 19:49,10/15/2021 19:49,Probability that two words appear in the same sentence,,1,3,,,,CC BY-SA 4.0 32059,2,,32049,10/15/2021 10:34,,1,,"

In computer vision, the problem of filling missing parts of an image is called image inpainting; the subtask of filling the surroundings is called image outpainting in [1], which is your problem.

The methods for solving the image outpainting problem are not mature according to the pre-print paper Image Outpainting and Harmonization using Generative Adversarial Networks (2020), which you should read for more info.

",2444,,,,,10/15/2021 10:34,,,,0,,,,CC BY-SA 4.0 32060,2,,32053,10/15/2021 13:49,,1,,"

I assume your agent also has to choose which locations to visit next. If so, then there are two rough designs that crossed my mind.

You can use separate agents: one for choosing whether to inspect or not, and one to choose which adjacent cell to visit. Sum all of the log-likelihoods of both agents' actions for the loss. One particular benefit of this design is that, if you can prepare the data, you can separately pre-train the agents for a while, and maybe train them jointly afterwards. See if the separate pre-training helps improve the performance.

The other choice is to pad the "inspect current location" action alongside the "location selecting" actions: action #1 inspects the current location, and actions #$2$ to #($N+1$) visit the next location, where $N$ is the number of possible locations to visit.

The challenge of the two designs is how to represent this inspect action, or the raw features of this action. If you have domain knowledge from the game for this, maybe it can help you design it. Otherwise, you can just try dummy features for the inspect action (maybe all zeros) with the same length as the locations' features.

",44920,,,,,10/15/2021 13:49,,,,0,,,,CC BY-SA 4.0 32061,2,,32058,10/15/2021 17:51,,2,,"

You don't need a powerful machine or a pre-trained model. All you need — as Neil Slater rightly says in his comment to your question — is a corpus of English texts to analyse. There are some corpora available for linguistic research, or you can collect your own.

Then you need to split the texts into sentences, and tokenise them, and you're ready to calculate probabilities.

In linguistics there are some commonly used co-occurrence measures, such as mutual information, log-likelihood, or t-score. These are all used to measure the associations between words, typically in a window around the target word, rarely within sentences (as it makes processing easier). Any textbook on statistics in corpus linguistics will tell you how to do that.
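For illustration, here is a minimal, self-contained Python sketch of sentence-level co-occurrence with pointwise mutual information; the toy corpus stands in for a real one, and the whitespace tokenisation is a simplification.

import math
from collections import Counter
from itertools import combinations

# A toy corpus; in practice you would load and sentence-split a real corpus here.
sentences = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "the dog sat on the sofa",
]

tokenised = [set(s.lower().split()) for s in sentences]   # one set of word types per sentence
n = len(tokenised)

word_counts = Counter(w for sent in tokenised for w in sent)
pair_counts = Counter(frozenset(p) for sent in tokenised for p in combinations(sorted(sent), 2))

def pmi(w1, w2):
    # pointwise mutual information of two words co-occurring in the same sentence
    p_joint = pair_counts[frozenset((w1, w2))] / n
    p1, p2 = word_counts[w1] / n, word_counts[w2] / n
    return math.log2(p_joint / (p1 * p2)) if p_joint > 0 else float("-inf")

print(pmi("dog", "cat"))   # association strength of "dog" and "cat"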

The exact parameters depend on what the purpose of your analysis is, but you won't be able to do this without a corpus.

",2193,,,,,10/15/2021 17:51,,,,3,,,,CC BY-SA 4.0 32062,1,,,10/15/2021 18:05,,2,36,"

Most computer science instructors will tell you that the Turing Test is more a theoretical or conceptual thought experiment than an actual exam that someone (or something!) can formally sit and receive a score on. A thread here on AI Stack Exchange confirms this.

Considering all of this, have there been any significant attempts to create a standardized form of a Turing Test that could be rolled out widely and used to evaluate various AI constructs? Obviously, none of these standardized testing systems could be considered The Only True Turing Test (TM), but perhaps they could have their place in research as a way to benchmark or categorize various algorithms or evaluate the work of students.

For example, I'm imagining hearing a graduate student muttering the following:

My AI construct passes the Johnson-Smith Turing Test 1992 and the Hernandez-Dorfer 2017, but it's still failing the Takahashi-2003 Advanced Elite. What am I doing wrong? Maybe if I tweak this routine here. [click]. Darn, still fails.

Either a fully automated test (e.g. just login and click to sit the exam) or a standardized system involving trained human judges (e.g. a la 21st century medical board exams) who apply standardized written rubrics would be acceptable as long as the criteria for passing are standardized rather than left to the judgment of untrained personnel or random passersby.

",4855,,2444,,10/15/2021 22:51,10/15/2021 22:51,Are there standardized forms of the Turing Test?,,0,0,,,,CC BY-SA 4.0 32063,1,32118,,10/15/2021 20:20,,2,200,"

I am trying to apply reinforcement learning to my real-world problem. One thing making me hesitant to apply RL is that this real-world problem of mine is unique, in that every state is independent of the others. The action taken by the agent at timestep t is the only thing that affects the state at the next timestep. (For example, in the cycle of "state-action-reward-next state", the "next state" is solely dependent on the "action" but not on the "state".)

I am wondering if the RL could still be able to learn through this scenario. If not, what other methods could be an option?

",46395,,,,,10/19/2021 20:07,Can RL still learn in a scenario where current state and the next state are independant?,,1,4,,,,CC BY-SA 4.0 32065,1,,,10/15/2021 21:51,,1,12,"

I'm looking for some good references that give convergence results of training neural networks. I'm decently familiar with works that analyze the convergence of SGD, and, in particular, I really like this paper Optimization Methods for Large-Scale Machine Learning. I'm looking for works that talk about the convergence of SGD (or possibly a different gradient-based algorithm) specifically for neural networks.

An example of the type of paper I'm looking for is Convergence Analysis of Two-layer Neural Networks with ReLU Activation.

",47080,,2444,,10/15/2021 22:09,10/15/2021 22:09,References for the convergence of gradient-based algorithms for training neural networks,,0,0,,,,CC BY-SA 4.0 32068,1,32079,,10/15/2021 23:17,,1,91,"

I'm working on a project for an evolutionary algorithms course, and the problem we're trying to solve is multi-objective. We'll use NSGA-II but we also wanted to compare with some other MOEAs, however, we haven't been able to find good comparisons/benchmarks of these algorithms, so we don't really know how to decide.

Any insights will be appreciated.

",50364,,2444,,10/16/2021 12:08,10/16/2021 23:40,Is there a benchmark for multi-objective evolutionary algorithms?,,1,2,,,,CC BY-SA 4.0 32070,1,,,10/16/2021 3:33,,2,129,"

Why does TD (0) converge to the MLE solution of the Markov model?

Let's take the Example 6.4 in Sutton and Barto's book as an example.

Example 6.4: You are the Predictor Place yourself now in the role of the predictor of returns for an unknown Markov reward process. Suppose you observe the following eight episodes:

$A,0,B,0;\quad B,1;\quad B,1;\quad B,1;\quad B,1;\quad B,1;\quad B,1;\quad B,0$

...

But what is the optimal value for the estimate $V(A)$ given this data? Here there are two reasonable answers. One is to observe that $100 \%$ of the times the process was in state $A$ it traversed immediately to $B$ (with a reward of $0$); and because we have already decided that $B$ has value $\frac{3}{4}$, therefore $A$ must have value $\frac{3}{4}$ as well. One way of viewing this answer is that it is based on first modeling the Markov process, in this case as shown to the right, and then computing the correct estimates given the model, which indeed in this case gives $V(A)=\frac{3}{4}$. This is also the answer that batch $\mathrm{TD}(0)$ gives.

Given the TD(0) update rule $V(S) \leftarrow V(S)+\alpha\left[R+\gamma V\left(S^{\prime}\right)-V(S)\right]$, how can we deduce that it will get the MLE solution and thus $V(A) =\frac{3}{4}$?

",46710,,2444,,10/16/2021 13:56,10/16/2021 13:56,Why does TD (0) converge to the MLE solution of the Markov model?,,0,2,,,,CC BY-SA 4.0 32071,1,32143,,10/16/2021 11:49,,2,451,"

In Temporal-Difference Learning, we update our value function by $V\left(S_{t}\right) \leftarrow V\left(S_{t}\right)+\alpha\left(R_{t+1}+\gamma V\left(S_{t+1}\right)-V\left(S_{t}\right)\right)$

If we choose a constant $\alpha$, will the algorithm eventually give us the true state value function? Why or why not?

",46710,,2444,,10/23/2021 11:23,10/23/2021 11:23,How does $\alpha$ affect the convergence of the TD algorithm?,,2,2,,,,CC BY-SA 4.0 32074,1,,,10/16/2021 12:12,,1,73,"

When calculating the distance between two genomes, how does one treat disabled connections?

For example, consider the following genome:

[1, 0.2, E] [2, 0.1, D] [3, 0.2, E] [4, 0.15, E] [5, 0.3, D] [7, 0.25, D] [8, 0.25, E] [9, 0.1, E]
[1, 0.2, E] [2, 0.2, E] [3, 0.1, E] [6, 0.15, E]

For $\overline{W}$, the average weight difference of the matching (common) genes, do I only consider the genes that are enabled in both (1 and 3), or all of the common genes (1, 2, and 3)?

For $D$, the disjoint genes, do I only count the disjoint enabled genes (4 and 6), or do I count all of them (4, 5, and 6)?

For $E$, the excess genes, do I only count the enabled genes (8 and 9), or do I count all of them (7, 8, and 9)?

And finally, for $N$, do I count all of the genes in the larger genome or just the enabled ones?

Oh, one last question. Is gene 2 now considered disjoint since it is disabled in the first genome?

",50375,,18758,,10/19/2021 12:14,10/19/2021 12:14,NEAT Speciation distance: How does one treat disabled connections?,,0,0,,,,CC BY-SA 4.0 32077,2,,32071,10/16/2021 12:47,,0,,"

In general, NO.

You don't get the "true" state value function. TD-learning approximates the true value function. It can be a very close, or even exact approximation in simple cases, but, in general, it is just an approximation.

Depending on the difficulty of the problem, a non-constant $\alpha$ value can help the value estimates approach the true value function more quickly, or help prevent the learning from getting stuck in a local minimum.

There are implementations, like Adam, which adaptively change the learning rate for each parameter.

Usually, you can expect much faster convergence if you adaptively change the learning rate by using an implementation like Adam (see this).

",43651,,2444,,10/17/2021 20:53,10/17/2021 20:53,,,,2,,,,CC BY-SA 4.0 32079,2,,32068,10/16/2021 13:23,,1,,"

The DEAP library (a Python library for EAs) contains some benchmarks. In particular, you may want to look at the following functions

Besides the DEAP library repository, you might want to look at several well established MOEA frameworks such as PyMoo (Python) and PlatEMO (MatLab). Both have implementations of well known MOEAs and benchmark functions. You can look through their collection of benchmark functions for inspiration, and also implement your algorithm with those framework so that you can easily test your method's performance on their benchmark functions.

PlatEMO even has a GUI for experimental study, where you can choose the algorithms and the benchmark functions to test, followed by a Wilcoxon’s rank sum test to see how your algorithm really perform compared with other algorithms. For PyMoo you can create a new test file by following the existing examples, but, as far as I know, it doesn't have an experimental study platform similar to PlatEMO yet.

You may also be interested in the paper Scalable and Customizable Benchmark Problems for Many-Objective Optimization (2020).

",2444,,2444,,10/16/2021 23:40,10/16/2021 23:40,,,,1,,,,CC BY-SA 4.0 32080,1,,,10/16/2021 15:05,,1,193,"

Refik Anadol has machines view actual pictures and then has the machine create its own images. This video shows some of the stuff he does.

What kind of out-of-the-box tools (e.g. a Python package) or algorithms produce similar things to what he does?

I am hoping to play around with it and see what can happen. Not sure what to even Google for or search for.

",29801,,2444,,10/19/2021 12:45,11/13/2022 16:05,What sort of out-of-the-box technology could be used to create work similar to artist Refik Anadol?,,1,2,,11/15/2022 5:39,,CC BY-SA 4.0 32081,2,,9278,10/16/2021 16:25,,2,,"

The paper Comparison between genetic algorithms and particle swarm optimization (1998, by Eberhart and Shi) does not really answer the question of when to use one over the other (this may be an open question), but at least it provides a comparison of how the methods work and what could affect their performance (i.e. which parameters or operators they use, and what the typical values are), so it may be worth reading it.

",2444,,,,,10/16/2021 16:25,,,,0,,,,CC BY-SA 4.0 32082,2,,32080,10/16/2021 17:22,,0,,"

For a Google term you could use "computational creativity". It covers a wide range of ideas, and the artist here is not using one single tool or approach.

There are clearly a range of different techniques that went into the artist's installations that were shown on the video. I have an idea about a few of them:

  • Some are from basic animation and story-telling, and not "generated" art - at one stage the installation lists all the files that were used for instance. This is a way to communicate something about how the piece was produced. It would have been constructed into a video sequence by the artist, probably without any AI, although it is always possible to write a script that produces "decisions" from some kind of arbitrary mapping, and an artist may decide to cede control in that way as part of a piece.

  • There is some kind of particle system or physics-based simulation behind the flowing coloured clouds of cubes (note the physics does not have to be real physics, it often is not close, but inspired by some physical process such as flowing liquids, or maybe a biological one such as swarming insects). There are a few different frameworks to generate these - Blender could do it for example, plus many games engines are capable of such renders. It is also common to have these types of systems hooked up to a data source that provides input of perturbations - movement, forces, shapes - that drive the dynamics, and are somehow meaningful in the context of the piece. It is not clear whether the artist has done that in this case, but it is also common to hook these systems up to data inputs in real time, such as the location of gallery visitors, weather data from around the world etc. Choices of colour gradients, size and shape of elements etc will be curated by the artist to capture a certain mood.

    • Given the rest of this installation, the flowing cubes animation could well have something to do with the photos and how they were processed. However, it is not at all clear from viewing it whether this is the case.
  • The mutating photos look a lot like the output of GANs (generative adversarial networks) or VAEs (variational autoencoders). The liquid-like flow between images is caused by picking a trajectory through a latent space that these image generators learn from observing many images. A popular Python implementation of StyleGAN 2 would definitely be capable of producing output of the quality seen in the video.

  • Potentially the overall mix of the artwork, transitions and layouts could be a curated video sequence by the artist, or it could be dynamically mixed by a function reacting to a live data feed. In this case, the mixing seems more on the human-curated side - the installation could be e.g. a 15 minute video loop showcasing a fixed rendering of interesting parts of the spaces selected by the artist.

There are probably other things. For instance, the artist has gone to some trouble to curate their data, also using machine learning techniques when they discuss filtering out images of people. I think the effort that has gone into the installation, and the desire of the artist to showcase that is why some of the installation includes storytelling about how it was made.

",1847,,1847,,10/16/2021 17:31,10/16/2021 17:31,,,,2,,,,CC BY-SA 4.0 32085,1,,,10/17/2021 11:00,,0,265,"

Say I have a mini-batch of size 32, and I have 10 such batches. Assuming I only run it for one epoch (just for the sake of understanding it), Will the weights be updated using the gradients of one mini-batch, or will it be done after all the 10 mini batches have passed through?

Intuitively, it ought to be the first one, because otherwise the only difference between batch GD and mini-batch GD would be the size of the batch.

",50127,,18758,,10/19/2021 12:17,1/11/2023 8:50,"In mini-batch gradient descent, are the weights updated after each batch or after all the batches have gone through an epoch?",,1,2,,,,CC BY-SA 4.0 32088,1,32090,,10/17/2021 18:10,,5,735,"

Hyperparameter tuning is the process of selecting the optimal hyperparameters for an ANN.

Now, my guess is that, if we have sufficient data (say, 1.4 million samples for, say, 6 features), the model can be optimally trained and we don't need a hyperparameter tuner (like Keras Tuner), because, while training, the data itself will optimize the model.

Do we need a hyperparameter tuner if we have a sufficient number of random data for training our ANN model?

",20721,,2444,,10/17/2021 20:51,10/17/2021 20:51,Do we need automatic hyper-parameter tuning when we have a large enough dataset?,,2,0,,,,CC BY-SA 4.0 32089,2,,32088,10/17/2021 19:28,,3,,"

You don't NEED a hyperparameter tuner, but it can help in various situations. For example, if your model is not training well, perhaps using a tuner can help.

It's hard to say which hyperparameters you would be tuning in your specific model, but for some specific hyperparameters, if you choose a bad value, your model won't learn or will diverge. Take for example the learning rate: if you pick a value that is too high, it will overshoot minima and the error might constantly grow (divergence); if you pick a value that is too low, it will get stuck in a local minimum and not be able to continue learning. You can have the world's largest dataset, but if you don't pick the correct range for your learning rate, the model will not learn properly.

In general, if you're not confident about specific hyperparameter ranges, then hyperparameter tuning can be a helpful tool, regardless of your dataset size.

",43651,,,,,10/17/2021 19:28,,,,0,,,,CC BY-SA 4.0 32090,2,,32088,10/17/2021 19:29,,7,,"

Unfortunately, even with large amounts of training data, hyperparameter choices can strongly influence the performance of a trained model.

What you can usually drop when you have large amounts of training data is regularisation. If your training examples cover the function space you are learning really well, then it is harder to overfit the training data. Regularisation choices are also hyperparameters, so you can save some search space and time by ignoring them.

Do we need a hyperparameter tuner if we have a sufficient number of random data for training our ANN model?

Having lots of data may mean that you can use simpler "brute force" architectures and designs, and that the end result is robust over a wider range of hyperparameter choices.

You may still want to tune hyperparameters at least a little though. This tuning can be tedious to drive by manual edits and re-tries, which is where an automated tuner can help.
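As an illustration, a minimal sketch of automated tuning with Keras Tuner might look like the following; the search space, ranges and toy model are assumptions you would adapt to your own problem.

import keras_tuner as kt
import tensorflow as tf

def build_model(hp):
    # the tuner samples values for "units" and "learning_rate" from these ranges
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    lr = hp.Float("learning_rate", 1e-4, 1e-2, sampling="log")
    model.compile(optimizer=tf.keras.optimizers.Adam(lr), loss="mse")
    return model

tuner = kt.RandomSearch(build_model, objective="val_loss", max_trials=10)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=20)
# best_model = tuner.get_best_models(num_models=1)[0]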

",1847,,,,,10/17/2021 19:29,,,,0,,,,CC BY-SA 4.0 32091,1,,,10/17/2021 21:48,,1,40,"

I understand that CNNs are used for image classification, while object detection performs localization + classification of the detected objects. However, in particular for AI on chest radiographs, why is object detection used? If a CNN has 99% accuracy, should object detection still be considered? I see a lot of research papers on object detection with x-ray data, but they don't explain why object detection is better than plain CNN classification. While object detection allows users to see "where" the object is located, does this even matter if we already get such high accuracy? Also, if the location really does matter, can't we just use the heat maps from a CNN?

",48734,,18758,,10/19/2021 12:20,7/18/2022 11:00,When is an object detection approach over a CNN approach appropriate?,,1,0,,,,CC BY-SA 4.0 32092,2,,31998,10/18/2021 8:46,,0,,"

The paper you have provided discusses detecting salient regions by using a contrast determination filter, which operates at various scales to generate saliency maps containing "saliency values" per pixel.

The contrast detection filter used in this paper is better explained with figure 2. Here R1 is the innermost region and the scale of R2 is varied. Note that this is the filter, and not the image itself.

Figure 3, shown in section 3.1 of the paper, indicates how the image appears when filters of varying scales are applied (the filtered images). When R2 has the largest scale (i.e., maximum width), the background is also shown, that is, the non-salient parts are also taken into consideration. As R2 takes up less width, the non-salient parts become almost invisible, which you can see in images 3 to 8 of figure 3. This helps to focus on what's really important in the image, which is the man riding the horse.

If you look at figure 3, you will notice that, although the width of R2 varies, the filtered image size is the same as the original image.
To answer your question, the saliency map is the same size as the original image, as only the filter was scaled, not the image itself.

This is also quoted in the paper section 3.1, page 4

A change in scale is affected by scaling the region R2 instead of scaling the image.
Scaling the filter instead of the image allows the generation of saliency maps of the same size and resolution as the input image.

Update: Let me provide a much simpler explanation. Suppose you, as a human, have a couple of reading glasses (of varying sizes) and you are trying to read a book.

  • The smallest reading glass will help you focus on a small part of the book, and you will often have to look at different parts of the book
  • A medium reading glass will let you focus on a wider part of the book
  • If you have a large reading glass, you can cover the whole book

Now, in the above example, the reading glass acts as the filter, the book acts as your input image, and the output image is what you can see through the reading glass.

No matter the size of your reading glass (the filter), the dimensions of the book haven't changed; only the focus of what is observed changes, i.e., the content or information of the book you see through your reading glass is more focused and everything else is a blur.

So the output image will be the same size as the input image, with the out-of-focus parts appearing as a blur.

",48391,,48391,,12/29/2021 5:29,12/29/2021 5:29,,,,5,,,,CC BY-SA 4.0 32093,2,,27867,10/18/2021 9:55,,1,,"

Based on my experience, I would say that the standard notation is just to have a regular function, and specify that it applies element wise. For example, a common notation for activation functions is $\sigma$, so e.g. you could represent the activations of a regular dense layer as $\sigma(W x + b)$ where $x, b$ are vectors and $W$ is a matrix. I've never seen a special notation for specifying that the function $\sigma$ is applied element wise.

As you suggest in the question, if the function to be applied element-wise is linear, then you can use either the Hadamard product, e.g. $a \circ x$, or the diag function, $\text{diag}(a)x$.

",47080,,,,,10/18/2021 9:55,,,,2,,,,CC BY-SA 4.0 32095,1,,,10/18/2021 12:42,,1,73,"

I'm currently trying to train a custom model with TensorFlow to detect 17 landmarks/keypoints on each of 2 hands shown in an image (fingertips, first knuckles, bottom knuckles, wrist, and palm), for 34 points (and therefore 68 total values to predict for x & y). However, I cannot get the model to converge, with the output instead being an array of points that are pretty much the same for every prediction.

I started off with a dataset that has images like this:

each annotated to have the red dots correlate to each keypoint. To expand the dataset to try to get a more robust model, I took photos of the hands with various backgrounds, angles, positions, poses, lighting conditions, reflectivity, etc, as exemplified by these further images:

I have about 3000 images created now, with the landmarks stored inside a CSV as such:

I have a train-test split of 0.67 train / 0.33 test, with the images randomly assigned to each. I load the images with all 3 color channels and scale both the color values and keypoint coordinates to between 0 and 1.

I've tried a couple of different approaches, each involving a CNN. The first keeps the images as they are, and uses a neural network model built as such:

import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense
from tensorflow.keras.optimizers import Adam

model = Sequential()

model.add(Conv2D(filters = 64, kernel_size = (3,3), padding = 'same', activation = 'relu', input_shape = (225,400,3)))
model.add(Conv2D(filters = 64, kernel_size = (3,3), padding = 'same', activation = 'relu'))
model.add(MaxPooling2D(pool_size = (2,2), strides = 2))

filters_convs = [(128, 2), (256, 3), (512, 3), (512,3)]
  
for n_filters, n_convs in filters_convs:
  for _ in np.arange(n_convs):
    model.add(Conv2D(filters = n_filters, kernel_size = (3,3), padding = 'same', activation = 'relu'))
  model.add(MaxPooling2D(pool_size = (2,2), strides = 2))

model.add(Flatten())
model.add(Dense(128, activation="relu"))
model.add(Dense(96, activation="relu"))
model.add(Dense(72, activation="relu"))
model.add(Dense(68, activation="sigmoid"))

opt = Adam(learning_rate=.0001)
model.compile(loss="mse", optimizer=opt, metrics=['mae'])
print(model.summary())

I've modified the various hyperparameters, yet nothing seems to make any noticeable difference.

The other thing I've tried is resizing the images to fit within a 224x224x3 array to use with a VGG-16 network, as such:

from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Input, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

vgg = VGG16(weights="imagenet", include_top=False,
    input_tensor=Input(shape=(224, 224, 3)))
vgg.trainable = False

flatten = vgg.output
flatten = Flatten()(flatten)

points = Dense(256, activation="relu")(flatten)
points = Dense(128, activation="relu")(points)
points = Dense(96, activation="relu")(points)
points = Dense(68, activation="sigmoid")(points)

model = Model(inputs=vgg.input, outputs=points)

opt = Adam(learning_rate=.0001)
model.compile(loss="mse", optimizer=opt, metrics=['mae'])
print(model.summary())

This model has similar results to the first. No matter what I do, I seem to get the same results, in that my MSE loss bottoms out around 0.009, with an MAE around 0.07, no matter how many epochs I run:

Furthermore, when I run predictions based off the model it seems that the predicted output is basically the same for every image, with only a slight variation between each. It seems the model predicts an array of coordinates that looks somewhat like what a splayed hand might, in the general areas hands might be most likely to be found. A catch-all solution to minimize deviation as opposed to a custom solution for each image. These images illustrate this, with the green being predicted points, and the red being the actual points for the left hand:

So, I was wondering what might be causing this, be it the model, the data, or both, because nothing I've tried with either modifying the model or augmenting the data seems to have done any good. I've even tried reducing the complexity to predict for one hand only, to predict a bounding box for each hand, and to predict a single keypoint, but no matter what I try, the results are pretty inaccurate.

Thus, any suggestions for what I could do to help the model converge to create more accurate & custom predictions for each image of hands it sees would be very greatly appreciated.

",50405,,18758,,10/19/2021 12:26,10/19/2021 12:26,Hand Landmark Detector Not Converging,,0,4,,,,CC BY-SA 4.0 32096,1,,,10/18/2021 13:13,,0,42,"

I'm currently diving into the Bayesian world and I find it pretty fascinating. I've so far understood that applying the Bayes' Rule, i.e. $$\text{posterior} = \frac{\text{likelihood}\times \text{prior}}{\text{evidence}}$$ are most of the time intractable because of the high dimensional parameter space in the denominator. One way to solve this is by using a prior conjugate to the likelihood, as then the analytical form of the posterior is known and calculations are simplified.

So far so good. Now I've read about Bayesian sequential filtering and smoothing techniques such as the Kalman Filter or the Rauch-Tung-Striebel Smoother (find references here). As far as I understood, assuming a time step $k$, instead of calculating the complete posterior distribution $p(X_k|Y_k)$ with $X=[x_1, ...,x_k]$ and $Y=[y_1,...,y_k]$, a Markov chain is assumed and only the marginal $p(x_k|Y_k)$ is estimated in a recursive manner. That is, the posterior calculated at time step $k$ serves as the prior for the next time step. I guess Bayes' rule is somehow involved in these calculations.

Furthermore, both techniques assume the posterior to always be Gaussian, and therefore closed-form solutions are obtained. Now I was wondering which restriction makes the whole process tractable, i.e. eliminates the need to compute the evidence?

I guess it's the Gaussian assumption, i.e. the prior, the predicted, and the posterior distribution are all assumed to be Gaussian, and therefore updated distributions are obtained without computing the evidence - is this correct and does this refer to conjugate distributions?

Or is it the fact that we assume a Markov Chain and do not consider all states at each time step?

",50415,,18758,,10/19/2021 12:27,10/19/2021 12:27,What makes Sequential Bayesian Filtering and Smoothing tractable?,,0,2,,,,CC BY-SA 4.0 32097,1,32409,,10/18/2021 14:47,,2,152,"

This question is assuming a sequential, deep neural network

Given some features [X1, X2, ... Xn], I'm trying to predict some value Y.

The raw data available to me contains feature X1 and feature X2. Say that I know there is an effect on Y based on the ratio of the two features, i.e. X1 / X2.

Should I add a new feature, mathematically defined as the ratio of the two features? I haven't been able to locate any literature which begins to describe the necessity or warnings of this.

Instinctively, I'm worried about the following:

  • Overfitting and the need for excessive regularization, due to duplicate information in the feature set
  • Exponentially growing number of features, since defining a ratio between each feature may be necessary

However, I also recognize that certain relationships are difficult for a deep neural network to represent (e.g. logic gates, exponential relationships, etc.), so when would this sort of "relationship defining" be necessary? For example, if an exponential relationship is known to exist?

",50416,,,,,11/23/2021 6:45,Should I allow NN to infer relationships of inputs?,,3,4,0,,,CC BY-SA 4.0 32098,1,,,10/18/2021 17:05,,0,865,"

This is the vacuum cleaner example of the book "Artificial intelligence: A Modern Approach" (4th edition).

Consider the simple vacuum-cleaner agent that cleans a square if it is dirty and moves to the other square if not; this is the agent function tabulated as follows.

Percept sequence                      Action
[A,Clean]                             Right
[A,Dirty]                             Suck
[B,Clean]                             Left
[B,Dirty]                             Suck
[A,Clean], [A,Clean]                  Right
[A,Clean], [A,Dirty]                  Suck
.                                      .
.                                      .
[A,Clean], [A,Clean], [A,Clean]       Right
[A,Clean], [A,Clean], [A,Dirty]       Suck
.                                      .
.                                      .

The characteristics of environment and performance function are as follow:

  • The performance measure awards one point for each clean square at each time step, over a "lifetime" of 1000 time steps.

  • The "geography" of the environment is known a priori (the above figure) but the dirt distribution and the initial location of the agent are not. Clean squares stay clean and sucking cleans the current square. The Right and Left actions move the agent one square except when this would take the agent outside the environment, in which case the agent remains where it is.

  • The only available actions are Right, Left, and Suck.

  • The agent correctly perceives its location and whether that location contains dirt.

In the book, it is stated that under these circumstances the agent is indeed rational. But I do not understand percept sequences that consist of multiple [A, Clean] percepts, e.g. {[A, Clean], [A, Clean]}. In my opinion, after the first [A, Clean], the agent must have gone to the right square; so, the sequence {[A, Clean], [A, Clean]} will never be perceived.

In other words, a second perception of [A, Clean] would be the consequence of taking the Left or Suck action after perceiving the first [A, Clean]. Therefore, we could conclude the agent is not rational.

Please, help me to understand it.

",50418,,2444,,10/19/2021 12:30,10/19/2021 12:30,Why is this vacuum cleaner agent rational?,,1,0,,,,CC BY-SA 4.0 32101,2,,32057,10/18/2021 20:26,,2,,"

I've found a rather good explanation on Quora.

Max pooling extracts the most salient features - edges, cusps, whatever.

Average pooling operates smoothly - collects features, that are relevant to any part of the image.

Max pooling throws away some information; it can be thought of as a kind of "forgetting", whereas average pooling depends on the whole input, even though the output representation is compressed.

There are cases where each of these operations may not be good for feature extraction (from here):

In the first case, max pooling will simply produce a white background, and in the second, after average pooling, one will get a pale grey strip (although I think it should be lighter than depicted).

It would probably make sense to mix these two approaches, and pool half of the filters with max pooling and the other half with average pooling, although I am not aware of this approach being used in the literature.

The most flexible and expressive approach is the strided convolution (average pooling, for example, is a particular case of it, as sketched below), but it introduces a certain (although not big) additional cost for storing the new parameters.
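As a concrete illustration (my own sketch, not taken from the sources above), 2x2 average pooling is exactly a stride-2 convolution whose kernel weights are fixed to 1/4:

import torch
import torch.nn.functional as F

x = torch.randn(1, 1, 4, 4)                    # one single-channel 4x4 "image"

avg = F.avg_pool2d(x, kernel_size=2, stride=2) # 2x2 average pooling

w = torch.full((1, 1, 2, 2), 0.25)             # fixed (non-learned) 1/4 weights
conv = F.conv2d(x, w, stride=2)                # the equivalent strided convolution

print(torch.allclose(avg, conv))               # True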

",38846,,,,,10/18/2021 20:26,,,,0,,,,CC BY-SA 4.0 32103,1,,,10/19/2021 5:25,,1,132,"

Can I use transformers for the prediction of wind power from historical data?

Dataset

Datetime, Ambient temperature (Degree), Dewpoint (Degree), Relative Humidity (%), Air Pressure, Wind Direction (Degree), Wind Speed at 76.8 m (m/sec), Power Generated (kW).

15 years of data, from 2007 to 2021, with a sampling time of 1 hour.

",50428,,2444,,10/19/2021 12:17,10/19/2021 12:17,Can I use the transformers for the prediction of historical data?,,1,1,,,,CC BY-SA 4.0 32105,1,32153,,10/19/2021 7:30,,3,130,"

I know the encoder is variational posterior $q_{\phi}(\mathbf{z} \mid \mathbf{x})$.

I also know that the decoder represents the likelihood: $p_{\theta}(\mathbf{x} \mid \mathbf{z})$.

My question is about the prior $\mathrm{p}(\mathbf{z})$.

I know ELBO can be written as:

$$E_{q_{\phi}(\mathbf{z} \mid \mathbf{x})}[\log (p_{\theta}(\mathbf{x} \mid \mathbf{z}))]-\mathrm{D}_{\mathrm{KL}}( q_{\phi}(\mathbf{z} \mid \mathbf{x}) \| \mathrm{p}(\mathbf{z})) \leq \log (p_{\theta}( \mathbf{x}))$$

And for the VAE, the variational posterior is

$$ q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x}^{(i)})= \mathcal{N}( \boldsymbol{\mu}^{(i)}, \boldsymbol{\sigma}^{2(i)} \mathbf{I}),$$

and prior is

$$ \mathrm{p}(\mathbf{z})=\mathcal{N}( \boldsymbol{0}, \mathbf{I}).$$

So

$$\mathrm{D}_{\mathrm{KL}}\left(q_{\phi}(\mathbf{z} \mid \mathbf{x}) \| \mathrm{p}(\mathbf{z})\right)=-\frac{1}{2} \sum_{j=1}^{J}\left(1+\log \left(\sigma_{j}^{2}\right)-\sigma_{j}^{2}-\mu_{j}^{2}\right)$$
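For reference, a short sketch of where this term comes from, assuming the diagonal-Gaussian posterior and standard-normal prior above:

$$\mathrm{D}_{\mathrm{KL}}\left(q_{\phi}(\mathbf{z} \mid \mathbf{x}) \,\|\, \mathrm{p}(\mathbf{z})\right)=\mathbb{E}_{q_{\phi}}\left[\log q_{\phi}(\mathbf{z} \mid \mathbf{x})-\log \mathrm{p}(\mathbf{z})\right]=\frac{1}{2} \sum_{j=1}^{J}\left(\mu_{j}^{2}+\sigma_{j}^{2}-\log \sigma_{j}^{2}-1\right),$$

which is zero exactly when $\boldsymbol{\mu}=\mathbf{0}$ and $\boldsymbol{\sigma}^{2}=\mathbf{1}$, so the prior acts as a regulariser that pulls the approximate posterior towards $\mathcal{N}(\mathbf{0}, \mathbf{I})$.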

That's one way I know the prior plays a role, in helping determine part of the loss function.

Is there any other role that the prior plays for the VAE?

",46842,,46842,,10/23/2021 2:12,10/23/2021 2:12,What are the roles of the prior $\mathrm{p}(\mathbf{z})$ in a VAE?,,1,0,,,,CC BY-SA 4.0 32106,2,,32098,10/19/2021 9:00,,1,,"

You are correct that the robot should never perceive two [A, Clean] events in a row, as it should have moved when perceiving the first one.

However, this only applies to a perfect agent, which always successfully executes all its actions. What if the dog walks past and blocks the exit to room B? Then the Right action fails, and the agent is still in A. But now, it might only listen to [B, ?] events, because obviously it must now be in room B, since it executed a Right command.

Having a simple tabulated agent function like this is clearly not adequate for real world problems, as where do you stop? How many [A, Clean] events do you want to list? And, if you start again, why do you have lists of perceptions in the first place? The initial [A, Clean] in the list should suffice, as it would still attempt to move right if it processes perceptions with no internal state/memory.

But bearing in mind that physical actions can fail in the real world, the agent is still rational, it's just overly cautious: it doesn't try to suck when the location is clean, or move away when it's dirty. It just wants to make sure it's really clean before moving away.

",2193,,,,,10/19/2021 9:00,,,,1,,,,CC BY-SA 4.0 32107,2,,32103,10/19/2021 9:53,,1,,"

Transformers, being a general-purpose sequence model can be used for Time-Series forecasting.

There are some papers dedicated to the use of Transformer for time-series prediction and blogs.

The main ingredient for autoregressive prediction is the attention mask in the Transformer: when the next element is predicted, tokens in the sequence attend only to the tokens back in time.
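For illustration, a minimal PyTorch sketch of such a causal (look-back-only) mask; this is an assumption of how one would typically build it, not code from the papers linked above.

import torch

seq_len = 5
# position i may attend only to positions <= i; True marks the entries to be masked out
mask = torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()
print(mask)
# before the softmax, the attention scores would be filled with -inf at the masked positions:
# scores = scores.masked_fill(mask, float("-inf"))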

After each block a new element is predicted, based on the decoder and encoder tokens.

However, since the dimensionality of your data seems to be rather small, I would suggest starting from something simpler - say linear AR models or RNN, and only then work with transformers.

",38846,,,,,10/19/2021 9:53,,,,0,,,,CC BY-SA 4.0 32109,1,,,10/19/2021 11:37,,0,177,"

I'm studying the series of Wav2Vec papers, in particular, the vq-wav2vec and wav2vec 2.0, and have a problem understanding some details about the quantization procedure.

The broader context is this: they use raw audio and first convert it to "features" $z$ via a convolutional network. Then they project any feature $z$ to a "quantized" element $\hat{z}$ from a given finite codebook (or concatenation of finitely many finite codebooks). To find $\hat{z}$, they compute scores $l_j$ for each codebook entry $v_j$, convert these scores to Gumbel-Softmax probabilities $p_j$ (using a formula which is not deterministic, the formula involves random choices of some numbers from some distribution) and then use these probabilities $p_j$ to choose $\hat{z}$. Further stages of the pre-training pipeline are trained to predict $\hat{z}'s$ by either predicting "future" from the "past", or "reconstructing masked segments".

My question is this is about this sentence:

During the forward pass, $i = \text{argmax}_j p_j$ and in the backward pass, the true gradient of the Gumbel-Softmax outputs is used.

  • I have trouble seeing what exactly is happening in the loss function and back-propagation. Could someone please help me to break this down into details?

My mental attempts to make sense out of it (I'm using the notation $\hat{z}$ for quantized vectors, in the second paper they use $q$)

(1) I would say that during the forward pass, in the Gumbel-Softmax, random variables from the Gumbel-distribution $n_j$ are sampled every time (for every training example) to compute the Gumbel-softmax probabilities $p_j$.

(1a) In the back-propagation, these $n_j$'s are kept constant, and $p_j$ is treated as a function of $l_j's$ only.

(2) The loss function has 2 parts here, Contrastive loss and Diversity loss.

(2a) Based on the description, I would say that in the contrastive loss, the "sampled" vectors $\hat{z}_j$ are used, and probabilities never appear (even not in back-propagation of this part of the loss).

(2b) I would believe that in the gradient of the Diversity loss, which only uses probabilities $p_{g,v}$, that here the gradient or the loss actually is used, as this is responsible for maximizing the entropy. This part of the gradient probably does not use the sampled values $\hat{z}_j$.

Is this approximately correct?

If yes, then I still fail to understand what exactly is happening in the vq-wav2vec paper. The sentence

During the forward pass, $i = \text{argmax}_j p_j$ and in the backward pass, the true gradient of the Gumbel-Softmax outputs is used.

is there as well, but I cannot see any part of the loss function (in this paper) where the probabilities are explicitly used (such as the diversity loss).

",9092,,9092,,10/24/2021 21:11,10/24/2021 21:11,Understanding gumbel-softmax backpropagation in Wav2Vec papers,,0,5,,,,CC BY-SA 4.0 32110,1,,,10/19/2021 12:05,,3,580,"

I heard from many people about the paper titled Attention Is All You Need by Ashish Vaswani et al.

What actually does the "attention" do in simple terms? Is it a function, property, or some other thing?

",18758,,18758,,10/19/2021 22:45,10/20/2021 12:45,"In layman terms, what does ""attention"" do in a transformer?",,2,0,,,,CC BY-SA 4.0 32111,1,,,10/19/2021 13:26,,2,172,"

I am trying to figure out how multiprocessing works in neural networks.

In the example I've seen, the database is split into $x$ parts (depending on how many workers you have) and each worker is responsible for training the network using a different part of the database.

I am confused regarding the optimization part:

Let's say worker 1 finishes calculating its gradient first; it will now update the network accordingly.

Then worker 2 finishes its calculation and will also attempt to update the weights. However, the gradient it calculated was for the network before it was updated by the first worker. Now, the second worker will attempt to update the network with a stale (outdated) gradient.

Did I miss something?

",47484,,2444,,10/22/2021 13:01,10/22/2021 13:01,How to train neural networks with multiprocessing?,,0,5,,,,CC BY-SA 4.0 32112,2,,32110,10/19/2021 14:05,,4,,"

Let's start by stressing that, in the literature, the term attention is unfortunately still used widely without any precise consensus around the technical details. The only constant across papers is that attention should be used when a model is capable of learning, or focusing on, local vs global patterns in the data we use for training. And by "should be used" I simply refer to the fact that everyone likes to feel eligible to write "Hey, we used attention!", simply because of the hype generated by the introduction of transformers by Vaswani et al.

Said that, I think up to this point the best expression to describe attention is:

A specific type of architecture

What do I mean by this: Vaswani et al. introduced the expression attention in the paper you cite together with a whole new machine learning architecture, namely the transformer. In the paper, attention is used to refer to a specific set of layers, similarly to how we call residual blocks or dense blocks specific types of layer combinations that were introduced for convolutional neural networks. For me, there is no difference at all between attention and the two above-mentioned examples. The confusion around the use of this expression, in my opinion, arose from the fact that Vaswani et al. put a lot of emphasis on the final purpose of the new proposed model, i.e. capturing local similarities within sentences in machine translation.

One last consideration why I think that architecture is the best label for attention is that it also includes types of attention that are completely different from the multi-head attention module introduced by Vaswani et al., like architectures that leverage attention maps. Mathematically, attention maps and the multi-head attention module share nothing but the name; still, because conceptually they seem to fulfill the same purpose, we call them both attention, with the consequence that, to avoid confusion, one should always refer to a specific paper when talking about attention.
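For concreteness, here is a minimal NumPy sketch of the scaled dot-product attention computation that the multi-head attention module of Vaswani et al. is built around (a single head, without the learned projections):

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    # softmax(Q K^T / sqrt(d)) V, with the softmax taken row-wise over the keys
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (n_q, n_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the keys
    return weights @ V                                  # weighted sum of the values

Q, K, V = np.random.randn(4, 8), np.random.randn(6, 8), np.random.randn(6, 8)
print(scaled_dot_product_attention(Q, K, V).shape)      # (4, 8)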

",34098,,,,,10/19/2021 14:05,,,,3,,,,CC BY-SA 4.0 32113,2,,24672,10/19/2021 14:29,,1,,"

After a long wait and some digging, I accidentally found what I was looking for. In 2015, the Polish researcher Dominika Tkaczyk published an article presenting CERMINE, a solution for the posted problem.

The solution is SVM-based, but the article gives good insights for an alternative neural-network version.

The article is open access and can be found on the Springer website, while all the source code is available on GitHub.

",42356,,42356,,10/19/2021 15:17,10/19/2021 15:17,,,,0,,,,CC BY-SA 4.0 32114,1,32117,,10/19/2021 15:19,,2,82,"

Modern cars can operate high-beam headlights automatically:

  • They automatically switch from high-beam headlights to low-beam ones (less intense) when you enter a town or there is a car in front of you either going in the same or opposite direction so you don't dazzle other drivers or people in the street.

  • Oppositely, when you are in almost complete darkness and there aren't any other drivers at sight, the system automatically sets the high-beam headlights.

I am aware that in the front part of these cars there is a camera or sensor, and I also imagine that the automatic switching when entering or exiting a town is achieved by just having a threshold on ambient illumination.

But I am unable to imagine how the recognition of other cars works. It might be that an image recognition program is used to detect pairs of front car lights (white) and rear lights (red). However, how do you deal with:

  • pairs of street lamps (far from the road and not illuminating it) that could be identified as a car coming in the opposite direction,
  • pairs of lights coming from the reflectors of the crash barriers,
  • many other random pairs of lights that could be interpreted as cars.

Is this technology based on AI software that after intense training is able to deal with these points? Or is it a less complex image analysis program that takes into account that moving lights outside the car (with respect to the road) move differently than static lights (with respect to the road) when seen from the car?

Edit: I've seen this technology working on Audi A4 and A5 cars.

",50448,,50448,,10/26/2021 13:09,11/1/2021 13:03,How do automatic high-beam headlights work on cars?,,1,0,,,,CC BY-SA 4.0 32115,1,,,10/19/2021 17:20,,1,111,"

Recently, I was going through the paper Intriguing Properties of Contrastive Losses. In the paper (section 3.2), the authors try to determine how well the SimCLR framework has allowed the ResNet-50 model to learn good-quality/generalised features that exhibit hierarchical properties. To achieve this, they make use of K-means on intermediate features of the ResNet-50 model (intermediate meaning the outputs of blocks 2, 3, 4, ...), and I quote the reason below

If the model learns good representations then regions of similar objects should be grouped together.

Final Results:

I am trying to replicate the same procedure but with a different model (like VggNet, Xception).

Are there any resources explaining how to perform such visualizations?

",50445,,2444,,10/22/2021 12:57,10/22/2021 12:57,How to use K-means clustering to visualise learnt features of a CNN model?,,0,0,,,,CC BY-SA 4.0 32117,2,,32114,10/19/2021 17:45,,1,,"

I don't have domain knowledge on how these cars perform this detection, but, between my EE and ML degrees, I'm quite confident they are most likely not using any AI software to perform this prediction. That would be similar to using AI to control the IR bathroom sensor of an automatic paper dispenser.

What they are probably doing is just simply looking at light intensity. The reflectors you provide in the image will have a luminosity much lower than a car light, so from a threshold perspective I would assume it is easy to distinguish car lights from a reflector.

Here is a video from a guy talking about it: youtube.com/watch?v=teilITzPjPU. He shows an example where the light intensity is so high for some signs that the high beams turn off. I would guess that, at 300 meters, a directed car light would have a higher average area luminosity than most street reflectors, but I can't verify whether that's true.

# Illustrative pseudocode: the decision reduces to a simple intensity threshold
if very_bright_light:
    turn_off_highbeams()
else:
    turn_on_highbeams()

I'm sure there is more complexity to it, such as the time frame of exposure, or perhaps car lights emit light at specific frequencies that are easily detectable.

Hope this helps in the event that a domain expert doesn't show up.

",43651,,43651,,11/1/2021 13:03,11/1/2021 13:03,,,,4,,,,CC BY-SA 4.0 32118,2,,32063,10/19/2021 20:07,,1,,"

You don't have a full reinforcement learning problem, but appear to have a context-free k-armed bandit problem:

  • The start state at time $t$ is essentially irrelevant to the problem. It does not impact available actions, reward or next state.

  • The next state at time $t+1$ is only of interest because it determines the reward.

  • All actions are effectively independent events, unaffected by prior history of the system.

As far as the agent is concerned, you can ignore the state. It may be occurring mechanically within the environment, but the agent can be optimised by observing reward values following each action. It does not need to observe the state, because there is nothing to learn from it, and there is no point in having a policy function with state as an input argument.

If your action space is small, you can use any one of a number of optimisers for k-armed bandits.

If your action space is large, you may need to use a gradient bandit of some kind (very similar to policy gradient methods used in RL, except there is no input layer, since using the state value as input to the function would be counter-productive).
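
For illustration, here is a minimal sketch of an epsilon-greedy k-armed bandit with incremental sample-average estimates (the pull function standing in for your environment is a made-up placeholder):

import numpy as np

def run_bandit(pull, k, steps=10_000, epsilon=0.1):
    q = np.zeros(k)   # estimated value of each action
    n = np.zeros(k)   # number of times each action was taken
    for _ in range(steps):
        if np.random.rand() < epsilon:
            a = np.random.randint(k)   # explore
        else:
            a = int(np.argmax(q))      # exploit
        r = pull(a)                    # reward observed from the environment
        n[a] += 1
        q[a] += (r - q[a]) / n[a]      # incremental sample-average update
    return q

# e.g. q = run_bandit(lambda a: np.random.randn() + a, k=5)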

",1847,,,,,10/19/2021 20:07,,,,0,,,,CC BY-SA 4.0 32120,1,,,10/19/2021 21:22,,1,38,"

A lot of recent research on Transformers has been devoted to reducing the cost of the self-attention mechanism:

$$\text{softmax}\left(\frac{Q K^T}{\sqrt{d}} \right)V,$$

As I understand it, the runtime, assuming $\{Q, K, V\}$ are each of shape $(n, d)$, is $O(n^2 d + n d^2)$. In general, the issue is the $n^2 d$ term, because the sequence length $n$ can be much bigger than the model dimension $d$. So far, so good.

But as far as I can tell, current research focuses on speedups for $Q K^T$, which is $O(n^2 d)$. There's less focus on computing $A V$, where $A = \text{softmax} \left(\frac{Q K^T}{\sqrt{d}} \right)$ -- which also has complexity $O(n^2 d)$.
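
For concreteness, here is a small NumPy sketch of the two products (the shapes are illustrative, nothing else):

import numpy as np

n, d = 1024, 64                        # sequence length and model dimension (illustrative)
Q, K, V = np.random.randn(3, n, d)

scores = Q @ K.T / np.sqrt(d)          # (n, n): roughly n^2 * d multiply-adds
A = np.exp(scores - scores.max(axis=-1, keepdims=True))
A = A / A.sum(axis=-1, keepdims=True)  # row-wise softmax
out = A @ V                            # (n, d): also roughly n^2 * d multiply-adds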

Why is the first matrix product the limiting factor?

Examples of these faster Transformer architectures include Longformer, which approximates $QK^T$ as a low-rank-plus-banded matrix, Nystromformer, which approximates $\text{softmax}(QK^T)$ as a low-rank matrix with the Nystrom transformation, and Big Bird, which approximates it with a low-rank-plus-banded-plus-random matrix.

",50457,,50457,,10/22/2021 16:34,10/22/2021 16:34,Why does research on faster Transformers focus on the query-key product?,,0,0,,,,CC BY-SA 4.0 32122,1,32123,,10/20/2021 2:39,,2,973,"

I have recently been studying GNNs, and the fundamental idea seems to be the aggregation and transfer of information from a node's neighborhood to update the node's internal state. However, there are few sources that discuss the implementation of GNNs in code, specifically how GNNs adapt to the differing numbers of nodes and connections in a dataset.

For example, say we have two graphs that look like this:

It is clear that the number of weights required in the two data points would be different.

So, how would the model adapt to this varying number of weight parameters?

",50461,,2444,,10/22/2021 13:43,10/22/2021 13:43,How do graph neural networks adapt to different number of nodes and connections of different graphs?,,1,0,,,,CC BY-SA 4.0 32123,2,,32122,10/20/2021 6:18,,1,,"

The essence of the reason why this approach works for graphs with different numbers of nodes is locality and node-order permutation invariance.

The typical form of the layer-wise signal propagation rule is: $$ H^{(l+1)} = f(H^{(l)}, A) = \sigma (A H^{(l)} W^{(l)}) $$

Here $H^{(k)}$ are the activations of the $k$-th layer, $W^{(k)}$ is the weight matrix, $A$ is the adjacency matrix and $\sigma$ is the activation function.

The activation function $\sigma$ and the weight matrix $W^{(k)}$ are the same for any graph; the difference is only in the choice of the adjacency matrix $A$.

Aggregation of the information from the neighborhood is done in a permutation-invariant way, and the only way to do this is to assign the same weight to every member of the neighborhood, and (possibly) some other weight to the node itself.

For Graph Convolutional Neural Networks (GCNN's) this works as follows: $$ h_{v_i}^{(l+1)} = \sigma (\sum_{j \in N(i)} \frac{1}{c_{ij}} h_{v_j}^{(l)} W^{(l)}) $$

Regardless of whether the node is isolated or has many neighbors, the procedure is unchanged.
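
To make the weight sharing explicit, here is a minimal NumPy sketch (using a simple row normalization of $A$, not the exact normalization from the GCN paper):

import numpy as np

def gcn_layer(A, H, W):
    # One propagation step sigma(A H W); sigma is a ReLU here.
    D_inv = 1.0 / np.maximum(A.sum(axis=1, keepdims=True), 1)  # simple row normalization
    return np.maximum(D_inv * (A @ H) @ W, 0)

d_in, d_out = 8, 16
W = np.random.randn(d_in, d_out)             # the same weight matrix for every graph

for n_nodes in (5, 50):                       # two graphs with different numbers of nodes
    A = (np.random.rand(n_nodes, n_nodes) < 0.2).astype(float)
    H = np.random.randn(n_nodes, d_in)
    print(gcn_layer(A, H, W).shape)           # (5, 16) and (50, 16)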

Older approaches, the spectral ones for example, had to calculate the graph Laplacian and perform its eigendecomposition. They did not generalize to other graphs.

",38846,,,,,10/20/2021 6:18,,,,6,,,,CC BY-SA 4.0 32124,1,,,10/20/2021 9:00,,1,770,"

I have been reading about Adam and AdamW (Here). The author mentioned that, in the "uncentered variance", we don't consider subtracting the mean.

In this statement, the author is talking about uncentered variance and how it becomes equal to the square of the mean.

I want to understand what exactly uncentered variance is. (If I consider the general equation of variance, $\frac{\sum_i (x_i - \bar{x})^2}{n-1}$, then it does not make sense to me how removing the mean leads to the definition of uncentered variance, as we need a point around which we are calculating the variance, and here the mean is that point.)

Also, if we set mean $= 0$ (i.e. we do not subtract the mean from the observations) and I consider this as the uncentered variance (for me, it is the variance around $0$), then it is hard to understand how this will lead to uncentered $\text{variance} = \text{mean}^2$.

",34365,,18758,,10/20/2021 12:41,1/19/2022 15:13,What is uncentered variance and how it becomes equal to mean square in Adam?,,1,2,,,,CC BY-SA 4.0 32125,2,,18573,10/20/2021 9:42,,0,,"

The Policy Iteration algorithm (given in the question) is model-based.

However, note that there exist methods that fall into the Generalized Policy Iteration category, such as SARSA, which are model-free.

From what I understand, policy iteration is a model-free algorithm

Maybe this was referring to generalized policy iteration methods.


(Answer based on comments from @Neil Slater.)

",50473,,50473,,10/21/2021 17:19,10/21/2021 17:19,,,,1,,,,CC BY-SA 4.0 32126,2,,32110,10/20/2021 12:45,,0,,"

The answer above is very concise, but I will try to give an ELI5 example. I also agree with @nbro that attention does not exclusively mean the Transformer architecture.


Before attention

What is the height of the youngest female child of the father of your mother's first cousin? That query is convoluted and depends on your having a good memory of your family tree's relationships. It looks like this:

$Q_{RNN} = f(Mother(first\_cousin(father(youngest\_female(height)))))\ \ (1)$

What you see in (1) is how Recurrent Neural Networks function, and it is obvious that their performance is inversely correlated with the length of the input.

After attention

A more efficient way to query the height of that person is by giving her a name - Aligna. Now, whenever you need to get characteristics of that person, you don't have to remember the "chained" relationships backwards from the person that is closest to you. This new approach would look like this:

$Q_{\text{attentive RNN}} = f(Aligna(height))\ \ (2)$

The latter is essentially what attention pooling does. Attention establishes a direct link with all input steps (e.g. words in an input sentence), so that it can independently pay attention to the input (or set of inputs) that will generate a successful output/prediction.

",40560,,,,,10/20/2021 12:45,,,,0,,,,CC BY-SA 4.0 32128,1,,,10/20/2021 13:35,,0,58,"

In this paper, in section 3.1, the authors state

Scaling the filter instead of the image allows the generation of saliency maps of the same size and resolution as the input image.

How is this possible?

From what I have understood, the process of filtering the image is similar to that of a convolution operation, like this:

However, if this is true, shouldn't we get different sized outputs (i.e. saliency maps) for different filter sizes?

I think I am misunderstanding how the filtering process really works in that it actually differs from a CNN. I would highly appreciate any insight on the above.

Note: This is a follow-up to this question.

",,user48670,,user48670,10/22/2021 18:15,10/22/2021 18:15,"In this paper, how does scaling the filter instead of the image generate saliency maps of the same size and resolution as the input image?",,0,2,,,,CC BY-SA 4.0 32129,2,,32042,10/20/2021 14:55,,1,,"

I'm not an expert, but, as far as I understand, you should use an off-policy algorithm. The difference between on-policy and off-policy is:

  • On-policy: the agent learns the value function according to the current action, derived from the policy currently being used.
  • Off-policy: the agent learns the value function according to the action derived from another policy.

This means that you can use another policy to explore. For example, if you use Q-learning (not your case, because of the continuous values of your problem), which is an off-policy approach, you can explore with a particular policy to get the actions (you can only select valid actions), and then you can update your Q-table with the Q-learning equation.

In your case, you can use an off-policy deep RL approach. I suggest DDPG/TD3; you can read briefly about them here.

The idea is to use an exploration policy, the one you restrict to only select valid values (a hard constraint), and store the (state, action, reward, next state) tuples in the replay buffer. The Stable-Baselines library doesn't allow that, but you could check the original source code of TD3.

Edit1:

If you look at the Q-learning algorithm, $\epsilon$-greedy consists of selecting, with probability $\epsilon$, $a \gets \text{any action}$, and, with probability $1-\epsilon$, $a \gets \arg\max_{a}Q(s,a)$. This $\text{any action}$ is the part of the code where you use this "controller" to only select random (but valid) actions. This is because you want to explore, but only with valid actions. Then Q-learning can "exploit" by picking the best action based on the exploration you did before. Now, for your case with continuous actions, you can use DDPG/TD3 to do something similar, but you store these valid actions in a replay buffer, so your neural network can learn from this "data" of only valid actions.

Edit 2:

In your custom environment you can define your action space like:

self.action_space = gym.spaces.Box(low=-1, high=1, shape=(1,))

Now, as you said, in the step function of your environment, you can establish $x(t)$ and $y(t)$:

maxX = 10  # depends on the maximum value of your x(t); I assigned 10
maxY = 10  # depends on the maximum value of your y(t); I assigned 10
x = 0
y = 0
if action > 0:
    y = 0
    x = action * maxX
elif action < 0:
    x = 0
    # multiply by -1 because the action is negative
    y = -1 * action * maxY
# do the rest of the code of your controller with x and y

In this way, your RL agent will learn which action (between -1 and 1) will get the best reward, but in the step function, you map the action [-1 +1] to your true values.

",49444,,49444,,11/7/2021 13:24,11/7/2021 13:24,,,,25,,,,CC BY-SA 4.0 32130,1,,,10/21/2021 0:43,,1,46,"

I'm wondering if there is a way to use a neural network that can detect a noisy sine wave, where the frequency is not constant. In other words, I'm not looking for a solution that would detect a signal of one particular frequency (say 50 Hz), but a solution that can detect any sine wave signal in, say, the range 100-1000 Hz.

",50487,,2444,,10/22/2021 13:53,10/22/2021 13:53,How to detect the sine wave signal with different frequency using neural networks?,,0,1,,,,CC BY-SA 4.0 32131,2,,32091,10/21/2021 8:15,,1,,"

I think you might have misunderstood 2 concepts here: CNNs and Object Detection.

Object detection is an AI approach to solving problems where you are interested in both the location and the classification of key elements in the image. On the other hand, image classification is another approach, where you are interested in classifying the whole image with a tag.

Those are very well-known approaches to solving computer vision problems with AI. There are lots of approaches; you select the one that outputs the information you are most interested in.

A CNN is a network type. You can build object detectors or image classifiers with CNNs, but you can also build them with Transformers, with multi-layer perceptrons, with LSTMs; there are even some approaches based on reinforcement learning.

Going back to your problem of AI for chest radiographs, when you see 99% accuracy, it is probably in image classification (predicting the probability of a broken bone in the image). On the other hand, object detection is a more informative approach because it locates the places where a bone could be broken, along with the probability of that bone being broken. It is more informative because the doctor now knows where to look in the chest radiograph.

",26882,,,,,10/21/2021 8:15,,,,0,,,,CC BY-SA 4.0 32132,1,,,10/21/2021 13:35,,1,238,"

I am working on a project consisting of medical images and a huge dataset of multi-label and non-binary labels/outcomes (sex, blood pressure, age, and 40 more).

Would the best approach be to hard-code all of them, or is there some better approach? If this is the best way, does anyone have a similar PyTorch notebook that I could use to orient myself? Or some smart solution for how to hard-code them automatically?

Any help is welcome!

",50500,,,,,11/15/2022 20:04,Multi label classification on non binary labels with pytorch,,1,2,,,,CC BY-SA 4.0 32133,2,,7853,10/21/2021 14:14,,1,,"

The simple answer to your question is "No" with a caveat.

The caveat is that there are signs that your network is never going to perform well. For example, the epoch accuracy fails to improve or even consistently declines over the first several epochs, or the validation accuracy is flat or declining. It could be that the validation loss starts high and just keeps increasing from the beginning. These are all bad signs.

Outside of this, however, it's very tough to know the model won't work well in the long run. For example, we have a model we built for solving a set of CAPTCHAs. The regression portions of that converged very quickly, but the portions that solved the rest of the CAPTCHA took something like 18 hours before they converged. Honestly, we only ran it that long because it was the end of the day and the regression piece looked so promising; there was nothing in the training behavior of the CAPTCHA solver that looked like it would work (even though our intuition was that it should.)

In the end, we have a 96%+ accuracy CAPTCHA solver that we likely would have killed if we had watched it train for more than 10 or 15 minutes.

",30426,,,,,10/21/2021 14:14,,,,0,,,,CC BY-SA 4.0 32135,1,,,10/21/2021 16:28,,1,213,"

I'm using a deep-learning-based model (DeepLab v3+ with Xception as the backbone) for image segmentation and background removal. The subject of the image will be a person, and my target is to extract the person from the image. But with the machine learning model alone, the output is not satisfactory. I'm wondering if I can somehow detect the edge of the person with the Canny edge detector and use this as a post-processor to get an exact output. I also tried a blur effect for smoothing the object's edge and getting a pleasant-looking output. I saw Adobe Photoshop's "select subject" feature; it is quite accurate. Main image -> segmented mask from ML model -> adding blur effect on the edge

Canny Edge Detector as Post Processor: I tried the following steps:

  1. Run Canny Edge on the main image. canny edge image
  2. Remove the background noise(of the canny edge image) using the image segmentation mask(I got from the ml-model). Here I use bitwise-and of the canny edge image and segmentation mask. background noise removed canny edge image
  3. In the 2nd step, some of the real edges of the person also get removed, and to restore those, I have used the breadth-first-search algorithm and run it up to some limit(suppose, up to 10 neighbors). after BFS
  4. I shrink the segmentation mask and using this, removed the interior noise of the canny edge image. removing the inside noise

Now if I could somehow connect the edge line of the canny edge image, we would get a perfect segmentation. But I failed to do so.

Any suggestion will be helpful for me.

",50496,,,,,10/21/2021 16:28,Is there any way to remove background of an image fully with the help of post-processor techniques(like edge detector) after deep learning based model,,0,0,,,,CC BY-SA 4.0 32136,1,,,10/21/2021 17:17,,2,28,"

As part of a talk I'm giving, I'd like to show one of the many videos on YouTube where an AI is playing Mario, such as this one. What bothers me though is that the AI is trying to complete the level as quickly as possible, without trying to collect lots of coins. I believe it's known as "speed run" in the gaming world, which is fine and well, but I think most people expect Mario to be collecting lots of coins and mushrooms.

Are you familiar with a video of an AI-powered Mario that does collect a lot of coins and mushrooms?

If not, maybe you know a different video of a similarly popular video game where the AI does try to get lots of points and not just do a speed run?

",25904,,,,,10/21/2021 17:17,Demonstration of AI-powered Mario collecting lots of coins?,,0,0,,,,CC BY-SA 4.0 32138,2,,32132,10/21/2021 18:48,,0,,"

If your goal is, given an image, to predict multiple labels (each of which can be binary or multi-class), you could consider two strategies:

  • Create a separate model for each classification task, each of which solves only one problem
  • Create a single model with multiple heads

The first option seems to be more straightforward, but it would most likely consume more memory and computational resources, and the gradient signal from the prediction of one model is independent of the other models. In case the labels are uncorrelated or only weakly correlated, this is not a problem.

The second option, where you have a joint backbone for all classification problems and only at the end does the computation graph split into branches solving each of the classification problems, seems to be more efficient. In the case where these tasks are related to each other, an improvement in accuracy on one task is very likely to be beneficial for the other tasks.

Overall, the resulting architecture resembles the Inception architecture:

You can try to put all classification heads at the very end, or some can branch off from the rest of the network a bit earlier.
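
As a minimal PyTorch sketch of the multi-head option (the backbone, head sizes and label names below are made-up placeholders, not a recommendation for your specific data):

import torch
import torch.nn as nn

class MultiHeadClassifier(nn.Module):
    def __init__(self, feature_dim=512, n_sex=2, n_age_bins=10, n_bp=3):
        super().__init__()
        # Shared backbone; replace with any CNN that outputs `feature_dim` features.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feature_dim), nn.ReLU())
        self.sex_head = nn.Linear(feature_dim, n_sex)
        self.age_head = nn.Linear(feature_dim, n_age_bins)
        self.bp_head = nn.Linear(feature_dim, n_bp)

    def forward(self, x):
        h = self.backbone(x)
        return self.sex_head(h), self.age_head(h), self.bp_head(h)

model = MultiHeadClassifier()
x = torch.randn(4, 1, 64, 64)                 # dummy batch of images
logits_sex, logits_age, logits_bp = model(x)
# The training loss is then a (possibly weighted) sum of one cross-entropy per head.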

",38846,,,,,10/21/2021 18:48,,,,0,,,,CC BY-SA 4.0 32139,1,,,10/21/2021 20:13,,1,13,"

I want to build a deep learning model that can predict a continuous value (LogP in this case) given text inputs (SMILES notations in this case), the dataset is as illustrated below.

SMILES notation      LogP
C1CCCC(C)(C)1        1.98
...                  ...

I have never tackled text data, I mostly worked with numbers-based datasets (or images).

My questions are:

  • What is the best model for this case? I believe RNN based architectures, such as LSTM and GRU, are the most suitable.
  • What about recent architectures such as Transformers?
  • How can/should I convert (or embed, or encode) the text inputs (SMILES) to feed them to my model?
",49798,,2444,,10/22/2021 13:19,10/22/2021 13:19,What is the best way to train a text-based regressor model?,,0,0,,,,CC BY-SA 4.0 32140,1,,,10/21/2021 22:39,,1,69,"

From my knowledge, the most used optimizer in practice is Adam, which in essence is just mini-batch gradient descent with momentum to combat getting stuck in saddle points and with some damping to avoid wiggling back and forth if the conditioning of the search space is bad at any point.

Not to say that this is actually easy in absolute terms, but after a few days, I think I got most of it. But when I look into the field of mathematical (non-linear) optimization, I'm totally overwhelmed.


What are the possible reasons that optimization algorithms for neural networks aren't more intricate?

  • There are just more important things to improve?
  • Just not possible?
  • Is Adam and others already so good that researchers just don't care?
",35727,,40434,,10/22/2021 12:52,10/22/2021 12:52,Why are optimization algorithms for deep learning so simple?,,0,2,,,,CC BY-SA 4.0 32141,1,32142,,10/22/2021 2:05,,0,57,"

For non-English languages (in my case Portuguese), what is the best approach? Should I use the not-so-complete tools in my language, or should I translate the text to English and afterwards use the tools in English? Lemmatization, for example, is not so good in non-English languages.

",50512,,2444,,10/25/2021 10:39,10/25/2021 10:39,"Given the immaturity of NLP tools for non-English languages, should I first translate the non-English language to English before text pre-processing?",,1,1,,,,CC BY-SA 4.0 32142,2,,32141,10/22/2021 8:10,,1,,"

Check spaCy: it's a powerful NLP library that provides lots of different language models, including one for Portuguese.
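
For example, a minimal sketch of lemmatization with the Portuguese pipeline (assuming the pt_core_news_sm model has been downloaded with python -m spacy download pt_core_news_sm):

import spacy

nlp = spacy.load("pt_core_news_sm")                 # small Portuguese pipeline
doc = nlp("Os meninos estavam correndo no parque.")
for token in doc:
    print(token.text, token.lemma_, token.pos_)     # surface form, lemma, part of speech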

To answer the more generic question, translating to another language undermines the whole purpose of text pre-processing. Not only will translating generate errors, even when translating to a common language like English, but most importantly, you're forgetting that every language has its own specific linguistic characteristics, like different grammatical genders, tenses, grammar rules for plurals and adjectives, adverbs and so on. By translating you'll throw all that information in the bin.

",34098,,2193,,10/22/2021 8:36,10/22/2021 8:36,,,,0,,,,CC BY-SA 4.0 32143,2,,32071,10/22/2021 8:43,,3,,"

It should be noted that the selection of $\alpha$ is a classic problem in stochastic approximation, rather than a specific problem in RL. Once you know this it will be clear.

What is stochastic approximation? As its name suggests, it is a method that uses data to approximate (typically) expectations. For example, suppose \begin{align*} w=\mathbb{E}[X] \end{align*} where $X$ is a random variable. If we have some iid samples of $X$ as $\{x_i\}$, then we can estimate $w$ by \begin{align*} w_{k+1}=w_k-a _k(w_k-x_k). \end{align*}

Here, you note that $a_k$ is time-varying instead of a constant. In fact, in order to ensure the (almost sure) convergence of $w_k$, a necessary condition is \begin{align*} \sum_k a_k&=\infty\\ \sum_k a_k^2&<\infty \end{align*} Why is such a condition required? A rigorous proof can be found in the Robbins-Monro algorithm. Here, I merely give some intuition about why it is necessary.

  • First, $\sum_{k=1}^\infty a_k=\infty$ says that the $a_k$ should be sufficiently large in order to counter arbitrary initial conditions. In particular, the mathematical reasoning is as follows: hypothetically, if $\sum_{k=1}^\infty a_k<\infty$, and $\delta_k\doteq w_k-x_k$ is bounded, then we have $\sum_{k=1}^\infty a_k \delta_k<\infty$. As a result, the difference between $w_\infty$ and $w_1$ is bounded. If the initial condition is very far from the solution, then the algorithm is not able to converge to the true solution.
  • Secondly, the condition $\sum_k a_k^2<\infty$ says that $a_k$ should converge to zero. In particular, mathematically, the difference between $w_k$ and $w_{k+1}$ is $a_k\delta_k$. If $a_k$ does not go to zero, then $w_k$ and $w_{k+1}$ will still fluctuate significantly even when $k$ is very large.

Of course, if $a_k=\alpha$ is constant, then $\sum_k a_k^2<\infty$ is not satisfied.
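
To see this numerically, here is a small sketch estimating $w=\mathbb{E}[X]$ with a decaying step size $a_k = 1/k$ (which satisfies both conditions and reproduces the sample mean) versus a constant step size:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=1.0, size=10_000)   # iid samples of X, with E[X] = 3

w_decay, w_const = 0.0, 0.0
for k, xk in enumerate(x, start=1):
    w_decay -= (1.0 / k) * (w_decay - xk)         # a_k = 1/k
    w_const -= 0.1 * (w_const - xk)               # constant alpha = 0.1

print(w_decay)   # converges to the sample mean, close to 3.0
print(w_const)   # keeps fluctuating around 3.0 with variance proportional to alpha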

Now, let's come back to the TD algorithm. In fact, it can be viewed as a stochastic approximation algorithm. In particular, recall that the Bellman equation is \begin{align} v_\pi(s)=\mathbb{E}[R+\gamma v_\pi(S')|s],\quad \forall s \end{align} If we write $X\doteq R+\gamma v_\pi(S')$, then it becomes $v_\pi(s)=\mathbb{E}[X|s]$, which is very similar to the case of $w=\mathbb{E}[X]$. Hence, the stochastic approximation algorithm that solves such an equation using data is \begin{align} v_{t+1}(s_t) &=v_t(s_t)-a_t(s_t)[v_t(s_t)-(r_t+\gamma v_t(s_{t+1}))],\qquad t=1,2,\dots \end{align}

Now, you may see how TD is obtained and why the step size should be $a_t(s_t)$ instead of a constant.

",,user50121,,user50121,10/23/2021 6:46,10/23/2021 6:46,,,,0,,,,CC BY-SA 4.0 32144,1,,,10/22/2021 10:08,,2,140,"

Very much a general question here, from a somewhat uneducated perspective.

I'm currently part way through an MSc in AI, and at the minute I am taking a module on Knowledge Engineering and Computational Creativity. The professor taking the class obviously does research in this area and is saying that ontologies are becoming very important in the world of AI, or it may be more accurate to say he is suggesting they are becoming more important.

I intend to look into the work he does, and ask him a few questions, but, generally, I was wondering where this type of research sits in the world of AI. Is it something being worked on a lot? Is it becoming bigger?

I am interested because I do find the topic interesting, and I will have a research project coming up soon, and while I do want to work on an interesting topic, I also want to work on a relevant topic, so any information would be great.

",50516,,2444,,10/23/2021 14:29,10/23/2021 14:29,What is the place of ontologies in artificial intelligence?,,1,6,,,,CC BY-SA 4.0 32145,1,32155,,10/22/2021 10:29,,1,489,"

I am currently taking an Artificial Intelligence course and learning about DFS and BFS.

If we take the following example:

From my understanding, the BFS algorithm will explore the first level containing $B$ and $C$, then the second level containing $D,E,F$ and $G$, etc., till it reaches the last level.

I am lost concerning which node between $B$ and $C$ (for example) will the BFS expand first?

Originally, I thought it was different every time and that, by convention, we choose to illustrate it as done from left to right (so exploring $B$ then $C$), but my professor said that our choice between $B$ and $C$ depends on each case and that we choose the "shallowest node first".

In made-up examples, there isn't a distance factor between $A$ and $B$, or between $A$ and $C$, so how could one choose then?

My question is the same concerning DFS where I was told to choose the "deepest node first". I am aware that there are pre-order versions and others, but the book "Artificial Intelligence - A Modern Approach, by Stuart Russel" didn't get into them.

I tried checking the CLRS algorithms book for more help, but there the expansion is done based on the order in the adjacency list, which didn't really help.

",50521,,2444,,10/22/2021 16:01,10/22/2021 16:34,"How do the BFS and DFS search algorithms choose between nodes with the ""same priority""?",,2,0,,,,CC BY-SA 4.0 32147,1,,,10/22/2021 10:32,,1,49,"

I've recently become very interested in the potential AI holds for the future of society. I believe it has the potential to truly alter the way we live our lives in the not-too-distant future. I've read around three dozen books by leading scholars and I've written my undergraduate thesis on the implications of AI for modern warfare and the international balance of power. With that in mind, I've started to think about a career related to AI and machine learning.

However, I have a bit of a problem considering my background as a humanities person. I'm about to finish my undergraduate studies in International Relations and I plan to get a master in a similar field. As someone with little background in Comp Sci/ Math/ Physics/ etc, how could someone go about working in a field like AI? I struggle to see what utility non-STEM people could offer to firms.

I hope I'm asking this in the right place considering this forum is mostly for technical questions.

",50309,,,,,10/22/2021 10:32,Can people without a background in STEM go into the field of AI?,,0,2,,,,CC BY-SA 4.0 32148,1,,,10/22/2021 10:34,,1,510,"

The problem I'm trying to solve is as follows.

I have two separate domains, where inputs do not have the same dimensions. However, I want to create a common feature space between both domains using paired inputs (similar inputs from both domains).

My solution is to encode pairs of inputs into a shared latent space using two VAE encoders (one for each domain). To ensure that the latent space is shared, I want to define a similarity metric between the output of both probabilistic encoders.

Let's define the first encoder as $q_\phi$ and the second as $p_\theta$. As for now, I have two main candidates for this role:

  1. KL-divergence : $\text{KL}(p || q)$ (or $\text{KL}(q||p)$), but, since it is not symmetrical, I don't really know which direction is the best.

  2. JS-divergence: symmetrical and normalized, which is nice for a distance metric, but, since it is not as common as KL, I'm not sure.

Other candidates include adversarial loss (a discriminator is tasked to guess from which VAE the latent code is, the goal of both VAE being to maximally confuse it) or mutual information (seen more and more in recent works, but I still don't fully understand it).

My question is: according to you, which loss could work best for my use case? KL or JS? Or other candidates I didn't think of?

-- More context --

My ultimate goal is to use transfer learning between morphologically distinct robots, e.g. a quadrupedal robot and a bipedal robot. The first step in my current approach is to record trajectories on both robots executing the same task (walking, for example). From said trajectories, I create pairs of similar states (to simplify the problem, I suppose that both robots achieve the same task at the same speed, so temporally aligned states of both robots are paired). Then my goal is to encode these paired states (which don't have the same dimension, due to the difference in the number of joints) into two latent spaces (one for each VAE) such that similar pairs of inputs are close in the latent spaces. If I were working with simple autoencoders, I would simply minimize the distance in the latent space between pairs of inputs, such that similar states on both robots map to the same point in the latent space. But I need the generative capabilities of the VAE, so instead I would like to make the distributions output by the VAEs as close as possible. Does that make sense?

",48484,,48484,,10/26/2021 9:35,10/26/2021 9:35,What is the most suitable measure of the distance between two VAE's latent spaces?,,0,5,,,,CC BY-SA 4.0 32149,2,,32010,10/22/2021 13:50,,0,,"

Value iteration (VI) is a truncated version of Policy iteration (PI).

PI has two steps:

  • the first step is policy evaluation. That is to calculate the state values of a given policy. This step essentially solves the Bellman equation $$v_\pi=r_\pi+\gamma P_\pi v_\pi$$ which is the matrix vector form of the Bellman equation. I assume that the basics are already known.
  • The second is policy improvement. That is to select the action corresponding to the greatest action value at each state (i.e., greedy policy): $$\pi=\arg\max_\pi(r_\pi+\gamma P_\pi v_\pi)$$

The key point is: the policy evaluation step requires an infinite number of iterations to solve the Bellman equation (i.e., to get the exact state values). In particular, we use the following iterative algorithm to solve the Bellman equation: $$v_\pi^{(k+1)}=r_\pi+\gamma P_\pi v_\pi^{(k)}, \quad k=1,2,\dots$$ We can prove that $v_\pi^{(k)}\rightarrow v_\pi$ as $k\rightarrow\infty$. There are three cases to execute this iterative algorithm:

  • Case 1: run an infinite number of iterations so that $ v_\pi^{(\infty)}=v_\pi$. This is impossible in practice. Of course, in practice, we may run sufficiently many iterations, until certain metrics (such as the difference between two consecutive values) are small enough.
  • Case 2: run just one single step so that $ v_\pi^{(2)}$ is used for policy improvement step.
  • Case 3: run a few times (e.g., N times) so that $ v_\pi^{(N+1)}$ is used for the policy improvement step.

Case 1 is the policy iteration algorithm; case 2 is the value iteration algorithm; case 3 is a more general truncated version. Such a truncated version does not require an infinite number of iterations and can converge faster than case 2, so it is often used in practice.
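
As a minimal sketch (assuming the model, i.e. $r_\pi$ and $P_\pi$, is given as NumPy arrays), the three cases differ only in how many inner iterations are run:

import numpy as np

def truncated_policy_evaluation(r_pi, P_pi, gamma, n_iters, v_init=None):
    # Runs v <- r_pi + gamma * P_pi @ v for n_iters steps.
    # n_iters = 1 corresponds to case 2 (value-iteration style),
    # a small n_iters to case 3, and a large n_iters approximates case 1.
    v = np.zeros_like(r_pi) if v_init is None else v_init
    for _ in range(n_iters):
        v = r_pi + gamma * P_pi @ v
    return v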

",,user50121,,user50121,10/23/2021 16:12,10/23/2021 16:12,,,,16,,,,CC BY-SA 4.0 32150,1,,,10/22/2021 14:11,,1,81,"

As far as I know,

  1. FaceNet requires a square image as an input.

  2. MTCNN can detect and crop the original image as a square, but distortion occurs.

Is it okay to feed the converted (now square) distorted image into FaceNet? Does it affect the accuracy of the similarity calculation (embedding)?

For similarity (classification of known faces), I am going to put some custom layers upon FaceNet.

(If it's okay, maybe because every other image would be distorted no matter what? So, it would not compare normal image vs distorted image, but distorted image vs distorted image, which would be fair?)

Original issue: https://github.com/timesler/facenet-pytorch/issues/181.

",50531,,2444,,10/25/2021 14:30,4/20/2022 13:31,Does the converted (now square) distorted image of a face affect the accuracy of the calculation of the similarity in FaceNet?,,1,3,,,,CC BY-SA 4.0 32151,2,,20532,10/22/2021 14:14,,0,,"

More comments in addition to the accepted answer.

The OP says the two algorithms have different value functions. This is actually not precise and may be the source of confusion. In particular, only in the policy iteration algorithm is the value of $v$ the state value function, which is the solution to the Bellman equation. However, the value of $v$ in value iteration is not a state value function! That is simply because it is not the solution to any Bellman equation in general. Then, what is the $v$ in value iteration? See another answer of mine.

Why can value iteration, which does not calculate the state values, find the optimal policy? It is easier to see that if you think of it as a simple numerical iterative algorithm for solving the Bellman optimality equation. The algorithm follows from the contraction (or fixed-point) theorem when we analyze the Bellman optimality equation.

",,user50121,,user50121,10/23/2021 6:43,10/23/2021 6:43,,,,0,,,,CC BY-SA 4.0 32152,1,32160,,10/22/2021 14:31,,1,86,"

I was reading the following paper here about some of the groundwork in graph deep learning. On page 3, in the bit entitled Polynomial parameterization for localized filters, it states that non-parametric filters (i.e. a filter whose parameters are all free) are not localized in space.

Question: Why is this the case? It is referring to a filter $g_{\theta}$ such that: $$ y = g_{\theta}(L) x = g_{\theta} (U \Lambda U^T) x = U g_{\theta} (\Lambda) U^T x $$ where $L$ is the graph laplacian matrix, and $U \Lambda U^T$ are the eigenvector decomposition matrices.

Attempted explanation: Is it because the filter $g_{\theta}$ is defined in the spectral domain and thus its spatial domain (i.e. its inverse graph Fourier transform) may be defined over the whole graph (and thus not localized)?

",49689,,2444,,10/23/2021 22:13,10/23/2021 22:13,Graph Convolutional Networks: why are non-parametric filters not localized in space?,,1,0,,,,CC BY-SA 4.0 32153,2,,32105,10/22/2021 16:07,,2,,"

The prior $p(z)$ is assumed as part of the problem formulation. A typical case is where $z$ is a vector of iid normal random variables. The ELBO involves a regularization term which encourages $q(z \, | \, x)$ to have a similar distribution to $p(z)$ (the way you've written it, that's the KL term). Thus $q(z \, | \, x)$ will end up having a similar shape to $p(z)$. For example, again assuming $z$ is a vector of iid normals, if you plot samples of $z$ drawn from $q(z \, | \, x)$ you will find it has a roughly spherical shape. If you scroll down to the [16] code block and look at the figure you'll see what I mean. The figure is plotting samples of $z$, colored according to what $x$ is (MNIST example). This is just some random figure I found, and I don't endorse this code, but the image is what you'd expect to see.

The way we end up with a distribution $p(x, z)$ is by using the prior. We sample $z$ according to $p(z)$; we've trained the decoder $p(x \, | \, z)$, and by definition $p(x, z) = p(x \, | \, z) p(z)$.
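
As a small sketch of that last point (the decoder below is a random stand-in for a trained one):

import torch
import torch.nn as nn

latent_dim, data_dim = 2, 784
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                        nn.Linear(128, data_dim))     # stands in for a trained p(x | z)

z = torch.randn(64, latent_dim)                       # z ~ p(z) = N(0, I), the assumed prior
x = torch.bernoulli(torch.sigmoid(decoder(z)))        # sample x ~ p(x | z), e.g. Bernoulli pixels
# Sampling (z, x) this way is exactly sampling from p(x, z) = p(x | z) p(z).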

",47080,,,,,10/22/2021 16:07,,,,2,,,,CC BY-SA 4.0 32154,2,,32145,10/22/2021 16:25,,1,,"

Either one. The BFS algorithm and DFS algorithm do not specify.

Typically, it's programmed as left-to-right, just because that's the way programmers think about trees. It doesn't have to be.

Note that DFS isn't "deepest node first" either. Imagine that nodes H and I in your tree did not exist; D, J, K, E, B would be a perfectly valid DFS traversal of the subtree rooted at B, even though J and K are deeper than D. So would B, D, E, J, K, even though E is the parent of J and K! DFS says that you look at a node's children before you look at other nodes on the same layer, but it doesn't say you have to look at the node's children before the node itself. In fact, there are three well-known variants (pre-order, post-order and in-order), depending on whether you visit each node before, after or in the middle of its children.

Now, if this is for an AI then you probably do care about the order. If this tree represents a game tree, then you probably want to estimate which node is likely to have the best outcome for the AI player, and check that one first. This can be called "best-first search".

",28406,,,,,10/22/2021 16:25,,,,1,,,,CC BY-SA 4.0 32155,2,,32145,10/22/2021 16:34,,1,,"

BFS and DFS are usually applied to unweighted graphs (or, equivalently, to graphs where the edges have all the same weights). In this case, BFS is optimal, i.e., assuming a finite branching factor, it eventually finds the optimal solution, but it doesn't mean that it takes a short time, in fact, it might well be the opposite, depending on the search space. DFS would not be optimal because it may not terminate if there is an infinite path.

In the case of unweighted graphs, BFS really proceeds level-by-level. In other words, it starts at the root (level $l=0$), then expands (i.e. adds to the FIFO queue) all children of the root (level $l=1$), then it expands all children of the children of the root (level $l=2$), and so on. So, for example, all children at level $l=2$ have a distance of $2$ to the root (if we assume that edges have a weight of $1$).

The order in which you choose nodes at a certain level to add to your FIFO queue is, as far as I know, typically from left-to-right, but you could, in principle, choose them in different ways (e.g. from right-to-left). So, this is a convention. Of course, this choice may affect when (and if, in the case of DFS) you find the solution.
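
As a small sketch of this convention, with the graph given as an adjacency list, the children are simply enqueued in the order they appear in each list (here, left to right):

from collections import deque

def bfs(adjacency, root):
    visited, order = {root}, []
    queue = deque([root])
    while queue:
        node = queue.popleft()            # FIFO: shallowest nodes come out first
        order.append(node)
        for child in adjacency[node]:     # "left-to-right" is just the list order
            if child not in visited:
                visited.add(child)
                queue.append(child)
    return order

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F", "G"],
        "D": [], "E": [], "F": [], "G": []}
print(bfs(tree, "A"))                     # ['A', 'B', 'C', 'D', 'E', 'F', 'G']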

Now, if we applied BFS to a weighted graph, then what happens here? It depends. If you ignore the weights, then it is the usual BFS. If you take into account the weights, it depends on how you take them into account. For example, if you choose to expand (i.e. add to the queue) nodes with the shortest path so far to the root, then this becomes a uniform-cost search (this is explained in the AIMA book, e.g. 3rd edition, section 3.4.2, p. 83).

So, maybe your professor has in mind the uniform-cost search when he's telling you to choose the "shallowest node first", but you need to talk to him, tell him about uniform-cost search, and ask him if that's what he means.

It also seems to me that your original idea of how DFS works is correct. It goes deep, then backtracks (without taking into account weights). If you take into account the weights, I don't know what DFS could turn into.

",2444,,,,,10/22/2021 16:34,,,,0,,,,CC BY-SA 4.0 32156,2,,25950,10/22/2021 16:43,,0,,"

It's a very complex task. The price of Bitcoin can be affected by news (political decisions, network upgrades, other cryptocurrencies), whales buying and selling crypto, and the network hashrate. You can try to search for predictions for, let's say, 5 minutes ahead, based on the data inside the order book on exchanges. The market is very volatile, so you can make profits over shorter periods of time.

https://github.com/Sapphirine/BITCOIN-PRICE-PREDICTION-USING-SENTIMENT-ANALYSIS

https://github.com/miroblog/limit_orderbook_prediction

",50535,,,,,10/22/2021 16:43,,,,0,,,,CC BY-SA 4.0 32157,1,32167,,10/22/2021 17:34,,1,553,"

In Semi-Supervised Classification with Graph Convolutional Networks, the authors say that GCN is an approach for semi-supervised learning (SSL).

But a GCN makes predictions using only the graph Laplacian. The only place where I find the labels is in its loss function.

$$\mathcal{L}=-\sum_{l \in \mathcal{Y}_{L}} \sum_{f=1}^{F} Y_{l f} \ln Z_{l f}$$

How does this make the GCN an SSL approach?

",49338,,2444,,10/23/2021 16:25,10/25/2021 14:25,How are GCN doing semi-supervised learning?,,1,0,,,,CC BY-SA 4.0 32158,1,32159,,10/22/2021 17:46,,2,274,"

Currently, I am studying deepfake detection using deep learning methods. Convolutional neural networks, recurrent neural networks, long short-term memory networks, and vision transformers are well-known deep-learning-based methods used in deepfake detection, as I found in my study.

I was able to find that CNNs, RNNs and LSTMs are multilayered neural networks, but I found very little about the neural network layers in a Vision Transformer. (For example, a typical CNN has an input layer, pooling layers, fully connected layers, and finally an output layer; an RNN has an input layer, multiple hidden layers and an output layer.)

So, what are the main neural network layers in a Vision Transformer?

",50537,,2444,,10/22/2021 21:36,10/22/2021 21:36,What are the major layers in a Vision Transformer?,,1,0,,,,CC BY-SA 4.0 32159,2,,32158,10/22/2021 19:48,,4,,"

The Transformer family of architectures is a separate family of NN architectures, different from the CNNs and RNNs.

The main part of the Vision Transformer are the self-attention layers.

The architecture proposed in the paper An Image is Worth 16x16 Words treats each 16x16 patch as a word in a sentence. There is a convolutional layer (with kernel_size=16 and stride=16) that transforms the input patches into tokens, as in an NLP problem, and then these tokens are propagated through multiple layers.

Each Transformer encoder is a standard Transformer block, consisting of:

  • Multihead self-attention layer that transforms tokens into keys, queries and values
  • Feedforward layer acting on each token independently (pointwise nonlinearity)
  • LayerNormalization modules between them.

Each image is treated as a sentence or chunk of text. The main idea and advantage of self-attention layers is the ability to collect the global context of the given data sample, whereas CNNs are restricted to a neighborhood of the given pixel (and can only have a global understanding of the data after a sufficient number of convolutional layers).
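
A minimal PyTorch sketch of this tokenization step (illustrative sizes, not a full ViT):

import torch
import torch.nn as nn

patch, dim = 16, 768
to_tokens = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)     # patch embedding
cls_token = nn.Parameter(torch.zeros(1, 1, dim))                   # learnable class token

x = torch.randn(2, 3, 224, 224)                                    # dummy batch of images
tokens = to_tokens(x).flatten(2).transpose(1, 2)                   # (2, 196, 768): 14x14 patches
tokens = torch.cat([cls_token.expand(2, -1, -1), tokens], dim=1)   # (2, 197, 768)
# `tokens` would then go through the stack of Transformer encoder blocks.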

If you are inexperienced with transformers, I recommend reading this blog as an easily accessible and comprehensive introduction to Transformers.

",38846,,,,,10/22/2021 19:48,,,,0,,,,CC BY-SA 4.0 32160,2,,32152,10/22/2021 20:07,,1,,"

Your explanation is correct.

Probably, the term non-parametric is not very appropriate. But its meaning here, as far as I understand it, is a parametrization where all parameters are independent. And, in order to work with independent parameters, one makes the transition to the basis in which the graph Laplacian is diagonal, i.e. to $\Lambda$.

Computation of the eigendecomposition of the graph Laplacian requires knowledge of the graph as a whole.

The drawback of spectral methods is that they work only on a given graph and do not generalize to other graphs. Moreover, a small perturbation in the graph structure can lead to big changes in the spectrum, as the following example shows (it is a mesh, actually, which is a particular case of a graph):

The property $$ g_\theta (U \Lambda U^T) x = U g_\theta (\Lambda) U^T x $$ doesn't require a lot from $g_\theta$; an expansion in a Taylor series is sufficient, since ($U U^T = U^T U = I$): $$ (U \Lambda U^T)^k = U \Lambda^2 U^T (U \Lambda U^T)^{k-2} = \dots = U \Lambda^k U^T $$
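
A quick numerical check of this identity for a polynomial filter (a small NumPy sketch):

import numpy as np

n = 6
M = np.random.rand(n, n); M = (M + M.T) / 2           # a symmetric "Laplacian-like" matrix
lam, U = np.linalg.eigh(M)                            # M = U diag(lam) U^T

g = lambda A: 0.5 * A @ A + 2.0 * A + np.eye(n)       # polynomial filter g(A) = 0.5 A^2 + 2 A + I
print(np.allclose(g(M), U @ g(np.diag(lam)) @ U.T))   # True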

",38846,,,,,10/22/2021 20:07,,,,3,,,,CC BY-SA 4.0 32161,1,32222,,10/23/2021 2:12,,1,489,"

I found the following paragraph from An Introduction to Variational Autoencoders sounds relevant, but I am not fully understanding it.

A VAE learns stochastic mappings between an observed $\mathbf{x}$-space, whose empirical distribution $q_{\mathcal{D}}(\mathbf{x})$ is typically complicated, and a latent $\mathbf{z}$-space, whose distribution can be relatively simple (such as spherical, as in this figure). The generative model learns a joint distribution $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$ that is often (but not always) factorized as $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})=p_{\boldsymbol{\theta}}(\mathbf{z}) p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$, with a prior distribution over latent space $p_{\boldsymbol{\theta}}(\mathbf{z})$, and a stochastic decoder $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$. The stochastic encoder $q_{\phi}(\mathbf{z} \mid \mathbf{x})$, also called inference model, approximates the true but intractable posterior $p_{\theta}(\mathbf{z} \mid \mathbf{x})$ of the generative model.

How is it that the generative model learns a joint distribution $p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})$ in the case of the VAE? I know that learning the weights of the decoder is learning $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$

",46842,,2444,,10/30/2021 13:37,10/30/2021 13:37,How does the VAE learn a joint distribution?,,1,0,,,,CC BY-SA 4.0 32163,2,,32144,10/23/2021 10:47,,2,,"

Computational creativity is not something I know anything about. However, I work in knowledge engineering. This falls into the areas of knowledge representation and reasoning known as semantic web technologies in general. More recently Google has popularized the name "knowledge graphs" and now many people tend to talk of knowledge graphs rather than the semantic web, even though, strictly speaking, knowledge graphs are a small subset of semantic web technologies.

This is a massive field of research, which is widely used in bioinformatics, healthcare, property management, and many other fields to enable semantic search. I myself work in bioinformatics where we use 265 ontologies spanning close to 7 million concepts that are used to enable semantic searches across around 300 petabytes of data. In fact, it is knowledge graphs that are at least in part used to enable Google searches and information provided Google info boxes. Hence, many people are already using the results of knowledge engineering without knowing it.

So what is an ontology? An ontology defines concepts and relations between concepts. This has been done in computer science for ages, so what makes ontologies and the semantic web so special?

  • Each concept and relation is assigned globally unique identifiers, e.g. URI, IRI, PURL
  • Ontologies are defined in a machine-readable syntax, e.g. XML, JSON-LD
  • The semantic web defines a generic data model able to describe arbitrary data called RDF triples. In particular, this helps people to define their data before having to define a schema (as is the case with relational databases).
  • An RDF triple <s, p, o> expresses that subject (s) and object (o) is related via predicate (p).
  • SPARQL is a query language for querying RDF triples
  • Ontologies are equipped with formal semantics based on mathematical logic, which enables artificial intelligence reasoning procedures to infer implicit knowledge from explicit knowledge. E.g. RDFS and OWL.
  • JSON-LD, RDF, SPARQL, RDFS and OWL are W3C standards.

Here is a book that can give you a reasonably gentle introduction to aspects of the field and how it is used in practice: The knowledge graph cookbook.

",11450,,2444,,10/23/2021 14:28,10/23/2021 14:28,,,,2,,,,CC BY-SA 4.0 32166,1,,,10/23/2021 19:16,,1,356,"

Consider the following problem.

We have a process that generates $N$ pebbles (e.g. 2000) in one batch $b$. Every pebble has a state $s_{i}^b$ and a reward $r_i^b$. After choosing one pebble $i$ from the $N$, we start sampling again using the chosen pebble as a starting point, and we generate the next batch $b+1$. The state $s_i^b$ is a vector of real values and the reward $r_i^b$ is a real value.

The problem is to choose pebbles so that we maximize the reward $r_i^b$ in the long term, because, depending on how we choose the pebble, we can sample around a region that gives a better or worse reward $r$.

During each new batch, we make one selection of one pebble (so actions range over $i = 1, \dots, N$). We have access to the previous $m$ batches (e.g. through a replay buffer) with their rewards and states.

In short, it looks like this:

  1. Choose randomly the first pebble from which we start sampling;
  2. We start sampling from the chosen pebble in the current batch;
  3. We sample $N$ pebbles from the process; each pebble has a state $s_i^b$ and a reward $r_i^b$;
  4. We can choose one pebble from $i = 1, \dots, N$ as action $a_i^b$, based on state $s_i^b$ and reward $r_i^b$;
  5. Go to point 1 and repeat;

For example, at the moment, I choose, in a given batch $b$, the pebble with the maximum reward $r_i^b$, so

$$i = \underset{i}{\mathrm{argmax}}\, r_i^b$$ and then use $a_i^b$ for the current batch $b$.

But what I want is to choose: $$i = \underset{i}{\mathrm{argmax}}\, \underset{b}{E}[R_i^b | s_i, a_i]$$

Graphically speaking:

Assuming one Batch (N) is 30 P - pebble, P - chosen pebble

batch 1: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP

batch 2: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP

batch 3: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP

batch 4: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP

batch 5: PPPPPPPPPPPPPPPPPPPPPPPPPPPPPP

So, if I have a batch of $N$, I want to choose one element from the batch as an action, so that the expected reward is the highest in the long term, and start the sampling again from it. Only one element per batch can be chosen, and only the choice of the one pebble per batch affects the sequence in the next batch, not the samples inside the current batch.

The problem is: what reinforcement learning algorithm should be used when we choose only one item from $N$, and that one choice affects the whole sampled sequence in the next batch? For example, in batches 1 to 4 the reward can be very low, and in batch 5 the reward is super high, if we chose the pebbles wisely in the 4 previous batches.

",31324,,31324,,10/27/2021 11:20,11/22/2022 10:08,Reinforcement Learning method suitable for a large discrete action space with high sample efficiency,,1,10,,,,CC BY-SA 4.0 32167,2,,32157,10/23/2021 22:59,,3,,"

In the introduction, the authors write

We consider the problem of classifying nodes (such as documents) in a graph (such as a citation network), where labels are only available for a small subset of nodes. This problem can be framed as graph-based semi-supervised learning, where label information is smoothed over the graph via some form of explicit graph-based regularization (Zhu et al., 2003; Zhou et al., 2004; Belkin et al., 2006; Weston et al., 2012), e.g. by using a graph Laplacian regularization term in the loss function: $$ \mathcal{L}=\mathcal{L}_{0}+\lambda \mathcal{L}_{\text {reg }}, \quad \text { with } \quad \mathcal{L}_{\text {reg }}=\sum_{i, j} \color{red}{A}_{i j}\left\|f\left(X_{i}\right)-f\left(X_{j}\right)\right\|^{2}=f(X)^{\top} \color{blue}{\Delta} f(X) $$ Here, $\mathcal{L}_{0}$ denotes the supervised loss w.r.t. the labeled part of the graph, $f(\cdot)$ can be a neural network-like differentiable function, $\lambda$ is a weighing factor and $X$ is a matrix of node feature vectors $X_{i}$. $\color{blue}{\Delta}=D-\color{red}{A}$ denotes the unnormalized graph Laplacian of an undirected graph $\mathcal{G}=(\mathcal{V}, \mathcal{E})$ with $N$ nodes $v_{i} \in \mathcal{V}$, edges $\left(v_{i}, v_{j}\right) \in \mathcal{E}$, an adjacency matrix $\color{red}{A} \in \mathbb{R}^{N \times N}$ (binary or weighted) and a degree matrix $D_{i i}=\sum_{j} \color{red}{A}_{i j} .$ The formulation of Eq. 1 relies on the assumption that connected nodes in the graph are likely to share the same label. This assumption, however, might restrict modeling capacity, as graph edges need not necessarily encode node similarity, but could contain additional information.

So, if you want to solve a node classification problem, where labels are available only for a small subset of the nodes, you can solve it by framing it with a specific loss function, $\mathcal{L}$, which is the sum of $L_0$ and a regularisation term, where

  • $L_0$ is the term of the loss function that takes care of the nodes that have labels (i.e. it's computed as a function of the nodes that have labels), while

  • $\mathcal{L}_{\text {reg }}$ is supposed to take care of the nodes without (and with) labels. Why? I don't know exactly, but I suspect this is due to the fact that it uses the information contained in $\color{red}{A}$ (the adjacency matrix) or $\color{blue}{\Delta}$ (the Laplacian).

However, this approach, as they claim, can be limiting, and the explanation is above (the last sentences in bold of the quoted excerpt).

To overcome this limitation, they decide to pass the adjacency matrix $\color{red}{A}$ to the neural network $f$, or, as they claim, to condition $f$ on $\color{red}{A}$. The idea is that, by doing this, the neural network $f$ will learn the graph structure. They write

In this work, we encode the graph structure directly using a neural network model $f(X, \color{red}{A})$ and train on a supervised target $\mathcal{L}_{0}$ for all nodes with labels, thereby avoiding explicit graph-based regularization in the loss function. Conditioning $f(\cdot)$ on the adjacency matrix of the graph will allow the model to distribute gradient information from the supervised loss $\mathcal{L}_{0}$ and will enable it to learn representations of nodes both with and without labels.

So, their graph neural network is defined as follows (equation 9)

$$ Z=f(X, \color{red}{A})=\operatorname{softmax}\left(\hat{\color{red}{A}} \operatorname{ReLU}\left(\hat{\color{red}{A}} X W^{(0)}\right) W^{(1)}\right), $$ where $Z$ are the predictions (i.e. labels for the nodes $X$).

Their loss function is then defined only for $X$ that have labels

$$\mathcal{L}=-\sum_{l \in \mathcal{Y}_{L}} \sum_{f=1}^{F} Y_{l f} \ln Z_{l f},$$

where $\mathcal{Y}_{L}$ is the set of node indices that have labels. However, note that $Z_{l f}$ is still computed as a function of $\color{red}{A}$, which contains information about all nodes (labeled and unlabelled).

To conclude, their approach is a semi-supervised learning approach because they train the neural network with a training dataset, which is only partially labeled, to perform node classification. Their approach is different from previous approaches (which include a regularisation term to account for the part of the graph without labels) by defining the graph neural network as a function $f$ that also takes as an input $\color{red}{A}$, $f(X, \color{red}{A})$, which, supposedly, is what makes this approach work.

(Disclaimer: I've only quickly read a few other parts of the paper, and it had been a while since I extensively read something about graph neural networks.)

",2444,,2444,,10/25/2021 14:25,10/25/2021 14:25,,,,0,,,,CC BY-SA 4.0 32168,1,,,10/24/2021 4:42,,0,74,"

In this example, what exactly do "Cluster" and "Sigma" mean? (They chose random coordinates for the three centroids of the groups)

  • Centers: Cluster centers, returned as a JxN array, where J is the number of clusters and N is the number of data dimensions.
  • Sigma: Range of influence of cluster centers for each data dimension, returned as an N-element row vector. All cluster centers have the same set of sigma values.

Please, elaborate on the difference.

",33475,,2444,,10/24/2021 11:16,11/19/2022 13:03,"In this example of fuzzy c-means, what is the difference between ""sigma"" and ""center"" for the clusters?",,1,0,,,,CC BY-SA 4.0 32171,1,,,10/24/2021 18:18,,2,54,"

I have a base model $M$ trained on data of, say, type 1 for a task $T$. Now, I want to update $M$ by applying transfer learning so that it works on data of type 2 for the same task $T$. I am very new to the AI/ML field. One common way I found for applying transfer learning is to add a new layer at the end of the base model, or replace its last layer with a new one, and then retrain the model on the new data (type 2 here). Depending on the size of the type 2 dataset, we may decide whether to retrain the whole model or only the new layer.
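For illustration, that common approach might look like the following sketch (assuming PyTorch, with a torchvision ResNet-18 as a stand-in for the base model $M$; the number of classes is made up):

import torch.nn as nn
from torchvision.models import resnet18

base_model = resnet18(pretrained=True)   # stand-in for the base model M trained on type-1 data
num_new_classes = 5                      # depends on the task T and the type-2 data

# replace the last layer with a new one sized for the new data
base_model.fc = nn.Linear(base_model.fc.in_features, num_new_classes)

# retrain only the new layer: freeze all other parameters
for name, param in base_model.named_parameters():
    param.requires_grad = name.startswith("fc")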

However, my question is: how do we decide the following?

  1. What should be the new layer(s)?
  2. Should the objective function while retraining be the same as the one used for the base model, or can it be different? If different, then any insights on how to figure out a new objective function?

P.S. Data of type 1 and type 2 are of the same category (e.g. both are logs or both are images), but they are significantly different.

",48468,,2444,,11/3/2021 12:58,11/3/2021 12:58,How to choose the new layer and objective function for transfer learning on a neural network?,,0,4,,,,CC BY-SA 4.0 32172,1,,,10/24/2021 19:52,,2,162,"

The popular implementations of ViTs by Ross Wightman and Phil Wang add the position embedding to the class tokens as well as to the patches.

Is there any point in doing so?

The purpose of introducing positional embeddings to the Transformer is clear: in the original formulation, the Transformer is equivariant to permutations of tokens, and since the original task doesn't respect this symmetry, one needs to break it down to translational symmetry only, and this goal is achieved via the positional embedding (learned or fixed).

However, the class token is somehow distinguished from the other tokens in the image, and there is no notion of it being located in, say, the [16:32, 48:64] slice of the image.

Or is this choice simply a matter of convenience? An additional parameter has a negligible cost, and there is neither benefit nor harm in adding a positional embedding to the [CLS] (or any other special) token?

",38846,,2444,,11/3/2021 13:01,11/3/2021 13:01,Is there any point in adding the position embedding to the class token in Transformers?,,0,0,,,,CC BY-SA 4.0 32173,1,32175,,10/25/2021 5:24,,1,183,"

In Section 2.1 of the research paper titled Semi-Supervised Classification with Graph Convolutional Networks by Thomas N. Kipf et al.,

Spectral convolution on graphs is defined as

The multiplication of a signal $x \in \mathbb{R}^N$ with a filter $g_\theta = \operatorname{diag}(\theta)$ parameterized by $\theta \in \mathbb{R}^N$ in the Fourier domain,

i.e.:

$$g_\theta \star x = U g_\theta U^T x$$

Actually, I don't understand notations here.

  1. What is a filter? Is it like a filter from a CNN? Then why does it have (or should it have) a diagonal form? What is $\theta$ here?

  2. What is a Fourier domain?

I searched on Google, but I could not find a definition of the Fourier domain.

",50119,,2444,,10/25/2021 14:10,10/25/2021 16:22,What is a filter in the context of graph convolutional networks?,,1,1,,,,CC BY-SA 4.0 32175,2,,32173,10/25/2021 10:35,,6,,"

Short answer

Check out the paper of Shuman et al. [1], it provides some background on Graph Signal Processing, including answers to your questions in sections II.C and III.A

Long Answer

Question 1

Yes, the filter $g_{\theta}$ is analogous to CNN's filter. You have a diagonal matrix with $\theta_{i}$ in its diagonal mainly for matrix-multiplication purposes (otherwise the definition of convolution would not make sense for graph-structured data).

Question 2

A good introduction to signal processing on graphs is the paper by Shuman et al. [1]. I will divide the answer into two topics.

What is a Fourier Domain

I think the authors in [2] used "Fourier Domain" as another name for "Frequency Domain". As they apply the Fourier transform on graphs, the notion of "frequency" gets lost, and you are left with the operational part of transforming the signal (see next section).

The fact that the authors in [2] say the filter $g_{\theta}$ is parametrized in the Fourier domain simply says that $g_{\theta}$ acts on the Fourier transform of $\mathbf{x}$, and not on $\mathbf{x}$ per se.

Fourier Transform on Graph Signals

Following [1, Sect. II.C], recall that the Fourier transform of a continuous signal $x$ is, \begin{align*} X(\xi) = \mathcal{F}(\mathbf{x}) = \int_{\mathbb{R}}x(t)\text{exp}(-2\pi i \xi t)dt, \end{align*}

note that this is an inner product on the vector space of continuous functions, that is, \begin{align*} \langle x, u \rangle = \int_{\mathbb{R}}x(t)u(t)dt \end{align*}

moreover, if we look at the function $u(t)=\text{exp}(-2\pi i \xi t)$, these are the eigenfunctions of the Laplace operator, \begin{align*} -\Delta(u(t)) = -\dfrac{\partial^{2} u}{\partial t^{2}} = (2\pi\xi)^{2}u(t) \end{align*}

Now, the Fourier transform on graphs is defined based on two analogies. The first of them is the graph Laplacian $L$, which substitutes the Laplacian operator on functions. In [2], the authors further used the normalized Laplacian. The second analogy is the inner product: the signal $x(t)$ is replaced by the vector $\mathbf{x} = [x_{i}]_{i=1}^{n}$, where $i$ is a given graph vertex,

\begin{align*} \hat{x}(\lambda_{l}) = \mathcal{F}(\mathbf{x})(\lambda_{l}) = \langle \mathbf{x},\mathbf{u}_{l} \rangle = \sum_{i=1}^{n}x_{i}u_{l}(i), \end{align*}

where $\mathbf{u}_{l}$ is the $l$-th eigenvector of $L$ and $u_{l}(i)$ is its $i$-th component. Writing $U$ as the matrix of eigenvectors of $L$, \begin{align*} \mathcal{F}(\mathbf{x}) &= \mathbf{U}^{T}\mathbf{x} \end{align*}

Finally, following [1, Sect. III.A], let $g_{\theta}$ be the filter parametrized by $\theta$, and $y$ be the filtered signal (whose Fourier transform is $Y$). In the frequency/Fourier domain, convolutions are products, hence, \begin{align*} Y(\xi) = g_{\theta}(\xi)X(\xi). \end{align*}

Conversely, for signals on graphs, \begin{align*} \mathbf{U}^{T}\mathbf{y} &= g_{\theta}\mathbf{U}^{T}\mathbf{x},\\ \mathbf{UU}^{T}\mathbf{y} &= \mathbf{U}g_{\theta}\mathbf{U}^{T}\mathbf{x},\\ \mathbf{y} &= \mathbf{U}g_{\theta}\mathbf{U}^{T}\mathbf{x}, \end{align*}

where from the second to the third line we used the fact that $\mathbf{U}$ is a unitary (orthogonal) matrix, so $\mathbf{U}\mathbf{U}^{T} = \mathbf{I}$.
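As a small numerical illustration of the filtering equations above, here is a minimal sketch (assuming NumPy; the graph, signal, and filter values are made up, and this is not code from either paper):

import numpy as np

# a small undirected graph (a path on 4 nodes) and a signal x on its vertices
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                                  # (unnormalized) graph Laplacian

eigvals, U = np.linalg.eigh(L)             # columns of U are the eigenvectors u_l
x = np.array([1.0, 2.0, 0.5, -1.0])

x_hat = U.T @ x                            # graph Fourier transform: F(x) = U^T x
theta = np.array([1.0, 0.5, 0.2, 0.0])     # filter parameters, one per graph frequency
y = U @ (np.diag(theta) @ x_hat)           # filtered signal: y = U g_theta U^T x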

References

[1] Shuman, D. I., Narang, S. K., Frossard, P., Ortega, A., & Vandergheynst, P. (2013). The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE signal processing magazine, 30(3), 83-98.

[2] Kipf, T. N., & Welling, M. (2016). Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907.

",48703,,48703,,10/25/2021 16:22,10/25/2021 16:22,,,,3,,,,CC BY-SA 4.0 32176,2,,32168,10/25/2021 10:47,,0,,"

In the example you link to, the sigma parameter has got nothing to do with the clustering; it is only used to generate sample data for illustration. It defines the spread of cluster elements around the (pre-defined) centroid of each cluster.

This is done for demonstration only: you generate clusters which you know exist, and then check that the clustering algorithm can detect those clusters correctly. In normal centroid-based clustering the centroid does not have a specified range -- each data point is simply assigned to its nearest centroid, however far away it is.
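To make the role of sigma in the data-generation step explicit, here is a minimal sketch (assuming NumPy; the centers, sigma, and sample counts are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 8.0]])   # pre-defined centroids
sigma = 0.7                                                 # spread of the points around each centroid

# generate 100 sample points around each centroid; sigma only controls this spread
data = np.vstack([c + sigma * rng.standard_normal((100, 2)) for c in centers])
# the clustering algorithm is then run on `data` and should recover the three clusters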

",2193,,,,,10/25/2021 10:47,,,,0,,,,CC BY-SA 4.0 32178,1,,,10/25/2021 11:33,,0,25,"

Convolutional neural networks are capable of handling inputs of varying sizes. It is one of the key benefits of convolutional neural networks. But I am unsure about the cases when we should not utilize this advantage of the convolutional neural network.

Although the following example has been provided in the chapter named Convolutional Networks of the textbook titled Deep Learning by Ian Goodfellow et al.

Convolution does not make sense if the input has variable size because it can optionally include different kinds of observations. For example, if we are processing college applications, and our features consist of both grades and standardized test scores, but not every applicant took the standardized test, then it does not make sense to convolve the same weights over features corresponding to the grades as well as the features corresponding to the test scores.

The above example is not clear to me, since I am used to images in the context of convolutional neural networks.

Can anyone provide a possible example of images where we cannot utilize the benefit of passing variable-size images to the convolutional neural network?

",18758,,18758,,10/25/2021 11:46,11/20/2022 15:03,Simple Image-based example for not utilising the variable-sized input handling capability of a Convolutional neural network,,1,0,,,,CC BY-SA 4.0 32181,1,,,10/25/2021 16:07,,1,230,"

I'd like to use a residual network to improve learning in image-based reinforcement learning, specifically on Atari Games.

My main question is divided into 3 parts.

  1. Would it be wise to integrate a generic ResNet with a DQN variant?

  2. I believe ResNets take a long time to train, and therefore would it be realistic to train on an Atari simulator? What would the downsides be?

  3. Are there any fast ResNets that can be used for such purposes? Perhaps a fast ResNet that is specifically designed for online settings?

",31755,,2444,,12/2/2021 8:24,12/2/2021 8:24,How can I use a ResNet as a function approximator for pixel based reinforcement learning?,,1,0,,,,CC BY-SA 4.0 32184,1,,,10/25/2021 20:05,,5,595,"

In PyTorch, transformer (BERT) models have an intermediate dense layer in between the attention and output layers, whereas the BERT and Transformer papers describe the attention output as being connected directly to the fully connected output layer of the encoder, just after adding the residual connection.

Why is there an intermediate layer within an encoder block?

For example,

encoder.layer.11.attention.self.query.weight
encoder.layer.11.attention.self.query.bias
encoder.layer.11.attention.self.key.weight
encoder.layer.11.attention.self.key.bias
encoder.layer.11.attention.self.value.weight
encoder.layer.11.attention.self.value.bias
encoder.layer.11.attention.output.dense.weight
encoder.layer.11.attention.output.dense.bias
encoder.layer.11.attention.output.LayerNorm.weight
encoder.layer.11.attention.output.LayerNorm.bias
encoder.layer.11.intermediate.dense.weight
encoder.layer.11.intermediate.dense.bias

encoder.layer.11.output.dense.weight
encoder.layer.11.output.dense.bias
encoder.layer.11.output.LayerNorm.weight
encoder.layer.11.output.LayerNorm.bias

I am confused by this third (intermediate) dense layer in between the attention-output and encoder-output dense layers.

",50586,,50586,,10/26/2021 12:26,10/26/2021 12:26,What is the Intermediate (dense) layer in between attention-output and encoder-output dense layers within transformer block in PyTorch implementation?,,1,4,,,,CC BY-SA 4.0 32185,2,,32184,10/26/2021 7:25,,4,,"

The feedforward layer is an important part of the transformer architecture.

The Transformer architecture, in addition to the self-attention layer (which aggregates information from the whole sequence and transforms each token according to the attention scores computed from the queries and keys, weighting the values), has a feedforward layer, which is usually a 2-layer MLP that processes each token separately: $$ y = W_2 \sigma(W_1 x + b_1) + b_2 $$

where $W_1, W_2$ are the weights, $b_1, b_2$ are the biases, and $\sigma$ is a nonlinearity (ReLU, GELU, etc.).

This is kind of a pointwise nonlinear transformation of the sequence.

I suspect that intermediate here corresponds to $W_1, b_1$, and output to $W_2, b_2$.
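A minimal sketch of this feedforward block (assuming PyTorch, with the usual BERT-base dimensions; the layer names mirror the intermediate/output naming in the question, but this is not the actual Hugging Face / PyTorch source):

import torch.nn as nn

class FeedForward(nn.Module):
    def __init__(self, hidden=768, intermediate=3072):
        super().__init__()
        self.intermediate = nn.Linear(hidden, intermediate)  # W1, b1 ("intermediate.dense")
        self.output = nn.Linear(intermediate, hidden)        # W2, b2 ("output.dense")
        self.act = nn.GELU()

    def forward(self, x):
        # y = W2 * sigma(W1 x + b1) + b2, applied to each token independently
        return self.output(self.act(self.intermediate(x)))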

",38846,,,,,10/26/2021 7:25,,,,2,,,,CC BY-SA 4.0 32188,2,,32178,10/26/2021 9:06,,1,,"

Well, I think this statement is somewhat misleading.

The main statement is actually in the first passage of this statement:

Note that the use of convolution for processing variably sized inputs makes sense only for inputs that have variable size because they contain varying amounts of observation of the same kind of thing—different lengths of recordings over time, different widths of observations over space, and so forth.

Convolution is applicable only when the input is a 1D, 2D, or 3D array, a graph, or any other collection of data of the same kind, where each position (e.g. each pixel) holds a feature vector of the same shape.

This statement means that one cannot apply convolutions if the inputs are unstructured, like data tables that mix boolean, categorical, and continuous features. There is no notion of locality or adjacency for this kind of data.

",38846,,2444,,10/26/2021 11:28,10/26/2021 11:28,,,,0,,,,CC BY-SA 4.0 32189,1,,,10/26/2021 10:53,,0,156,"

I am new to neural networks. I am trying to solve a binary classification problem. Specifically, I want to determine whether a patient has or not a certain disease based on the dataset.

The dataset has about 700 samples of different patients. I divided the sets into training and test (test size = 0.3). My model has 1 input layer, 5 hidden layers, and 1 output layer. I used ReLU for the input and hidden layers, and I used the sigmoid for the output layer.

During the compilation of the model, I used stochastic gradient descent (SGD) as the optimizer and the mean squared logarithmic error for the loss. I used mini-batch gradient descent (batch size = 4) for the training.

I am trying to calculate the accuracy on the test set I created previously.

  • The model evaluation for train set is about: 0.07 (loss) 0.76 (accuracy).

  • The model evaluation for test set is about: 0.07 (loss) 0.74 (accuracy).

Firstly, I would like to know if this is a good value for a model. Is the accuracy too low?

Plus, I would like to know if there's a way to improve accuracy based on my model.

I am trying to work on a project, so I was wondering if these values are acceptable.

",50355,,2444,,10/26/2021 16:03,10/26/2021 16:03,"Is a test accuracy of 0.74 good enough, given a dataset of about 700 samples, and, if not, how can I improve it?",,0,8,,,,CC BY-SA 4.0 32191,1,,,10/26/2021 16:00,,1,184,"

I'm reading the article An Analysis of Temporal-Difference Learning with Function Approximation (1997), but the mathematics inside seems too complicated for me. Answers to some similar questions have pointed out that such proofs typically involve stochastic approximation.

My question is: are there any good tutorials (textbooks, lists of papers, etc.) on stochastic approximation (or similar topics) that prepare you for reading proofs like this? It would be best if it were rather self-contained, given my mathematical maturity.

I have an undergraduate-level foundation in mathematical analysis and probability theory, and have touched on some measure theory.

",50600,,2444,,10/29/2021 9:25,10/29/2021 9:25,Is there a tutorial for understanding the proof of convergence for TD learning?,,1,1,,,,CC BY-SA 4.0 32192,1,,,10/26/2021 17:00,,0,72,"

I have a three-class classification problem for a large dataset. Classes are 0, 1, and 2. There's a categorical variable in my feature vectors such that, when a sample point has this variable positive, it can only belong to either class 1 or class 2. On the other hand, if that categorical variable is negative for another sample point, then this sample can be from any of the three classes. I was wondering how I can make a neural network use that information during training. I guess I need a custom loss function; however, I could not figure out exactly how to create it.

",50601,,18758,,10/27/2021 3:38,10/27/2021 14:44,Multi-class classification but a single feature sometimes boils it down to a binary-classification,,1,0,,,,CC BY-SA 4.0 32198,1,,,10/27/2021 7:51,,2,40,"

I am playing with the transforms from Torchvision.

There are plenty of different kinds of these like:

  • Resize
  • RandomCrop
  • ColorJitter
  • Blurring
  • ...

These are some cases of Resize for a given image:

ColorJitter

RandomAffine

The main purpose of the augmentation procedure is to prevent overfitting by extending the training dataset in a certain way.

Some transformed images still look like the image from the original dataset, since a small change in contrast or brightness still makes the image look real.

For others, there can be a significant change in color, or the original image occupies only a small fraction of the resulting image.

In all cases, one can still classify this object as a dog. However, many of these would not lie on the manifold of real images, and the classification for these may not make sense.

Are there some papers or research discussing the issue of the augmentation choice, and the correspondence of these to the directions along the manifold of real images?

",38846,,,,,10/27/2021 7:51,What is the sensible amount of augmentation?,,0,0,,,,CC BY-SA 4.0 32199,1,32229,,10/27/2021 8:18,,1,181,"

Karpathy in his blog post said

[this] hints at the kinds of architectures we’ll eventually explore. As an example - are very local features enough or do we need global context?

I'd like to gain expertise in designing networks for local vs global features. Obviously, increasing the receptive field of neurons (through more pooling/striding and bigger kernels) will allow the network to take more global context into account.

Could someone point me in the direction of some readings on the topic?

Are there any specific architectures targeted to local features or to global context?

",50614,,18758,,10/27/2021 9:02,10/30/2021 6:39,CNN Architectures for local features vs global context,,1,0,,,,CC BY-SA 4.0 32200,1,,,10/27/2021 9:37,,1,66,"

Consider the following sentences from the research paper titled PatternNet: Visual Pattern Mining with Deep Neural Network by Hongzhi Li et al.

The value of each pixel in a feature map is the response of a filter with respect to a sub-region of an input image. A high value for a pixel means that the filter is activated by the content of the sub-region of the input image.

The sentences mention the sub-regions of an image. Is there any formal definition for a sub-region of an image?

",18758,,,,,10/27/2021 13:03,What is meant by sub-region of an image?,,1,0,,,,CC BY-SA 4.0 32201,2,,32166,10/27/2021 12:47,,0,,"

My understanding of your environment is:

  • The batch number $b$ is the same thing as a time step $t$. Each batch is associated with a single static representation of the environment, the agent makes one decision, then receives a reward and a next state. I am using $t$ instead of $b$ to match the usual name seen in RL.

  • There is a generator which can be in a state $s_t$, initially unknown for $s_0$.

  • Each time step, $t$, the generator creates a batch of "pebbles". The generator state $s_t$ determines the output of the generator, and is the only non-random influence on the contents of each batch - it controls the possible rewards plus the possible next states.

  • Selecting a specific pebble from the batch returns an immediate reward $r_{t+1}$ plus sets the next state $s_{t+1}$. This is the agent's action, and it can choose any pebble from the batch as its action $a_t$.

  • The agent can see the values of $r_{t+1}$ and $s_{t+1}$ associated with each pebble before making its selection.

This is a slightly unusual setup for an MDP, but I think it is still a valid MDP, and the expected return can still be maximised using reinforcement learning. One important detail is that although you have a discrete list of actions in each batch, the action index values look like they are meaningless. Your action choice is really to choose the next state $s_{t+1}$ from a presented list of available states. Your action space is therefore continuous, not discrete, even though at each step you are selecting from a discrete set of actions.

The environment design is different enough from standard that most off-the-shelf RL agent libraries will not work for you. However, you can take advantage of this knowledge of the action space, and the fact that there is no other meaningful state than that selected by the agent when it chooses a pebble, to learn state values efficiently.

I would suggest you use a variant of Q-Learning, based on the "afterstate" values that you are selecting. You can train a neural network to approximate the state value function $\hat{v}(s,\theta)$ and you have a choice whether to use discounted returns or an average reward setting. When deciding the greedy target policy, you will want to select according to

$$\pi = \text{argmax}_i (r_i + \gamma \hat{v}(s_i,\theta))$$

for discounted return. For Q learning, technically you should also store the whole batch output in experience replay, because the TD target will be

$$g = \text{max}_i (r_i + \gamma \hat{v}(s_i,\theta))$$

and you won't know what this max will be later at the time of storing the memory. There may be ways around that - for instance you could use SARSA updates based on the actual action taken, or you could store the value of $g$. However, these would be less sample efficient and you would need to discard the memory at a faster rate, before it became too out of date with respect to the current policy.
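As a small sketch of the greedy selection over the batch and the corresponding TD target (the value network v_net and the variable names are illustrative only; v_net is assumed to map a state to a scalar value estimate):

import numpy as np

def select_pebble(batch, v_net, gamma=0.99):
    # batch is a list of (reward, next_state) pairs, one per pebble
    scores = [r + gamma * v_net(s) for r, s in batch]
    best = int(np.argmax(scores))
    td_target = scores[best]   # max_i (r_i + gamma * v(s_i)), used as the Q-learning target
    return best, td_target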

I think an average reward setting will be closer to your stated goal of generating the highest expected reward from each individual batch, but I am not 100% certain how to set up the policy for that, whilst a discounted return may end up preferring a high immediate reward in some circumstances. However, you may be able to get near-identical behaviour by setting $\gamma$ high enough (e.g. $0.99$ or higher) in the more familiar discounted return setting.

The efficiency of this approach will depend on how easy it is for the agent to learn the mapping between states and the quality of the next batch. However, you should have a higher sample efficiency when selecting by afterstate, because the agent does not need to learn separately how actions relate to states.

",1847,,1847,,10/28/2021 9:44,10/28/2021 9:44,,,,0,,,,CC BY-SA 4.0 32202,2,,32200,10/27/2021 13:03,,1,,"

It seems that they are informally using the term "sub-region" to refer to the section of the image with which you multiply the kernel to produce a scalar value of the feature map (which they call pixel, but I would use the term pixel only to refer to the scalar values of the image). So, it seems that a sub-region is a synonym for receptive field. We have a question on the difference between receptive fields and feature maps here.

",2444,,,,,10/27/2021 13:03,,,,0,,,,CC BY-SA 4.0 32203,2,,32192,10/27/2021 14:44,,1,,"

I think a custom loss function would be an overkill in this situation. A linear pattern like this would be easily learned by any loss desinged for multi class classification.

If I were you I would try 3 roads, in this order:

  • train a classifier based on decision trees (random forest & xgboost in primis). The rule you described would most likely be learned as first split to perform, if the remaining features are easily separable then such classifier would perform better than a neural network, plus it would be interpretable.
  • train a neural network without any fancy loss function, again the rule you describe is linear and really simple, no reason to think any loss function would miss it. BUT, if you have specific reasons to think that or if you want to try to get the best of the best out of your data there is still another possibility.
  • train an ensambled model. More precisely, split the data yourself into 2 subsets, one with positive feature "x" and one with negative feature x, and train 2 separate models on each subset of training data. It is reasonable to think that the model trained on the subset associated with positive feature x would perform better, since the problem will turn from multi class classification to binary classification, but with pros comes cons as well, specifically higher risk of over fitting or class unbalance depending on the stats of your overall dataset.
",34098,,,,,10/27/2021 14:44,,,,0,,,,CC BY-SA 4.0 32205,1,,,10/27/2021 14:59,,1,170,"

I am reading a paper known as GIN, How powerful are graph neural networks?, Xu et al. 2019

The paper, Lemma 5 and Corollary 6, introduces Graph Isomorphism Network (GIN).

In Lemma 5,

Moreover, any multiset function $g$ can be decomposed as $g (X) = \phi(\sum_{x\in X} f(x))$ for some function $\phi$

Similarly, in Corollary 6,

Moreover, any function $g$ over such pairs can be decomposed as $g (c, X) = \varphi((1+\epsilon) f(c)+ \sum_{x\in X} f(x))$ for some function $\varphi$.

Finally, it makes MLP by compositing $f^{(k+1)}$ and $\varphi^{(k)}$. i.e

$f^{(k+1)} \varphi^{(k)}$

I know that $h(X)$ or $h(c,X)$ is injective, because they are unique to $c$ and $X$(in Lemma 5 and Corollary 6, respectively).

What I don't understand is: in the statements in Lemma 5 and Cor 6, $\phi$ and $\varphi$, I don't know whether they are injective or not.

My question is then: why is GIN powerful? i.e, Why does GIN preserve injectivity?

This paper explains that injectivity should be preserved to get powerful GNN. How can I answer this question?

",50119,,2444,,10/29/2021 9:45,10/29/2021 9:45,Why is the Graph Isomorphism Network powerful?,,0,0,,,,CC BY-SA 4.0 32206,1,,,10/27/2021 20:40,,1,61,"

From what I have been reading, I see statements like

Ontology is a common method used for knowledge representation in artificial intelligence.

But there is never really a discussion around what other options are available. To allow me to research other options I am hoping someone could maybe suggest alternatives?

As an extra question, do you have a preference for a particular system? If so, why?

",50516,,2444,,10/29/2021 15:07,10/29/2021 15:07,"Apart from ontologies, which other methods for knowledge representation are there in Artificial Intelligence?",,0,0,,,,CC BY-SA 4.0 32207,1,32277,,10/28/2021 2:24,,0,50,"

I've got a task similar to the following:

Out of x amount of people, I need to predict, who could be a good athlete and who not. The thing is, I don't have data on the athletic performance of those individuals.

So I was thinking of looking into assumptions/traits: Most of the NBA players are tall. If someone of a random amount of people is tall, it could be a good basketball player. In contrast, a tall person would not be a good jockey. The same goes for age - 3 years or 90 years old might not make a world-performing athlete, etc.

How would I best build a dataset for this problem? Which features do I need to add to the dataset in order to make good predictions about athletic performance?

",50630,,2444,,11/3/2021 13:10,11/4/2021 12:31,"Generating a dataset from data with ""assumed"" lables",,1,0,,,,CC BY-SA 4.0 32208,1,32220,,10/28/2021 4:07,,1,314,"

I am familiar with the variational autoencoder, but not totally clear on what exactly the AEVB is.

In the original VAE paper (by Kingma and Welling), he uses both the terms variational autoencoder and autoencoding variational Bayes.

For the case of an i.i.d. dataset and continuous latent variables per datapoint, we propose the Auto-Encoding VB (AEVB) algorithm. In the AEVB algorithm, we make inference and learning especially efficient by using the SGVB estimator to optimize a recognition model that allows us to perform very efficient approximate posterior inference using simple ancestral sampling, which in turn allows us to efficiently learn the model parameters, without the need of expensive iterative inference schemes (such as MCMC) per datapoint.

And then in section 2, the paper talks about the SGVB estimator and the AEVB algorithm.

Then in section 3, it says that the VAE is an example.

So, is the VAE a special application of AEVB?

",46842,,2444,,10/29/2021 10:17,10/30/2021 13:09,How is the VAE related to the Autoencoding Variational Bayes (AEVB) algorithm?,,1,0,,,,CC BY-SA 4.0 32209,2,,32191,10/28/2021 12:36,,2,,"

It's a pleasant surprise that somebody is interested in the fundamental convergence properties of TD, like me:)

It depends on which algorithm's convergence you want to know.

  1. TD in the tabular case
  2. TD with value function approximation: linear function approximation
  3. TD with value function approximation: nonlinear function approximation by NN

Here TD mainly refers to TD($\lambda$).

  1. If it is the first case, you can read the paper:

1993 - On the Convergence of Stochastic Iterative Dynamic Programming Algorithms

The most important result is Lemma 1, which is an extension of Dvoretzky's Theorem. If you would like to further study Dvoretzky's Theorem, you will see that the math behind the convergence proof is stochastic approximation (SA). If you go deep into SA, you may diverge too far from RL. One suggestion is to study the Robbins-Monro algorithm, which is easy to understand and inspiring (see the sketch after this list).

  2. If it is the second case, you have been reading the correct paper, though this paper is really complicated. But if you are determined to understand this paper, one suggestion is to consider the special TD(0) case, which will be easier to understand. Nevertheless, it is still nontrivial. But once you understand it, everything will be crystal clear.

  3. If it is the third case, I doubt it could be proved. That is simply because convergence properties are extremely challenging to analyze when the function approximator is nonlinear. But it does not mean the convergence analysis in the linear case is meaningless, nor that the nonlinear case is impractical.
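To give a flavor of the Robbins-Monro idea mentioned in the first case, here is a minimal sketch (not taken from any of the cited papers): to find the root of $g(\theta) = 0$ when only noisy evaluations of $g$ are available, iterate $\theta_{n+1} = \theta_n - a_n y_n$ with step sizes satisfying $\sum_n a_n = \infty$ and $\sum_n a_n^2 < \infty$.

import numpy as np

rng = np.random.default_rng(0)
g = lambda theta: theta - 2.0            # function whose root (theta* = 2) we want to find

theta = 0.0
for n in range(1, 10001):
    y = g(theta) + rng.normal(0.0, 1.0)  # noisy observation of g(theta)
    a_n = 1.0 / n                        # step sizes: sum a_n = inf, sum a_n^2 < inf
    theta = theta - a_n * y              # Robbins-Monro update
print(theta)                             # close to 2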

",,user50121,,,,10/28/2021 12:36,,,,0,,,,CC BY-SA 4.0 32214,1,32224,,10/29/2021 0:46,,4,103,"

The ability to create self-replicating machines can give some very useful benefits. So what is the problem with creating this type of stuff?

Let's say we have two pieces of equipment - 3d printers and robotic arms. These items are already available and are easy to create.

It looks like they are enough to create self-replicating machines. 3D printers are able to print any parts for arms and printers. Robotic arms are able to assemble other arms and printers. Both kinds of equipment are able to create almost any other kind of stuff.

So we need only one set of 3D printers and arms with a basic program to start the process. More sophisticated programs can be added later to create almost any type of equipment from a design. If there are enough raw materials, this process can be scaled indefinitely, allowing us to construct, gather resources, etc.

So, what is the problem with that scheme? Why is it not already used everywhere?

",48812,,2444,,10/29/2021 9:49,9/30/2022 15:11,Why aren't 3d printers and robotic arms already used to create the first versions of self-replicating machines?,,4,3,,,,CC BY-SA 4.0 32215,1,,,10/29/2021 2:43,,1,718,"

Problem setting

Consider a game like trading a stock

  • At each step, the agent can buy / sell a stock.
  • A trade is a pair of buy and sell actions.
  • We can set the reward for the sell action as the profit made in this trade, although this might be misleading, since whether a trade is good or not depends on both the buying and the selling timing.
  • We don't know the reward for buy, whether a buy is good or not depends on when you sell.

Questions

  1. How should the reward scheme look for a game like this, i.e. where whether one action is good or not depends on other actions taken before it?
  2. Can we set a scheme like this: set the reward to be zero at each step, no matter whether it is a buy or a sell, and only give the reward at the end of the game, say after 1000 steps, where the reward, in this case, is the total money made?
",48796,,2444,,12/11/2021 21:09,12/11/2021 21:09,How should I define the reward function for a stock trading-like game?,,1,0,,,,CC BY-SA 4.0 32216,1,,,10/29/2021 3:51,,3,426,"

As per my knowledge, any entity that is learnable by a training algorithm can be called a parameter. Weights of a neural network are called parameters because of this reason only.

But I have doubts about what qualifies as a hyperparameter.

A hyperparameter, according to my knowledge, is an entity that needs to be learned outside of the training algorithm. But a lot of entities can come into the picture if I want to follow this definition.

For example: the selection of the type of neural network, the number of layers, the number of neurons in each layer, the presence of a batch normalization layer, the type of activation function, the number of parameters, the type of parameters (integer, float, etc.), the number of epochs, the batch size, the type of optimizer, the learning rate, etc. I could list many more entities like this.

Is it okay to call anything that needs to be learned outside the training algorithm a hyperparameter?

",18758,,2444,,10/29/2021 12:26,10/29/2021 15:10,When can I call an entity a hyperparameter?,,2,0,,,,CC BY-SA 4.0 32217,2,,32215,10/29/2021 7:40,,1,,"

how should the reward scheme be for a game like this? i.e., whether one action is good or not depends on other actions taken before it?

The reward scheme should always be a "natural" representation of the goals of the agent. If the goal of your agent is to make a profit, then the reward should be the amount of profit*.

It is important to RL, and becomes critical for rewards that may be delayed, that the state representation captures all the information that impacts those rewards. That means representing the direct effects of actions, so that they can be tracked. In the case of a trading bot, it will be important at a minimum that the state includes the current portfolio. If the agent can access data about the amount and value of stock it currently holds, it has a chance to predict likely profits later.

can we set a scheme like this: set the reward to be zero at each step, no matter it is a buy or a sell. Only give the reward at the end of the game, say after 1000 steps, and the reward, in this case, is the total money made?

You can, but that looks like it is making the problem harder than it needs to be.

I would naturally set the reward to be the profit on each transaction (including negative reward for buying stock). The agent should be able to use that to figure out long term rewards, and will learn faster if it is given this more direct feedback. RL algorithms are well-suited to figuring out the need to invest at an earlier time in order to make profit later. If there is also a time horizon, that should be factored into the state - the agent should be made aware that the game will end in so many time steps otherwise it may end up holding stock instead of selling it in time to meet the final evaluation.
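For instance, a per-step reward of this kind might be computed as follows (a rough sketch with made-up variable names, not a complete trading environment):

def step_reward(action, price, shares=1, fee=0.0):
    # the reward is the signed cash flow of the transaction:
    # negative when buying stock, positive when selling it
    if action == "buy":
        return -(price * shares + fee)
    if action == "sell":
        return price * shares - fee
    return 0.0  # hold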

One important caveat is that real trading problems are hard, because the real environment is highly variable and competitive. The competition includes intelligent people with advanced qualifications in statistics. The chances of making a trading bot that is successful in a real trading environment are low.


* Sometimes this "natural" representation is too sparse, and you may want to look into reward shaping. That is adding interim rewards or spreading out an existing reward, to assist the learning agent. For a stock trading game for instance, it may help to allocate some reward for gaining ownership of a stock, perhaps the nominal resale value minus the fees for selling it.

However, where it is possible to stick with the simplest direct interpretation of the agent's goals, then you should do that. Reward shaping comes with the potential risk that the changes you have made will impact what it means for the agent to perform optimally.

",1847,,1847,,10/31/2021 9:49,10/31/2021 9:49,,,,0,,,,CC BY-SA 4.0 32219,2,,32216,10/29/2021 11:57,,2,,"

Is it okay to call anything that needs to be learned outside the training algorithm a hyperparameter?

I think so, yes.

Personally, I would reserve the term to discuss values that I could choose freely in any given experiment, but that were not learned directly from the training data by the model class I was using. Or perhaps with a tighter restriction for practical purposes, things I would be willing to investigate and change in order to achieve the goal of getting good metrics from the learning agent - it doesn't matter from a practical perspective what I label a value for something that could be changed, if I am not considering changing it.

Choices that I am not interested in making for a given experiment might be hyperparameters to someone else who is performing a different kind of search.

Discrete, top-level choices such as whether to use linear regression versus XGBoost versus a deep neural network might technically be considered hyperparameters, and a sophisticated search algorithm may even automatically test across them. However, I rarely see this kind of choice discussed using the term hyperparameter.

",1847,,1847,,10/29/2021 15:10,10/29/2021 15:10,,,,0,,,,CC BY-SA 4.0 32220,2,,32208,10/29/2021 12:16,,2,,"

Short answer

The VAE is not the Auto-Encoding Variational Bayes (AEVB) algorithm or an instance of it.

The VAE is a machine learning model. More specifically, it's a probabilistic auto-encoder, where the probabilistic encoder (aka recognition model) and decoder (aka generative model) are represented as neural networks. (Here, "probabilistic" means that they produce/represent a probability distribution).

The AEVB algorithm is a learning/inference algorithm that can be used to find the parameters of the VAE, i.e. to perform approximate variational inference in the following directed graphical model (figure 1 of the VAE paper). (Note that "inference" in Bayesian statistics has a specific meaning, i.e. the computation of the posterior using Bayes rule)

Long answer

The AEVB algorithm

The Auto-Encoding Variational Bayes (AEVB) is the algorithm used to find the parameters $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$, as you can conclude by reading its pseudocode given in the paper.

The AEVB algorithm stochastically estimates the gradient of the objective function, i.e. the Evidence Lower BOund (ELBO) (equation 3). They propose 2 estimators of the ELBO (equations 6 and 7) (not of its gradient, which is computed automatically by back-propagation; note that stochastic estimation of the gradient of the objective function is very common in deep learning: see this). Moreover, the AEVB algorithm also uses the re-parametrization trick (i.e. it samples from $p(\boldsymbol{\epsilon})$ to avoid high variance).
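To illustrate the re-parametrization trick and a one-sample ELBO estimate (a minimal sketch, assuming a Gaussian $q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$, a standard normal prior, a Bernoulli decoder, and PyTorch; this is not the authors' code):

import torch
import torch.nn.functional as F

def sample_z(mu, log_var):
    # z = mu + sigma * eps, eps ~ N(0, I): gradients can flow through mu and log_var
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps

def elbo(x, x_reconstructed, mu, log_var):
    # reconstruction term: log p_theta(x | z) for a Bernoulli decoder
    reconstruction = -F.binary_cross_entropy(x_reconstructed, x, reduction="sum")
    # analytic KL(q_phi(z | x) || N(0, I)) for a diagonal Gaussian q
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return reconstruction - kl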

The VAE model

The VAE is the model that you create when you use neural networks to represent the encoder (aka recognition model) and decoder (aka generative model), which are parametrized by $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$, respectively. So, the VAE is not the AEVB algorithm. You use the latter to find the parameters of the VAE.

The authors write in the same paragraph that you are quoting

When a neural network is used for the recognition model, we arrive at the variational auto-encoder.

In the appendix, they also write

In variational auto-encoders, neural networks are used as probabilistic encoders and decoders. There are many possible choices of encoders and decoders, depending on the type of data and model. In our example we used relatively simple neural networks, namely multi-layered perceptrons (MLPs). For the encoder we used a MLP with Gaussian output, while for the decoder we used MLPs with either Gaussian or Bernoulli outputs, depending on the type of data.

Variational and generative parameters

Ok, but what are these parameters $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$?

  • $\boldsymbol{\phi}$ are the parameters of the variational distribution $q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$, which is an approximation of the (usually intractable) posterior $p_{\boldsymbol{\theta}}(\mathbf{z} \mid \mathbf{x}) = p_{\boldsymbol{\theta}}(\mathbf{z}) p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})/p_{\boldsymbol{\theta}}(\mathbf{x})$; so, in the case of the VAE, these are the weights of the encoder

  • $\boldsymbol{\theta}$ are the parameters of the generative model $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$; so, in the case of the VAE, these are the weights (and biases) of the decoder neural network

Recap

So, to recap, the AEVB is the training/learning/inference algorithm to find $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$. The AEVB stochastically estimates the ELBO (and, consequently, its gradient) and uses the re-parametrization trick. If you use a neural network to represent the encoder and the decoder, you get the VAE, which is not the same thing as the AEVB, the algorithm to find the parameters of the neural networks (i.e. the encoder and decoder).

",2444,,2444,,10/30/2021 13:09,10/30/2021 13:09,,,,4,,,,CC BY-SA 4.0 32221,2,,32216,10/29/2021 12:24,,4,,"

In older machine learning literature the given definition of hyperparameters was explicitly the same used in Bayesian statistics, i.e.

a hyperparameter is a parameter of a prior distribution

For example, in Christopher M. Bishop's "Pattern Recognition and Machine Learning" (Springer, 2006), hyperparameters are introduced in the following paragraph (page 30)

Now let us take a step towards a more Bayesian approach and introduce a prior distribution over the polynomial coefficients $\mathbf{w}$. For simplicity, let us consider a Gaussian distribution of the form $$ p(\mathbf{w} \mid \alpha)=\mathcal{N}\left(\mathbf{w} \mid \mathbf{0}, \alpha^{-1} \mathbf{I}\right)=\left(\frac{\alpha}{2 \pi}\right)^{(M+1) / 2} \exp \left\{-\frac{\alpha}{2} \mathbf{w}^{\mathrm{T}} \mathbf{w}\right\} $$ where $\alpha$ is the precision of the distribution, and $M+1$ is the total number of elements in the vector $\mathbf{w}$ for an $M^{\text {th }}$ order polynomial. Variables such as $\alpha$, which control the distribution of model parameters, are called hyperparameters.

In modern machine learning literature though, the definitions became more operational. For example, in Ian Goodfellow, Yoshua Bengio, Aaron Courville - Deep Learning (2016), we can read

Most machine learning algorithms have hyperparameters, settings that we can use to control the algorithm's behavior. The values of hyperparameters are not adapted by the learning algorithm itself (though we can design a nested learning procedure in which one learning algorithm learns the best hyperparameters for another learning algorithm).

So, there is room for interpretation, even though I personally find more technical and precise the old reference to Bayesian statistics. It is clear from that definition that every variable not belonging to the parameters used in the prediction phase but only during training are indeed hyperparameters. Moreover, it is clear that the choice of hyperparameters affects the distribution of learned parameters once the model training reaches convergence.

To elaborate a bit more on the modern definitions, what I don't like about the example taken from Deep Learning is the lack of further explanation about the meaning of "model behavior". Does it refer to weight updating during training? The final metric scores? Both and more? In other words, what are hyperparameters supposed to affect? Of course, these loose definitions do not stop machine learning practitioners from using hyperparameters in the right place, but it is no surprise that doubts like the one in this question emerge.

",34098,,2444,,10/29/2021 14:48,10/29/2021 14:48,,,,1,,,,CC BY-SA 4.0 32222,2,,32161,10/29/2021 14:34,,1,,"

The VAE models the following directed graphical model (figure 1 from the original VAE paper)

So, you have 2 sets of parameters, $\boldsymbol{\phi}$ and $\boldsymbol{\theta}$, and 2 random variables, $\mathbf{z}$ (latent r.v.) and $\mathbf{x}$.

How can you view this graphical model (in figure 1 above) as a generative model?

  1. First, a sample $\mathbf{z}^{(i)}$ is generated from the (variational) probability distribution $q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$

  2. Then, a sample $\mathbf{x}^{(i)}$ is generated from $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z}^{(i)})$

Now, more concretely, let's assume that we have a dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$ of $N$ i.i.d. samples of the random variable $\mathbf{x}$. So, each of these $\mathbf{x}^{(i)}$ has been generated as follows

  1. a sample $\mathbf{z}^{(i)}$ is generated from some prior distribution $p_{\boldsymbol{\theta}}(\mathbf{z})$

  2. a sample $\mathbf{x}^{(i)}$ is generated from some conditional distribution $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z}^{(i)})$.

Now, in the VAE, the encoder represents $q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$, while the decoder represents $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$. If you want to train a VAE, you also need to make an assumption about $p_{\boldsymbol{\theta}}(\mathbf{z})$, for example, you can assume $p_{\boldsymbol{\theta}}(\mathbf{z}) = \mathcal{N}(\mathbf{z} ; \mathbf{0}, \mathbf{I})$. Once trained, you use the variational distribution (encoder) as the prior from which you sample $\mathbf{z}^{(i)}$ (although we train the VAE as if the variational distribution is an approximation of the true/unknown/intractable posterior given a usually fixed prior and the likelihood/decoder), in order to sample $\mathbf{x}^{(i)}$. This is not wrong. In fact, this is just how you usually do Bayesian statistics. You have a prior and a likelihood (and maybe a marginal), then you compute the posterior, which then becomes the new prior. So, if you had more data, you could learn a new variational distribution $q'_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$ by assuming that your new prior is $q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})$.

If you keep in mind the following equation

$$p_{\boldsymbol{\theta}}(\mathbf{x}, \mathbf{z})=p_{\boldsymbol{\theta}}(\mathbf{z}) p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$$

it should remind you why the VAE can be viewed as a generative model.

",2444,,,,,10/29/2021 14:34,,,,1,,,,CC BY-SA 4.0 32224,2,,32214,10/29/2021 18:44,,2,,"

We also discussed this topic at physics.stackexchange, take a look at physics related issues there. To summarize all the comments there, here is my answer to this topic as a complilation of all issues in discussions (feel free to propose your explanations).

It looks like there are no special technical problems with that. Basically, both arms (i.e. manipulators) and 3D printers consist of servomotors, wires, chips, and structural mechanical elements. I guess there's no doubt that they could all be 3D printed.

As often seen in modern science/research/development, the only problem is with funding. Problems at this scale require solid funds over a long period of time. That is not compatible with the modern financial world, which aims at short-term profits in simple things. Both states and the commercial sector (venture firms) are currently not able or willing to fund it, due to the uncertainties.

The solution might be centralized funding using taxes from states via the UN or something similar. It could be a cross-state global research and development fund (with, let's say, shares of 1% of GDP per state). The results (products and tech blueprints) could be shared among participants according to their share. But that requires a lot of changes/effort and is, unfortunately, not realistic in the near future.

",48812,,,,,10/29/2021 18:44,,,,5,,,,CC BY-SA 4.0 32225,2,,32214,10/29/2021 23:07,,0,,"

This is a high-level answer.

  • Much of the focus has been on nanotech as opposed to self-replication on the macro scale

Search of Google Scholar with keywords "self replicating machines molecular nano" reveals a slew of papers, especially in the early 2000's, with diminishment in activity subsequently. (My recollection is it turned out to be harder than initially hoped, but CRISPR has gotten a lot of attention in regard to molecular scale machines.)

There may be a weakening distinction between what constitutes a machine at this scale subsequent to CRISPR, i.e. organic machines vs. inorganic machines.

Biology provides numerous examples of self-replicating machines, but artificially engineering such complex systems remains a formidable challenge. In particular, although simple artificial self-replicating systems, including wooden blocks, magnetic systems, modular robots, and synthetic molecular systems, have been devised, such kinematic self-replicators are rare compared with examples of theoretical cellular self-replication. One of the principal reasons for this is the amount of complexity that arises when you try to incorporate self-replication into a physical medium

Kim, J., Lee, J., Hamada, S. et al. Self-replication of DNA rings. Nature Nanotech 10, 528–533 (2015)

Programmable manufacturing systems capable of self-replication closely coupled with (and likewise capable of producing) energy conversion subsystems and environmental raw materials collection and processing subsystems (e.g. robotics) promise to revolutionize many aspects of technology and economy, particularly in conjunction with molecular manufacturing. The inherent ability of these technologies to self-amplify and scale offers vast advantages over conventional manufacturing paradigms, but if poorly designed or operated could pose unacceptable risks.

Rabani E.M., Perg L.A. (2019) Demonstrably Safe Self-replicating Manufacturing Systems. In: Schmorrow D., Fidopiastis C. (eds) Augmented Cognition. HCII 2019. Lecture Notes in Computer Science, vol 11580. Springer, Cham.

However, search using the terms "self replicating machines macro scale" does return many results, some recent.

This may be more what you're looking for:

This paper introduces the concept of a physical self-replicating machine for deployment on the Moon utilizing raw material available on the Moon. A detailed but selective review is given in order to highlight clearly the novel aspects of this concept. In particular, it is hypothesized that if electric motors and vacuum tubes can be 3D printed from the limited repertoire of lunar materials, 3D printing constitutes a universal construction mechanism. This follows from the observation that mechatronic components are the constituent parts of all robotic mechanisms. In particular, we examine the use of 3D printing of electronics as a physical instantiation of a Turing machine. Several general implications of such a self-replicator are considered including whether it constitutes artificial life and mitigation against runaway replication.

Alex Ellery; September 4–8, 2017. Building physical self-replicating machines. Proceedings of the ECAL 2017, the Fourteenth European Conference on Artificial Life. ECAL 2017, the Fourteenth European Conference on Artificial Life. Lyon, France. (pp. pp. 146-153).

Possibly this type of endeavor has more chance of serious funding, since the long-term rewards are so potentially great, and cost of using humans in those environments is presumed to be even greater.

(At least one super-power seems interested in a moon-base, and Elon wants to die on Mars;)

  • Runaway self-replication seems to be a concern in all areas

This could be why there is some reticence (see gray goo—only needs to be "intelligent" at one thing;) but it would seem to be less of a concern at the macro-scale in a terrestrial setting at current level of AI. In this domain, it's almost certainly a question of cost vs. cost of human labor.

",1671,,1671,,10/29/2021 23:14,10/29/2021 23:14,,,,1,,,,CC BY-SA 4.0 32226,2,,32214,10/30/2021 2:04,,1,,"

The problem is that accuracy degrades exponentially, and no 3D-printed part today can fix the accuracy issue.

The other issue is that making modern things requires a ridiculous amount of specialized industry.

What you're proposing will only work if we can 3D print at a tiny, tiny scale. If you can reliably print semiconductors, then this will be viable.

",32390,,,,,10/30/2021 2:04,,,,2,,,,CC BY-SA 4.0 32227,2,,32044,10/30/2021 6:18,,1,,"

You did a great job at this...

You can use TensorFlow's built-in LogSumExp function to avoid numerical problems. This routine prevents overflow/underflow issues that may occur when LogSumExp encounters very extreme values, either positive or negative.
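For example (a minimal sketch; tf.math.reduce_logsumexp is the relevant TensorFlow routine, and the values below are made up to show the overflow case):

import tensorflow as tf

logits = tf.constant([[1000.0, 999.0, 0.0]])        # a naive exp() here would overflow
stable = tf.math.reduce_logsumexp(logits, axis=-1)  # computes log(sum(exp(x))) in a numerically stable way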

You have sorted this out:

There need to be images from the generator. The discriminator learns to classify these as fake.

Real images with labels. These are image label pairs like in any regular supervised classification problem.

Real images without labels. For those, the classifier only learns that these images are real.

I would recommend you visit this link: Semi-supervised learning with Generative Adversarial Networks (GANs)

Good luck.

",50672,,50672,,10/30/2021 6:24,10/30/2021 6:24,,,,0,,,,CC BY-SA 4.0 32229,2,,32199,10/30/2021 6:39,,1,,"

Well, regarding the properties of CNNs with respect to local versus global features, you should familiarize yourself with the concepts of invariance and equivariance. At some point, you should also learn about the Picasso problem, which is a consequence of the invariances and equivariances of CNNs + pooling.

That will probably also mean you'll encounter Capsule Nets at some point. While not really CNNs, learning about CapsNets can elucidate the weaknesses of convolutions since CapsNets were made to solve several of those.

There are CNN architectures that use different scales of local features in parallel, such as the Inception architecture and ResNeXt; both combine local features at different scales, i.e. they use differently sized kernels in parallel to improve classifications.

",31879,,,,,10/30/2021 6:39,,,,0,,,,CC BY-SA 4.0 32230,2,,32044,10/30/2021 7:02,,1,,"

If you have ruled out leakage completely, read this observation about double descent: https://openai.com/blog/deep-double-descent/. This blog post from OpenAI shares the observation that the validation loss can decrease again even after initially increasing (which is typically a sign of the start of overfitting, e.g. in early stopping).

",41576,,,,,10/30/2021 7:02,,,,0,,,,CC BY-SA 4.0 32236,1,,,10/31/2021 8:10,,3,47,"

Transformer architecture (without position embedding) is by the very construction equivariant to the permutation of tokens. Given query $Q \in \mathbb{R}^{n \times d}$ and keys $K \in \mathbb{R}^{n \times d}$ and some permutation matrix $P \in \mathbb{R}^{n \times n}$, one has: $$ Q \rightarrow P Q, \quad K \rightarrow P K $$ $$ A = \text{softmax} \left(\frac{Q K^T}{\sqrt{d}} \right) \rightarrow \text{softmax} \left(\frac{P Q K^T P^T}{\sqrt{d}} \right) = P \ \text{softmax} \left(\frac{Q K^T}{\sqrt{d}} \right) P^T = P A P^T $$ Without a positional embedding (learned or fixed), which breaks the permutation symmetry down to translational symmetry only, there is no notion of location. And with the positional embedding one introduces such a notion - token $x$ is at the $k$-th position.

However, I wonder, whether this notion makes sense after the first self-attention layer.

The operation of producing the output (multiplication of the attention by the values) $$ x_{out} = A V $$ outputs some weighted sum of all tokens in the sequence; the token at the $k$-th position now contains information from the whole sequence.

So, I wonder, whether the notion of position (absolute or relative) still makes sense in this case?

My guess is that, since Transformers involve skip connections, these transfer the notion of location to the next layers. But this depends on the relative magnitude of the activations of the given self-attention layer and the skip connections.

",38846,,,,,11/25/2022 15:01,Is there a notion of location in Transformer architecture in subsequent self-attention layers?,,1,0,,,,CC BY-SA 4.0 32238,2,,32236,10/31/2021 11:31,,0,,"

The attention mechanism solves this problem by allowing the decoder to "look back" at the encoder's hidden states based on its current state. This allows the decoder to extract only the relevant information about the input tokens at each decoding step, thus learning more complicated dependencies between the input and the output.

",50690,,,,,10/31/2021 11:31,,,,2,,,,CC BY-SA 4.0 32240,1,,,10/31/2021 12:53,,1,43,"

I have the problem described below. I wonder if there exists a class of multi-armed bandit approaches that is related to it.

I am working on computer networking optimization.

In the simplest scenario, we model the network as a graph with a circular node topology, similar to that seen in Chord (attached photo). Each node (vertex) can have a maximum number of $X$ active links (tunnels or edges) to other nodes at any given time. It can then open, maintain, or close links (each operation has a cost associated with it). If there isn't a direct edge, traffic must be routed through neighboring nodes. What is the best link structure (the optimal set of edges connecting the nodes) in the underlying graph, given the predicted traffic-intensity matrix between the nodes?

Note: the optimal link structure should be recalculated on a regular basis to account for history (for example, it is worthwhile to keep a connection between two nodes open even though there is no traffic at the current time because it was generally a busy link in the past and the chance of using this link is high in future).

Estimation: Can multi-armed bandit be useful here?

",31773,,18758,,11/1/2021 0:41,11/1/2021 0:41,Multi-armed Bandit in optimization on graph edges selection,,0,0,,,,CC BY-SA 4.0 32241,1,32243,,10/31/2021 15:20,,1,66,"

Lately, I have been reading a lot about the universal approximation theorem. I was surprised to find only theorems about "single-channel" standard networks (multi-layer perceptrons), where each layer is a plain activation vector and the weights can be represented as weight matrices.

In particular, this is no longer applicable in some convolutional network applications, where the layers tend to be tensors with multiple feature channels. Of course, one could construct an equivalent "single-channel" neural network from a multi-channel network by putting the weight matrices together in a certain way. However, one would then have sparse matrices as weight matrices with very many constraints on the matrix entries, so that the "standard theorems" would no longer be applicable.

Do you know of any papers that study the Universal Approximation Theorem for multi-channel neural networks? Or is there a way to derive it from one of the other theorems?

",50693,,2444,,11/2/2021 22:50,11/2/2021 22:50,Is there any paper that shows that multi-channel neural networks are universal approximators?,,1,1,,,,CC BY-SA 4.0 32242,1,,,10/31/2021 15:26,,2,80,"

Instead of changing faces (like James Bond to Putin) what if, given sufficient training data, I wanted to:

  • Remove or add some windows from a brick house?
  • Convert a glass of red wine to a glass of white wine?
  • Remove the infamous Starbucks cup from Game of Thrones?

Although deepfakes are notoriously data- and compute-hungry to train, the pre-trained weights are plug-and-play.

Does there exist any equivalent for non-face objects?

",50695,,50695,,12/15/2021 13:08,12/15/2021 13:08,"Non-face ""deepfakes"" in videos",,0,3,,,,CC BY-SA 4.0 32243,2,,32241,10/31/2021 20:16,,1,,"

Yes, there is such a statement, valid even in a bit more general setting.

Any function that is equivariant to a certain symmetry can be approximated arbitrarily well by a network that respects that symmetry, provided that the number of parameters is sufficient.

For the case of CNN, the symmetry is translational, and with a sufficient number of filters, any translationally equivariant function can be approximated by a suitable CNN.

For the details and exposition, please, look at the paper Universal approximations of invariant maps by neural networks (2018) by Dmitry Yarotsky.

",38846,,2444,,11/2/2021 12:33,11/2/2021 12:33,,,,2,,,,CC BY-SA 4.0 32246,1,,,11/1/2021 0:10,,2,124,"

While going over PGMs and GNNs, it seems like both leverage the graph data structure. The former has been used to represent causal associations (among other things), while the latter has a varied set of applications. Do these techniques intersect?

",31755,,2444,,11/1/2021 14:59,11/1/2021 14:59,What is the difference between Probabilistic Graphical models and Graph Neural networks?,,0,0,,,,CC BY-SA 4.0 32247,1,,,11/1/2021 5:24,,1,68,"

A probability density function is a real-valued function that roughly gives the density of probability at a particular value of a random variable.

For example, the probability density function of a normal random variable is given by

$$f(x) = \dfrac{1}{\sigma \sqrt{2\pi}} \, e^{-\frac{1}{2}\left(\dfrac{x-\mu}{\sigma}\right)^2}$$

The "Kaiming He" uniform distribution is used in PyTorch for initializing the weights of convolutional neural networks; as far as I know, it was originally introduced in the research paper titled Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification by Kaiming He et al.

What is the analytical formula for the Kaiming He probability density function?

",18758,,18758,,11/5/2021 1:39,11/5/2021 1:39,"What is the analytical formula for ""Kaiming He"" probability density function?",,1,0,,,,CC BY-SA 4.0 32248,1,,,11/1/2021 10:35,,1,107,"

The Universal Approximation Theorem states (roughly) that any continuous function can be approximated to within an arbitrary precision $\varepsilon>0$ by a feedforward neural network with one hidden layer (a Shallow Neural Network) with sufficient width.

I remember stumbling upon articles showing that, for some functions, the necessary number of hidden neurons tends to infinity as $\varepsilon$ tends to zero (similar to this one: A closer look at the approximation capabilities of neural networks), but I have not had success finding them again. Is my memory incorrect, or are my searching skills merely insufficient?

",50595,,50595,,12/15/2021 16:43,12/15/2021 16:43,Does there exist functions for which the necessary number of nodes in a shallow neural network tends to infinity as approximation error tends to 0?,,0,6,,,,CC BY-SA 4.0 32249,2,,32247,11/1/2021 11:52,,1,,"

The initialization proposed by He et al. is not a new probability distribution of its own kind. It's an improvement over a previously proposed initialization scheme now called Xavier or Glorot initialization (even though it was named normalized initialization by the authors of the original paper). The Xavier scheme is likewise just a weight initialization and not a new kind of probability distribution.

To answer the question, both schemes simply sample values from either a uniform or a normal distribution. What changes is how the parameters that define those distributions are chosen.

Focusing on the Kaiming initialization, since that's what the question asks about, the formulas used to determine the parameters of the uniform and normal distributions to sample from are:

  • Uniform: $$\boldsymbol{\mathit{U}}(-\sqrt{6 / n_{j}}, +\sqrt{6 / n_{j}}) $$

  • Normal: $$\boldsymbol{\mathbf{\mathit{N}}}(0, \sqrt{2/n_{j}})$$

$n_j$ is the generic notation used in the original paper; other sources, like the PyTorch documentation, use fan_in or fan_out instead. Without going into the details, fan_in means the number of hidden units in the input layer for the weight matrix being initialized (i.e. the number of rows of the weight matrix to initialize), while fan_out means the number of hidden units in the output layer for the weight matrix being initialized (i.e. the number of columns of the weight matrix to initialize).
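For concreteness, here is a rough sketch (my own, with NumPy) of sampling He-initialized weights using the two formulas above, with fan_in playing the role of $n_j$; PyTorch provides ready-made equivalents in torch.nn.init.kaiming_uniform_ and torch.nn.init.kaiming_normal_.

import numpy as np

def he_uniform(fan_in, shape):
    bound = np.sqrt(6.0 / fan_in)            # U(-sqrt(6/n_j), +sqrt(6/n_j))
    return np.random.uniform(-bound, bound, size=shape)

def he_normal(fan_in, shape):
    std = np.sqrt(2.0 / fan_in)              # N(0, sqrt(2/n_j))
    return np.random.normal(0.0, std, size=shape)

W = he_normal(fan_in=256, shape=(256, 128))  # e.g. weights of a 256 -> 128 dense layer
print(W.std())                               # roughly sqrt(2/256) ≈ 0.088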

",34098,,,,,11/1/2021 11:52,,,,0,,,,CC BY-SA 4.0 32250,1,,,11/1/2021 14:40,,0,51,"

2D tasks enjoy a vast backing of successful models that can be reused.

For convolutions, can one simply replace 2D operations with 3D counterparts and inherit their benefits? Any 'extra steps' to improve the transition? Not interested in unrolling the 3D input along channels.

Publication/repository references help.


Details

The input is an STFT-like transform of multi-channel EEG timeseries, except it's 3D. There's spatial dependency to exploit across all three dimensions. The full transform is 4D, but one of the dimensions is unrolled along channels, so data and transform channels are mixed. The transform itself has stacked complex convolutions and modulus nonlinearities (output is real).

I seek to reuse models like SEResNet, InceptionNet, and EfficientNet. The "benefits" of interest are mainly train viability (convergence speed, amount of required tuning) and generalization (test accuracy) - or alternative to latter, that blocks don't interact harmfully by e.g. assuming an inherently 2D structure.
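To make the "simple replace" concrete, here is a minimal sketch (my own, in Keras, with placeholder shapes that are not my real data) of the naive swap I have in mind - each 2D layer replaced by its 3D counterpart. It does not recreate SE/Inception-style blocks; it only illustrates the direct substitution.

import tensorflow as tf
from tensorflow.keras import layers, models

# Input: (depth, height, width, channels) - shapes here are placeholders only
model = models.Sequential([
    layers.Conv3D(32, (3, 3, 3), activation='relu', input_shape=(16, 64, 64, 1)),
    layers.MaxPooling3D((2, 2, 2)),
    layers.Conv3D(64, (3, 3, 3), activation='relu'),
    layers.GlobalAveragePooling3D(),
    layers.Dense(1, activation='sigmoid'),
])
model.summary()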

",32165,,18758,,11/2/2021 6:10,11/2/2021 6:10,2D models on 3D tasks (convolutions): simple replace?,<3d-convolution><2d-convolution>,0,2,,,,CC BY-SA 4.0 32256,1,,,11/3/2021 2:49,,0,57,"

I have a training dataset of 2000 images and 500 images for validation. I have trained for 50 epochs, but my graph looks odd and my accuracy is lower than my loss. I am not sure whether my graph indicates overfitting. If it does, are there ways to resolve it? I am currently running this in a Jupyter notebook. Attached below is my code.

Edit:

By the way, I have made some changes to my code, such as reducing the dropout to 0.2. I have uploaded the new graph output as shown below. Is it considered as normal?

Code

import numpy as np
import keras
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import os
import cv2

Training_dir = './datasets/train'

train_datagen = ImageDataGenerator(rotation_range =15, 
                         width_shift_range = 0.2, 
                         height_shift_range = 0.2,
                         rescale=1./255, 
                         shear_range=0.2, 
                         zoom_range=0.2, 
                         horizontal_flip = True, 
                         fill_mode = 'nearest', 
                         data_format='channels_last', 
                         brightness_range=[0.5, 1.5])

train_gen = train_datagen.flow_from_directory(
    Training_dir,
    batch_size = 64,
    class_mode='binary',
    target_size=(150, 150)
)

Validation_dir = './datasets/val'

val_datagen = ImageDataGenerator(rotation_range =15, 
                         width_shift_range = 0.2, 
                         height_shift_range = 0.2,  
                         rescale=1./255, 
                         shear_range=0.2, 
                         zoom_range=0.2, 
                         horizontal_flip = True, 
                         fill_mode = 'nearest', 
                         data_format='channels_last', 
                         brightness_range=[0.5, 1.5])


validation_gen = val_datagen.flow_from_directory(
    Validation_dir,
    batch_size = 64,
    class_mode = 'binary',
    target_size = (150, 150)
)

imgs = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\cat")
imgs1 = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\dog")
imgs2 = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\dog")
imgs3 = os.listdir(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\cat")
        
    

for img in imgs:
    img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\cat" + "\\"+img)
    x = img_to_array(img)
    x = x.reshape((1,) + x.shape) 

    i = 0
    for batch in train_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\train\cat', save_prefix ='cat', save_format='jpg'):
        i+=1
        if i>5:
            break

for img in imgs1:
    img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\train\dog" + "\\"+img)
    x = img_to_array(img)
    x = x.reshape((1,) + x.shape) 

    i = 0
    for batch in train_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\train\dog', save_prefix ='dog', save_format='jpg'):
        i+=1
        if i>5:
            break
            
for img in imgs2:
    img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\dog" + "\\"+img)
    x = img_to_array(img)
    x = x.reshape((1,) + x.shape) 

    i = 0
    for batch in val_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\Validation\dog', save_prefix ='dog', save_format='jpg'):
        i+=1
        if i>5:
            break
            
for img in imgs3:
    img=cv2.imread(r"C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\datasets\val\cat" + "\\"+img)
    x = img_to_array(img)
    x = x.reshape((1,) + x.shape) 

    i = 0
    for batch in val_datagen.flow (x, batch_size=1, save_to_dir =r'C:\Users\User\Documents\IM4483 Mini Project (Tang Soon Loong Jefferson_U1921181B)\Preview\Validation\cat', save_prefix ='cat', save_format='jpg'):
        i+=1
        if i>5:
            break



from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, Dense, MaxPool2D, Flatten, Dropout

'''
SRC : https://www.pyimagesearch.com/2018/12/31/keras-conv2d-and-convolutional-layers/
----------------------------------
STRIDES = The strides parameter is a 2-tuple of integers, specifying the “step” of the
convolution along the x and y axis of the input volume.
PADDING = Typically, we set the values of the extra pixels to zero ('valid' or 'same')
KERNEL_INITIALIZER = The kernel_initializer controls the initialization method used to initialize
all values in the Conv2D class prior to actually training the network.

FLATTEN = Return a copy of the array collapsed into one dimension.
DROPOUT = dropout refers to ignoring units (i.e. neurons) during the training 
phase of certain set of neurons which is chosen at random. When created, the dropout rate can be specified to the 
layer as the probability of setting each input to the layer to zero. TO PREVENT OVER-FITTING.!

DENSE = A Dense layer feeds all outputs from the previous layer
to all its neurons, each neuron providing one output to the next layer. 
----------------------------------
'''




model = Sequential()
model.add(Conv2D(16, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal',input_shape=(150, 150, 3)))
model.add(Conv2D(16, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.2))

model.add(Conv2D(32, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(Conv2D(32, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(Conv2D(32, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.2))

model.add(Conv2D(64, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(Dropout(0.2))
model.add(Conv2D(64, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.2))

model.add(Conv2D(128, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(Dropout(0.2))
model.add(Conv2D(128, (3,3), strides=(1,1), padding='same', activation='relu',kernel_initializer='he_normal'))
model.add(MaxPool2D((2,2)))
model.add(Dropout(0.2))

model.add(Flatten())
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(units=256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(units=1, activation='sigmoid'))


import tensorflow as tf
check_point_path = './best.h6'
model_checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath = check_point_path,
    monitor = 'val_acc',  # must match the metric name compiled below (metrics=['acc'])
    save_weights_only=False,
    save_best_only=True,
    verbose=1
)

model.compile(optimizer = tf.keras.optimizers.Adam(0.0005,decay=1e-5),
             loss = 'binary_crossentropy',
             metrics = ['acc'])


tb_callback = tf.keras.callbacks.TensorBoard(log_dir="logs/", histogram_freq=1)




print('Num Params : ',model.count_params())


'''
VERBOSE = By setting verbose 0, 1 or 2 you just say how do you want to 'see' the training progress for each epoch.
'''
model_history = model.fit(
    train_gen,
    epochs=50,
    batch_size=128,
    verbose=1,
    callbacks = [tb_callback],
    validation_data=validation_gen
)









%matplotlib inline

import matplotlib.image  as mpimg
import matplotlib.pyplot as plt

#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
Acc=model_history.history['acc']
Val_acc=model_history.history['val_acc']
Loss=model_history.history['loss']
Val_loss=model_history.history['val_loss']

epochs=range(len(Acc)) # Get number of epochs

#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, Acc, 'r')
plt.plot(epochs, Val_acc, 'b')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(["Training Accuracy","Validation Accuracy"])
plt.show()

#------------------------------------------------
# Plot training and validation loss per epoch
#------------------------------------------------
plt.plot(epochs, Loss, 'r')
plt.plot(epochs, Val_loss, 'b')
plt.title('Training and validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend(["Training Loss","Validation Loss"])
plt.show()

Epochs output

Num Params :  3273585
Epoch 1/50
32/32 [==============================] - 97s 3s/step - loss: 0.7618 - acc: 0.4925 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 2/50
32/32 [==============================] - 70s 2s/step - loss: 0.6916 - acc: 0.5330 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 3/50
32/32 [==============================] - 71s 2s/step - loss: 0.6946 - acc: 0.5250 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 4/50
32/32 [==============================] - 72s 2s/step - loss: 0.6911 - acc: 0.5205 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 5/50
32/32 [==============================] - 73s 2s/step - loss: 0.6949 - acc: 0.5095 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 6/50
32/32 [==============================] - 74s 2s/step - loss: 0.6933 - acc: 0.5170 - val_loss: 0.6933 - val_acc: 0.5000
Epoch 7/50
32/32 [==============================] - 74s 2s/step - loss: 0.6926 - acc: 0.5190 - val_loss: 0.6932 - val_acc: 0.5000
Epoch 8/50
32/32 [==============================] - 72s 2s/step - loss: 0.6936 - acc: 0.5055 - val_loss: 0.6933 - val_acc: 0.5000
Epoch 9/50
32/32 [==============================] - 74s 2s/step - loss: 0.6940 - acc: 0.5120 - val_loss: 0.6933 - val_acc: 0.5000
Epoch 10/50
32/32 [==============================] - 72s 2s/step - loss: 0.6920 - acc: 0.5265 - val_loss: 0.6935 - val_acc: 0.5000
Epoch 11/50
32/32 [==============================] - 73s 2s/step - loss: 0.6929 - acc: 0.4975 - val_loss: 0.6934 - val_acc: 0.5000
Epoch 12/50
32/32 [==============================] - 92s 3s/step - loss: 0.6935 - acc: 0.5035 - val_loss: 0.6933 - val_acc: 0.5000
Epoch 13/50
32/32 [==============================] - 89s 3s/step - loss: 0.6924 - acc: 0.5115 - val_loss: 0.6935 - val_acc: 0.5000
Epoch 14/50
32/32 [==============================] - 72s 2s/step - loss: 0.6929 - acc: 0.5095 - val_loss: 0.6935 - val_acc: 0.5000
Epoch 15/50
32/32 [==============================] - 71s 2s/step - loss: 0.6928 - acc: 0.5175 - val_loss: 0.6940 - val_acc: 0.5000
Epoch 16/50
32/32 [==============================] - 71s 2s/step - loss: 0.6916 - acc: 0.5170 - val_loss: 0.6941 - val_acc: 0.5000
Epoch 17/50
32/32 [==============================] - 71s 2s/step - loss: 0.6922 - acc: 0.5205 - val_loss: 0.6945 - val_acc: 0.5000
Epoch 18/50
32/32 [==============================] - 71s 2s/step - loss: 0.6894 - acc: 0.5280 - val_loss: 0.6949 - val_acc: 0.5000
Epoch 19/50
32/32 [==============================] - 71s 2s/step - loss: 0.6887 - acc: 0.5205 - val_loss: 0.6956 - val_acc: 0.5000
Epoch 20/50
32/32 [==============================] - 72s 2s/step - loss: 0.6870 - acc: 0.5400 - val_loss: 0.6963 - val_acc: 0.5000
Epoch 21/50
32/32 [==============================] - 72s 2s/step - loss: 0.6864 - acc: 0.5380 - val_loss: 0.6974 - val_acc: 0.5000
Epoch 22/50
32/32 [==============================] - 74s 2s/step - loss: 0.6836 - acc: 0.5505 - val_loss: 0.6965 - val_acc: 0.5000
Epoch 23/50
32/32 [==============================] - 74s 2s/step - loss: 0.6722 - acc: 0.5640 - val_loss: 0.6903 - val_acc: 0.5280
Epoch 24/50
32/32 [==============================] - 72s 2s/step - loss: 0.6712 - acc: 0.5755 - val_loss: 0.6980 - val_acc: 0.5100
Epoch 25/50
32/32 [==============================] - 72s 2s/step - loss: 0.6709 - acc: 0.5735 - val_loss: 0.6946 - val_acc: 0.5180
Epoch 26/50
32/32 [==============================] - 74s 2s/step - loss: 0.6687 - acc: 0.5890 - val_loss: 0.7016 - val_acc: 0.5120
Epoch 27/50
32/32 [==============================] - 73s 2s/step - loss: 0.6626 - acc: 0.5825 - val_loss: 0.7043 - val_acc: 0.5060
Epoch 28/50
32/32 [==============================] - 72s 2s/step - loss: 0.6675 - acc: 0.5830 - val_loss: 0.7053 - val_acc: 0.5060
Epoch 29/50
32/32 [==============================] - 72s 2s/step - loss: 0.6580 - acc: 0.5895 - val_loss: 0.7074 - val_acc: 0.5100
Epoch 30/50
32/32 [==============================] - 72s 2s/step - loss: 0.6632 - acc: 0.5875 - val_loss: 0.7036 - val_acc: 0.5200
Epoch 31/50
32/32 [==============================] - 73s 2s/step - loss: 0.6705 - acc: 0.5685 - val_loss: 0.6877 - val_acc: 0.5400
Epoch 32/50
32/32 [==============================] - 73s 2s/step - loss: 0.6595 - acc: 0.5880 - val_loss: 0.6821 - val_acc: 0.5520
Epoch 33/50
32/32 [==============================] - 72s 2s/step - loss: 0.6751 - acc: 0.5600 - val_loss: 0.7052 - val_acc: 0.5020
Epoch 34/50
32/32 [==============================] - 72s 2s/step - loss: 0.6536 - acc: 0.6015 - val_loss: 0.6853 - val_acc: 0.5580
Epoch 35/50
32/32 [==============================] - 72s 2s/step - loss: 0.6549 - acc: 0.6010 - val_loss: 0.6675 - val_acc: 0.5860
Epoch 36/50
32/32 [==============================] - 73s 2s/step - loss: 0.6530 - acc: 0.6050 - val_loss: 0.7023 - val_acc: 0.5160
Epoch 37/50
32/32 [==============================] - 75s 2s/step - loss: 0.6559 - acc: 0.5910 - val_loss: 0.6965 - val_acc: 0.5380
Epoch 38/50
32/32 [==============================] - 73s 2s/step - loss: 0.6423 - acc: 0.6225 - val_loss: 0.6719 - val_acc: 0.5920
Epoch 39/50
32/32 [==============================] - 72s 2s/step - loss: 0.6426 - acc: 0.6135 - val_loss: 0.7164 - val_acc: 0.5260
Epoch 40/50
32/32 [==============================] - 73s 2s/step - loss: 0.6430 - acc: 0.6080 - val_loss: 0.6936 - val_acc: 0.5640
Epoch 41/50
32/32 [==============================] - 72s 2s/step - loss: 0.6440 - acc: 0.5980 - val_loss: 0.6894 - val_acc: 0.5720
Epoch 42/50
32/32 [==============================] - 72s 2s/step - loss: 0.6431 - acc: 0.6135 - val_loss: 0.7083 - val_acc: 0.5560
Epoch 43/50
32/32 [==============================] - 73s 2s/step - loss: 0.6344 - acc: 0.6205 - val_loss: 0.6800 - val_acc: 0.5860
Epoch 44/50
32/32 [==============================] - 72s 2s/step - loss: 0.6687 - acc: 0.5735 - val_loss: 0.6699 - val_acc: 0.5940
Epoch 45/50
32/32 [==============================] - 72s 2s/step - loss: 0.6344 - acc: 0.6190 - val_loss: 0.7070 - val_acc: 0.5620
Epoch 46/50
32/32 [==============================] - 72s 2s/step - loss: 0.6340 - acc: 0.6160 - val_loss: 0.7012 - val_acc: 0.5740
Epoch 47/50
32/32 [==============================] - 74s 2s/step - loss: 0.6424 - acc: 0.5985 - val_loss: 0.7255 - val_acc: 0.5340
Epoch 48/50
32/32 [==============================] - 73s 2s/step - loss: 0.6449 - acc: 0.5985 - val_loss: 0.7120 - val_acc: 0.5420
Epoch 49/50
32/32 [==============================] - 72s 2s/step - loss: 0.6437 - acc: 0.6120 - val_loss: 0.6708 - val_acc: 0.6140
Epoch 50/50
32/32 [==============================] - 73s 2s/step - loss: 0.6497 - acc: 0.6030 - val_loss: 0.7339 - val_acc: 0.5280

Graph output

",50737,,50737,,11/3/2021 6:49,11/3/2021 6:49,Is the graph considered as overfit?,,0,6,,,,CC BY-SA 4.0 32257,1,32267,,11/3/2021 2:54,,3,154,"

I'm reading the paper Deterministic Policy Gradient Algorithms, David Silver et al.

First of all, in the introduction, the author says that

It was previously believed that the deterministic policy gradient did not exist

But I wonder why that is. The general version of the policy gradient theorem does not place restrictions on the policy $\pi$. So, if we choose the policy $\pi$ to be a Dirac measure, that is, $\pi(\cdot | s) = \delta_{a}$ for some $a \in \mathcal{A}$, then it is exactly a deterministic policy, and we should be able to apply the usual policy gradient theorem.

Indeed, in Theorem 2, they showed that the deterministic policy gradient theorem and the usual (stochastic) policy gradient theorem match in the limit of zero variance. (In fact, I can't understand that statement rigorously, because a policy is a probability "measure", while variance is a property of a "random variable".) However, my computation below shows some contradiction.

Let $\pi(\cdot | s) = \delta_a$, a Dirac measure for some atom $a \in \mathcal{A}$. Following the notation of the paper DPG, a policy gradient theorem says

$$\nabla_{\theta}J(\pi_{\theta}) = \mathbb{E}_{s, a}\left[\nabla_{\theta}\log \pi_{\theta}(a \mid s) \, Q^{\pi}(s, a)\right] $$

The definition of the integral gives $$\nabla_\theta J(\pi_{\theta}) = \int_{\mathcal{S}, \mathcal{A}} \pi_{\theta}(da \mid s) \, \rho^{\pi}(ds) \, \nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi}(s,a). $$ But since $\pi_\theta$ has a single atom at $a \in \mathcal{A}$, this is the same as $$\int_\mathcal{S} \rho^{\pi}(ds) \, \nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi}(s, a) = \mathbb{E}_s\left[\nabla_\theta \log \pi_\theta(a \mid s) \, Q^{\pi}(s, a)\right].$$

(NOTE: the last term looks the same as the first line of the equation, but we removed $a$ from the expectation by fixing $a$ to be the atom corresponding to $s$.) However, note that $\log \pi_\theta(a \mid s) = \log 1 = 0$, since $\pi_\theta(a \mid s) = \delta_a(\{a\}) = 1$ as we defined! Thus, $\nabla_\theta \log \pi_\theta(a \mid s) = 0$, so we reach the conclusion that the gradient vanishes.

However, it clearly should not vanish.

Can anyone help me?

",50199,,2444,,5/11/2022 11:26,5/11/2022 11:26,Why exactly was previously believed that the deterministic policy gradient did not exist?,,1,0,,,,CC BY-SA 4.0 32258,1,,,11/3/2021 4:21,,2,401,"

In figure 6.3 (shown below) from Reinforcement Learning: An Introduction (second edition) by Sutton and Barto, SARSA is shown to perform worse asymptotically (after 100k episodes) than in the interim (after 100 episodes) for larger values of alpha (alpha > 0.9). The graph is for the cliff walking gridworld example whose description is also given (from the paper by van Seijen et al).

As the image mentions, the image is taken from a paper by van Seijen and others titled "A Theoretical and Empirical Analysis of Expected Sarsa". In the image below from the van Seijen paper from Section VII A Discussion, the authors mention that the reason for the better interim performance of SARSA as compared to its asymptotic performance for larger values of alpha, is the divergence of Q-values. The authors, however, fail to mention the reason for the divergence.

What would be the reason that SARSA diverges but not Expected SARSA or Q-learning?

As I see it, SARSA might have a higher variance than Expected SARSA, but on average it should behave the same as Expected SARSA.

Additionally, shouldn't Q-learning be at greater risk of diverging Q values since, in its update, we maximise over actions (and I have in fact seen a number of instances where there is a problem of diverging Q values in DQNs)?

The majority of papers I have looked at only talk about the problem from the function approximation perspective.

",46615,,2444,,11/3/2021 12:28,7/31/2022 15:02,Why would SARSA diverge (but not Expected SARSA or Q-learning)?,,1,0,,,,CC BY-SA 4.0 32260,1,32262,,11/3/2021 7:41,,1,68,"

Batch gradient descent is extremely slow for large datasets, but it can find the lowest possible value for the cost function. Stochastic gradient descent is relatively fast, but it kind of finds the general area where convergence happens and it kind of oscillates around that area.

Is it possible to use stochastic gradient descent at the beginning and find the way to a general convergence and then use batch gradient descent on only a few training examples out of the huge dataset to get even closer to the exact point of convergence?

I know that a model with a cost function that's a bit away from the lowest value for the cost function performs well in stochastic gradient descent, but assuming you want better results, will this work well?

",50742,,2444,,11/3/2021 12:16,11/3/2021 12:16,"Is it possible to use stochastic gradient descent at the beginning, then switch to batch gradient descent with only a few training examples?",,1,3,,,,CC BY-SA 4.0 32262,2,,32260,11/3/2021 10:03,,2,,"

There is a trade-off between the:

  • Memory capacity of computation device
  • Quality of gradient approximation
  • Generalization ability of the network

Memory capacity

I would say that it is possible to process the whole dataset at once only for a small enough dataset and image resolution (or any other measure of sample size - text sequence length, number of points in a point cloud, whatever). Indeed, one can compute the update by accumulating several gradients (this functionality is provided in PyTorch Lightning) and only then update the parameters, but this would be rather slow.

For a dataset of ImageNet size, you would need to wait several hours (on a single or a few GPUs) to make a single update, which is prohibitive.
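For illustration, the gradient-accumulation idea mentioned above looks roughly like this in plain PyTorch (a minimal sketch of my own, not the PyTorch Lightning API):

import torch
from torch import nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.MSELoss()

# fake data split into 8 mini-batches; the update uses the averaged gradient over all of them
batches = [(torch.randn(32, 10), torch.randn(32, 1)) for _ in range(8)]

optimizer.zero_grad()
for x, y in batches:
    loss = criterion(model(x), y) / len(batches)  # scale so the accumulated gradients average out
    loss.backward()                               # .grad buffers accumulate across mini-batches
optimizer.step()                                  # one parameter update for all 8 mini-batches
optimizer.zero_grad()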

Usually, one is tempted to take large batches in order to traverse the training dataset as fast as possible, due to the large parallelization ability of modern computation accelerators (GPUs, TPUs, etc.). A typical batch size is of the order of 1k-2k on ImageNet.

Generalization

There is quite a lot of research devoted to the optimal choice of batch size. A smaller batch size is said to be beneficial in the initial stage of training, since it allows the optimizer to find better and wider optima. Wider optima lead to more robust behavior of the training procedure and a smaller train-test error discrepancy, as mentioned, for instance, in this paper.

In the subsequent stage of training, when one is close to the optimum, the typical strategy is to decrease the amount of "noise" in SGD. This can be achieved in several ways, most commonly by decaying the learning rate or by increasing the batch size (averaging more gradients per update).

Assuming that the individual gradients are uncorrelated (which may be a good or bad assumption), the standard deviation of the mini-batch gradient will scale roughly as: $$ \mathcal{O}\left(\frac{1}{\sqrt{N}}\right) $$ where $N$ is the batch size.

The answer

Provided your computational resources allow it, or you can afford to accumulate gradients over a large number of batches, you can try to update the weights based on the gradient over the whole dataset. But learning rate decay is a simpler strategy.

One step on the dataset with 1M samples with learning rate $\eta$ is roughly equivalent to 1K steps with batch size 1K and learning rate $\eta / 1000$.

",38846,,,,,11/3/2021 10:03,,,,0,,,,CC BY-SA 4.0 32264,2,,32258,11/3/2021 11:59,,2,,"

I think a useful piece of information to answer this question is a representation of the safe and optimal policies that can be learned on the cliff grid world. SARSA learns the safe path, while Q-learning (and, in the long run, also Expected SARSA) learns the optimal path. The reason lies in how the different algorithms select and evaluate the next action in their updates.

"shouldn't Q-learning be at greater risk of diverging Q values since in it's update, we maximise over actions"

This is actually the reason why Q-learning doesn't suffer from divergence issues in this simple world. Always updating towards the action that maximizes the estimated value is what allows Q-learning to learn the optimal policy directly. And you can see it in the graphs: the learning rate alpha doesn't really affect the final asymptotic performance.

The same logic applies to Expected SARSA. Remember that SARSA bootstraps from the single (possibly exploratory) action it actually samples from its policy, while Expected SARSA takes the expected value under the policy into account, which makes Expected SARSA closer to Q-learning than SARSA itself.

  • SARSA: $$Q(S_{t}, A_{t}) \leftarrow Q(S_{t}, A_{t}) + \alpha \left( R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_{t}, A_{t}) \right) $$
  • Q-learning: $$Q(S_{t}, A_{t}) \leftarrow Q(S_{t}, A_{t}) + \alpha \left( R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_{t}, A_{t}) \right) $$
  • Expected SARSA: $$Q(S_{t}, A_{t}) \leftarrow Q(S_{t}, A_{t}) + \alpha \left( R_{t+1} + \gamma \sum_{a} \pi(a \mid S_{t+1}) Q(S_{t+1}, a) - Q(S_{t}, A_{t}) \right) $$
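In code, the three updates differ only in how the bootstrap target is built; here is a minimal tabular sketch (my own illustration, not from the papers), with Q a NumPy array of shape [n_states, n_actions]:

import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha, gamma):
    target = r + gamma * Q[s_next, a_next]            # bootstrap from the sampled next action
    Q[s, a] += alpha * (target - Q[s, a])

def q_learning_update(Q, s, a, r, s_next, alpha, gamma):
    target = r + gamma * Q[s_next].max()               # bootstrap from the greedy next action
    Q[s, a] += alpha * (target - Q[s, a])

def expected_sarsa_update(Q, s, a, r, s_next, pi_s_next, alpha, gamma):
    # pi_s_next[a'] = probability of taking a' in s_next under the current policy
    target = r + gamma * np.dot(pi_s_next, Q[s_next])  # bootstrap from the expectation over actions
    Q[s, a] += alpha * (target - Q[s, a])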

This still doesn't explain why SARSA fails with high learning rates, though. To answer that, we just need to look at what we have already said in a different light:

  • SARSA is not designed to depend on high future rewards
  • SARSA prefers policies that minimize risks

Combine these 2 points with a high learning rate, and it's not hard to imagine an agent struggling to learn that there is a goal cell G after the cliff, because the high learning rate keeps assigning high value to every action that merely keeps the agent on the grid. Unfortunately for the agent, moving around randomly earns rewards, but it also increases the chances of falling into the cliff. A small learning rate, on the other hand, pushes down the value of actions that simply keep the agent on the grid, and gives SARSA a better chance to learn that moving down from the cell above the bottom-right corner is actually the real deal.

Hope this gives a bit more clarity. I suggest trying one of the many notebooks out there to visualize the policies learned by the algorithms, which is much more insightful than simply looking at graphs, even though it is understandable that authors focus on the latter.

",34098,,34098,,11/3/2021 12:05,11/3/2021 12:05,,,,1,,,,CC BY-SA 4.0 32265,1,,,11/3/2021 14:14,,2,135,"
Background

I am computing the attribution scores for a simple LSTM model using Integrated Gradients. This method defines the contribution of a feature to a model prediction by integrating over the gradients along a path between the input and a fixed baseline:

$$IG_i(x) = (x_i - x'_i) \cdot\int_{\alpha=0}^1 \frac{\partial F(x'+\alpha(x-x'))}{\partial x_i}d\alpha$$

A common way of measuring the quality of the generated attributions is via the completeness axiom, which states that:

$$\sum_i IG_i(x) = F(x) - F(x')$$

The key to computing the IG scores is the approximation of the path integral, which can be done via a Riemann sum or a similar quadrature method. In Section 5 of the IG paper, it is stated that, in practice, between 50 and 300 interpolation steps are sufficient to obtain IG scores that converge to satisfy the completeness axiom.
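For reference, this is roughly how I approximate the integral (a simplified sketch of my own code, assuming a generic differentiable PyTorch model):

import torch

def integrated_gradients(model, x, baseline, steps=100):
    alphas = torch.linspace(0.0, 1.0, steps)
    grads = []
    for alpha in alphas:
        point = (baseline + alpha * (x - baseline)).detach().requires_grad_(True)
        output = model(point)
        grads.append(torch.autograd.grad(output.sum(), point)[0])
    avg_grad = torch.stack(grads).mean(dim=0)     # Riemann-style average along the path
    return (x - baseline) * avg_grad              # completeness: should sum to F(x) - F(x')

# toy check of the completeness axiom
model = torch.nn.Sequential(torch.nn.Linear(4, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1))
x, baseline = torch.randn(1, 4), torch.zeros(1, 4)
ig = integrated_gradients(model, x, baseline)
print(ig.sum().item(), (model(x) - model(baseline)).item())   # approximately equal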

Issue

I am now testing the IG attributions on a simple LSTM model (1-layer, 16 hidden units). For shorter inputs (<20 tokens), convergence is reached in a reasonable number of steps, and the approximation of the integral is stable. However, when the length of the input increases, I find that the integral approximation diverges when the number of interpolation steps is increased! This can be seen in the following plot (N.B. the y axis is logarithmic):

Question

My question is: why does the number of input tokens to the LSTM have an impact on the convergence of integrated gradients?

It is stated in footnote 1 of the IG paper that the completeness axiom depends on whether the model satisfies Lebesgue's integrability condition. It would surprise me, however, that increasing the number of input tokens would dissatisfy this constraint: would it be possible that the model has become too nonlinear for numerical integration to still work? If so, are there alternative numerical integration methods that could be used here, instead of Riemann approximations or the Gauss-Legendre quadrature?

",50749,,50749,,11/4/2021 15:48,11/15/2021 14:25,Why does the number of input tokens to an LSTM have an impact on the convergence of Integrated Gradients?,,0,4,,,,CC BY-SA 4.0 32266,1,32268,,11/3/2021 14:21,,1,85,"

What is the name of this letter $\mathcal{J}$ in the following deep learning equation? And what alphabet it is from?

$$\mathcal{J} = \frac{1}{m} \sum_{i=1}^m \mathcal{L}^{(i)}$$

",50751,,2444,,11/4/2021 0:18,11/4/2021 13:14,What is the name of this letter $\mathcal{J}$?,,1,1,,,,CC BY-SA 4.0 32267,2,,32257,11/3/2021 14:44,,2,,"

Actually, your result that the gradient is 0 is correct given your formulation. Indeed, that is why one might have believed that the deterministic policy gradient didn't exist.

The term $\nabla_{\theta}\log \pi(a \, | \, s)$ is a type of gradient estimator known as a likelihood ratio, and it assumes that the support of $\pi$ does not depend on $\theta$. In other words, there must be some non-zero probability of choosing every possible action in $\mathcal{A}$, and $\theta$ encodes those probabilities. In your construction, the support of the policy is a single action, and therefore the parameters don't do anything at all. You always have probability 1 of choosing that action. We can't let different parameters choose different actions, as that violates the assumptions of the likelihood ratio and will lead to a biased gradient.

But why can't we let the support of a likelihood ratio depend on $\theta$? I won't prove this in a rigorous way here but you can consider \begin{align*} \mathbb{E}_{y} [ \nabla_{\theta}\, \log p_{\theta}(y)] = \mathbb{E}_{y} \left[ \underset{\epsilon \rightarrow 0}{\lim} \dfrac{\log p_{\theta+\epsilon}(y) - \log p_{\theta}(y)}{\epsilon}\right] \end{align*} and see that if $p_{\theta + \epsilon}(y) = 0$, then $\log p_{\theta+\epsilon}(y) = -\infty$.

I think a more instructive example is to consider what can go wrong if we just go ahead and let the support depend on $\theta$ anyway. Let's consider $\nabla_{\theta} \, \mathbb{E} [u]$ where $u \sim \text{Uniform}(0, \theta)$ (so $\theta$ is a scalar, and $u$ has support on $[0, \theta ]$). We know $\mathbb{E} [u] = \frac{1}{2}\theta$ so $\nabla_{\theta}\mathbb{E} [u] = \frac{1}{2}$. You should work out for yourself that the gradient estimator (using a likelihood ratio) gives the utterly wrong result of $\mathbb{E}[\nabla_\theta \log p(u)] = -1/\theta$

",47080,,,,,11/3/2021 14:44,,,,0,,,,CC BY-SA 4.0 32268,2,,32266,11/3/2021 15:33,,5,,"

It's an uppercase "J" from the math calligraphy alphabet, i.e. \mathcal{J} in latex.

$\mathcal{J}$

",47080,,2444,,11/4/2021 13:14,11/4/2021 13:14,,,,0,,,,CC BY-SA 4.0 32269,2,,31997,11/3/2021 20:05,,0,,"

I've gotten no answer on this, but after some reflection I've come to accept R^2 as regression's analogue to accuracy. I have no idea why it is not used by more people in deep learning, but I recently started using it and it's exactly what I was hoping for! It ranges up to 1 (typically between 0 and 1 for any useful model; it can even go negative for a model worse than predicting the mean) and tells you in no uncertain terms how useful the model is and when it becomes unrealistic to improve it further (e.g. at R^2 = 0.99). Additionally, R^2 = 0 when the model only predicts the mean of the data, which immediately reveals an important and recurring failure mode in regression that can otherwise go undetected for a while.
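For anyone who wants to try it, computing it is a one-liner (a small example of my own, using scikit-learn's r2_score; you can equally compute 1 - SS_res / SS_tot by hand):

import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.5, 10.0])
y_pred = np.array([2.8, 5.3, 7.0, 9.6])

print(r2_score(y_true, y_pred))                                 # close to 1: useful model
print(r2_score(y_true, np.full_like(y_true, y_true.mean())))    # exactly 0: predicting the mean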

",42996,,,,,11/3/2021 20:05,,,,0,,,,CC BY-SA 4.0 32270,1,32273,,11/3/2021 21:10,,2,655,"

Does ViT handle arbitrary sequence lengths using masking, the same way the normal Transformer does?

The ViT paper doesn't mention anything about it, so I assume it uses masking like the normal Transformer.

",50755,,2444,,11/4/2021 12:29,11/4/2021 12:29,Do Vision Transformers handle arbitrary sequence lengths the same way as normal Transformers?,,1,1,,,,CC BY-SA 4.0 32271,1,,,11/3/2021 21:48,,1,45,"

I have really quite hard difficulties to understand what is actually going on in the backward pass of a CNN.

I am currently focusing on these references:

  1. https://towardsdatascience.com/forward-and-backward-propagations-for-2d-convolutional-layers-ed970f8bf602
  2. https://leonardoaraujosantos.gitbook.io/artificial-inteligence/machine_learning/deep_learning/convolution_layer

At the moment, I only want to compute the gradient which is then passed to the next lower layer.

I tried to implement the last equation on this image (from [1]):

https://miro.medium.com/max/2400/1*K2K0tfxmAlyRlqqbj4z0Rg@2x.png

For me, this formula looks like 6 nested for-loops.

The first for the channel c, the second for the output height i, the third for the output height j, the fourth for the number of kernels f, the fith for the kernel height m and the sixth for the kernel width n.

Am I wrong? I tried to implement this but I always get an out of bounds error.

Someone here who doesn't mind going a bit more into detail?

Any tips are greatly appreciated

",50757,,,,,11/3/2021 21:48,CNN: Difficulties understanding backward pass derivatives,,0,1,,,,CC BY-SA 4.0 32273,2,,32270,11/4/2021 6:49,,3,,"

Yes, they can handle sequences of arbitrary length, but with some caveats.

In the paper Training data-efficient image transformers & distillation through attention, the authors train models at resolution 224x224 (1 + 14x14 tokens) and then fine-tune at 384x384 (1 + 24x24 tokens).

Weights to produce queries, keys, values, as well as feedforward layers, operate only on a single token and are agnostic to the sequence length.

However, the size of the sequence is required in the positional embeddings, where one has a specific weight for each location of the token.

In order to make this construction work for inputs of other sizes, one needs to transform this positional embedding in a certain way. Bicubic interpolation of the positional embeddings, as used in DeiT, works pretty well. One can use simpler bilinear or nearest-neighbor interpolation, but it seems this harms accuracy.
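A rough sketch (my own, in PyTorch) of what this resizing looks like for the patch-token embeddings, e.g. going from a 14x14 to a 24x24 grid; the class-token embedding is kept as is, and the embedding dimension 192 is just an example:

import torch
import torch.nn.functional as F

old_grid, new_grid, dim = 14, 24, 192
pos_embed = torch.randn(1, old_grid * old_grid, dim)                    # (1, 196, 192), without the class token

p = pos_embed.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)   # (1, 192, 14, 14)
p = F.interpolate(p, size=(new_grid, new_grid), mode="bicubic", align_corners=False)
new_pos_embed = p.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)  # (1, 576, 192)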

In my own experience, when I took DeiT-tiny (base accuracy 72.2%) and fine-tuned it for several epochs at a new resolution, it reached 77.0% accuracy.

",38846,,,,,11/4/2021 6:49,,,,0,,,,CC BY-SA 4.0 32274,1,,,11/4/2021 8:26,,0,44,"

I have a model for binary classification that includes 2 linear layers with a ReLU activation function and a sigmoid in the last layer. The input features are FastText word embeddings, frequency, and statistical signals.

This model has a 93% F1-score, and I want to add explanations to it, but I don't know how to start.

My question is: which explainability methods or papers are suitable for these complex input features?

I appreciate any advice to achieve this goal.

",50764,,18758,,11/5/2021 0:57,11/5/2021 0:57,Explainable AI for complex input features,,0,2,,,,CC BY-SA 4.0 32277,2,,32207,11/4/2021 12:31,,1,,"

I think you need to look into semi-supervised learning, which combines supervised and unsupervised learning for problems where large labelled datasets are not available. To use this family of techniques, you need a small labelled dataset and a large unlabelled one.

Create a dataset of good athletes, let's say the ones who are professional, together with the traits you are going to use as input features. This will be your positively labelled dataset. For the negatively labelled one, you can probably find data on people who failed to become professional.

Now, you can use this dataset with a number of real, or generated, samples of people to train a model that can be used to make your desired predictions (use statistical data as reference when generating this dataset).

The quality and size of both datasets will determine how long it'll take to train the model and how good the model can become.

There are several approaches to semi-supervised learning. Read through this survey to get started.

",31879,,,,,11/4/2021 12:31,,,,0,,,,CC BY-SA 4.0 32278,1,,,11/4/2021 12:50,,1,28,"

I am currently training a neural network in a self-supervised fashion, using Contrastive Loss and I want to use that network then to fine-tune it in a classification task with a small fraction of the data with labels. I'm basing this project on the paper titled A Simple Framework for Contrastive Learning of Visual Representations by Ting Chen et al.

My question is about a very specific thing that I think is causing me some problems. After finishing the self-supervised training, I extract the embeddings of a batch of data and get the following. These statistics are computed over the flattened embeddings of the whole batch.

Mean: -23.090446
Std: 91.78753
Min: -710.24493
Max: 651.8682

What bugs me is that when I extract embeddings from a ResNet, for example, I get much lower values (the highest value in absolute terms doesn't usually go beyond 15), whereas here I'm getting much larger values.

When I use these embeddings for similarity search, it looks like it works, but my question is whether this scale might cause problems when adding another layer on top and using it for a classification task.

I have the feeling that something here is wrong but I can not spot exactly what it is.

Could you give me any advice on this?

",33873,,18758,,11/5/2021 0:49,11/5/2021 0:49,How do the scale of an embedding affects a downstream task?,,0,0,,,,CC BY-SA 4.0 32279,1,,,11/4/2021 13:10,,1,86,"

I understand some of the inherent dangers involved with AGI and advanced machine learning. While I can see some of the more low-level risks associated with AI coming to fruition (deep-fakes, biased algorithms, etc.), some of the more existential dangers seem far-fetched and lack empirical examples. As someone outside of AI development circles and without a background in comp-sci, I would like to know how realistic those working in the field of AI find these fears. Are there any convincing hypotheticals about the risks of a superintelligence?

I've read some Stuart Russel and have recently been watching Robert Miles videos on the topic. Both of these observers seem worried. Is this opinion widely held or is it seen as a bit extreme among developers?

Provably Beneficial Artificial Intelligence by Stuart Russell

Robert Miles YouTube channel

",50309,,18758,,11/5/2021 0:44,11/5/2021 0:44,Are the existential dangers of AI exaggerated?,,0,1,,,,CC BY-SA 4.0 32280,1,,,11/4/2021 13:47,,0,174,"

I have unlabeled credit card transaction data that has the following columns:

Transaction_ID     Frequency       Amount    Fees
   192831             21            829       23
   382912             14            920       24
   483921            839           24059      87

Eventually, I'd like to build a deep learning model (e.g. an LSTM) that can tell me whether a transaction (row) has "high", "moderate", or "low" risk. However, since the data is unlabeled, I believe I need to label it first before feeding it into the deep learning model.

For example, transactions that have small frequency and amount values like the first two rows need to be labeled as "low (0)" while transactions that have large frequency and amount like the last row should be labeled as "high (2)". If both frequency and amount have moderate values, the row will be labeled as "moderate(1)".

I wonder if it is okay to use other machine learning techniques such as K-Means clustering to label the data before I feed the data into the deep learning model. Is it okay to use one Machine Learning algorithm (K-means) to label the data and feed the same labeled data into another Deep Learning model (LSTM)? Or is it a bad practice? For example, if the first model (K-means) is biased, will that bias(error) be carried over from the first model to the second model (LSTM)?
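To make the idea concrete, this is roughly the labeling step I have in mind (a hypothetical sketch with scikit-learn, using only the three example rows above):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Frequency, Amount, Fees for the three example transactions above
X = np.array([[21, 829, 23],
              [14, 920, 24],
              [839, 24059, 87]], dtype=float)

X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
# The cluster ids are arbitrary (0, 1, 2); they would still need to be mapped to
# low/moderate/high, e.g. by ranking the cluster centroids, before training the LSTM.
print(labels)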

If it is a bad practice to use two different ML technologies, what else can I do to label the data?

",50769,,18758,,11/5/2021 0:40,4/4/2022 1:02,How to label unsupervised data for deep learning multi-classification,,1,0,,,,CC BY-SA 4.0 32282,1,,,11/4/2021 17:18,,2,71,"

The conventional way of creating a CNN is to use an increasing number of neurons:

from tensorflow.keras import layers, models

# input_size is assumed to be defined elsewhere, e.g. (150, 150, 3)
model = models.Sequential([
  layers.Conv2D(32,(3,3),activation='relu',input_shape=input_size),
  layers.MaxPooling2D((2,2)),
  layers.Conv2D(64,(3,3),activation='relu'),
  layers.MaxPooling2D((2,2)),
  layers.Conv2D(128,(3,3),activation='relu'),
  layers.MaxPooling2D((2,2)),
  layers.Conv2D(128,(3,3),activation='relu'),
  layers.MaxPooling2D((2,2)),
  layers.Flatten(),
  layers.Dense(128,activation='relu'),
  layers.Dense(64,activation='relu'),
  layers.Dense(1,activation='sigmoid')
])

where in this case the number of neurons increases from 32 to 64 to 128. However, I have also found a paper https://pubmed.ncbi.nlm.nih.gov/33532975/ that uses a decreasing number of neurons, i.e. 128, 64, 32, as the network goes deeper. But in this paper not much explanation is given of how the network works with a decreasing number of neurons. Does a decreasing number of neurons assume that "there are fewer important features to be captured as the network goes deeper"?

Question: Can someone explain to me

  1. how the increasing number of neurons works?
  2. how the decreasing number of neurons works, and why it is not the common practice?
  3. referring to 2, what keywords should I search for to find articles or writings related to it?
",50774,,,,,11/4/2021 17:18,Decreasing number of neurons in CNN,,0,0,,,,CC BY-SA 4.0 32283,2,,32280,11/4/2021 17:45,,1,,"

"is it okay to use another machine learning technology such as K-Means clustering to label the data?"

In computer vision there's an entire branch called automatic image annotation dedicated to this topic. And after a 2-second search online I found a tutorial that suggests precisely what you want to try. So yes, on paper it's OK to try; the real question, though, should be:

Will it work?

And the unfortunate answer is: no. Or to be less harsh: only with toy datasets, not real ones.

Moreover, take a step back and think about the possible outcomes of the approach and their implications. If you train k-means clustering on your data, you have 2 possible scenarios:

  • it works: then you have an unsupervised model that does the job, so why bother training a supervised model that at best will perform as well as the one you already have?
  • it doesn't work: you made your life harder and you're at the same point where you started.

what else can I do to label the data?

Even though it's not the answer you want to hear:

Labeling is like removing a plaster from a cut: you spend hours thinking about how to do it without suffering, only to realize at the end that the best way is to just do it.

Answers to the most predictable counter argument, the data are too many:

  • Rely on online crowdsourcing platforms: it seems that you already have rules to use as an annotation guideline. But it costs money.
  • Label just a part of the data, train, and repeat till you get a good model. You can also leverage active learning to optimize and make the labeling process smarter.
  • Dig the web till you find an annotated dataset.
  • Move to unsupervised learning for the training itself. At the very least you'll rely on theories and models designed precisely for unlabelled data, without having to train several models with a cascade of performance drops.

As a final note, there are several reasons to just start labeling the data yourself, the main one being that you'll have control over the quality of the annotations. Many people underestimate, or don't even know about, the importance of inter-annotator agreement metrics like Cohen's Kappa, which estimate not only the quality of a dataset but also how hard the task you want to automate is for a human. This is extremely relevant, because if for humans the best agreement score is 80% instead of 100%, then there's no way you'll obtain more than that from a model, due to the personal biases the annotators introduce into the data. And knowing that, you'll be happy with your 0.8 F-score without wasting hours wondering what you did wrong and why the model is not performing better. Hope this is somehow helpful, and good luck with your data!

",34098,,,,,11/4/2021 17:45,,,,0,,,,CC BY-SA 4.0 32284,2,,28728,11/4/2021 19:10,,0,,"

kl_coeff_val = kl_coeff is the multiplier in the KL penalty term in the loss. So increasing this coefficient means increasing the penalty loss, which should lead to greater KL reduction after update.

",50776,,,,,11/4/2021 19:10,,,,0,,,,CC BY-SA 4.0 32285,1,,,11/4/2021 19:50,,-1,61,"

I have split the database available into 70% training, 15% validation, and 15% test, using holdout validation. I have trained the model and got the following results: training accuracy 100%, validation accuracy 97.83%, test accuracy 96.74%

In another trial for training the model, I got the following results: Training accuracy 100%, validation accuracy 97.61%, test accuracy 98.91%

The same data split is used in each run. Which model should I choose: the first case, in which the test accuracy is lower than the validation accuracy, or the second case, in which the test accuracy is higher than the validation accuracy?

",50778,,50786,,11/9/2021 16:43,11/9/2021 16:43,how to decide the optimum model?,,1,1,,,,CC BY-SA 4.0 32286,1,,,11/4/2021 20:57,,3,92,"

I built a basic neural network in MATLAB. The neural network classifies points on the X-Y axis system into two classes (0 and 1).

(I try to get the function that represents a shape from this photo)

Every so often, the values of the points change slightly and some of the points defined as class 1 become class 0, like in this photo.

Is there a way to update the neural network to fit the new data without the time required for retraining?

",50772,,11539,,11/9/2021 16:41,12/4/2022 21:02,Is there a way to update the neural network to fit the new data without the time required for retraining?,,1,1,,,,CC BY-SA 4.0 32287,1,,,11/5/2021 2:23,,1,130,"

In Linformer's proof that self-attention is low rank in their paper, I don't see how it doesn't generalize to every matrix. They don't utilize any specifics of self-attention (the entire proof feels like it's in equation 20 utilizing JL, and I do not see where characteristics of self-attention come into play).

What am I missing?

",25496,,2444,,11/5/2021 12:42,11/7/2021 19:35,Where do the characteristics of self-attention come into play in Linformer's proof that self-attention is low rank?,,0,0,,,,CC BY-SA 4.0 32288,2,,32286,11/5/2021 2:28,,0,,"

If I understand your question correctly, you want the model to generalize by being conditioned on an image. If so, you can do this by inserting a separate part of the model that takes the image as input and compresses it into a dense feature vector (this can be done in many ways; the most common is probably a CNN), then merging it with the other inputs (in your case, the x, y pairs) and training the model on that end-to-end.

",25496,,,,,11/5/2021 2:28,,,,1,,,,CC BY-SA 4.0 32289,2,,32285,11/5/2021 2:32,,1,,"

Evaluating on the test set after every change defeats the point of a train-val-test split. The reason the test set is important is that you are only supposed to evaluate on it once you think your model is good and ready, with all final model and hyperparameter decisions made.

A good description can be found in this article: https://machinelearningmastery.com/difference-test-validation-datasets/

But to sum it up: the test estimate should be unbiased, and the more you evaluate against the test set, the more you bias the result.
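As a small illustration of keeping the three sets separate (my own sketch, using scikit-learn):

import numpy as np
from sklearn.model_selection import train_test_split

X, y = np.random.randn(1000, 10), np.random.randint(0, 2, 1000)

# 70% train, 15% validation, 15% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.50, random_state=0)

# Tune models and hyperparameters against (X_val, y_val);
# evaluate the single final model on (X_test, y_test) exactly once.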

",25496,,,,,11/5/2021 2:32,,,,1,,,,CC BY-SA 4.0 32290,1,,,11/5/2021 4:28,,1,26,"

DL-PDE prescribes a way to feed a neural network data, which in turn comes up with a PDE of the form

$$u_{t}(t,x,y) = F(x,y,u,u_{x},u_{y},u_{xx},u_{xy},u_{yy},...) \hspace{0.5cm} (x,y) \in \Omega \subset \mathbb{R}^{2}, t \in [0, T]$$

I am looking for a way to feed a neural network the same data while also prescribing a number of additional variables (say 2, making the total number of variables 3), so that the neural network comes up with a system of PDEs comprising three equations in three variables.

Any leads for this?

",50785,,34098,,11/5/2021 8:28,11/5/2021 8:28,Reference needed for neural networks finding solutions of PDE's,,0,1,,,,CC BY-SA 4.0 32295,2,,28421,11/5/2021 15:42,,1,,"

Yes if we created an AI capable of modifying the reward function to "do nothing" it would almost certainly do so. But it wouldn't be very useful and we likely wouldn't make a system like that. Self preservation isn't the only possible goal, but it is a convergent instrumental goal. Self preservation is usually useful for completing other goals. AI alignment is possible, but we're statistically unlikely to get it perfectly right the first time. Corrigibility is important here so the AI will know its reward function may need to be adjusted and should be impartial to that adjustment. This prevents it killing humans who try to press the big red button or purposefully acting out so we keep adjusting it.

",50797,,,,,11/5/2021 15:42,,,,0,,,,CC BY-SA 4.0 32296,2,,2902,11/5/2021 15:53,,0,,"

An important difference between a human intelligence and an artificial intelligence is that the artificial intelligence would "think" much faster, perhaps millions of times faster than our brains do, since our neurons are slower than transistors. It also wouldn't be limited by evolutionary biology and could presumably modify itself or create a replacement AI that operates more efficiently. Repeat this modification an arbitrary number of times and it becomes superintelligent.

",50797,,,,,11/5/2021 15:53,,,,0,,,,CC BY-SA 4.0 32298,1,,,11/5/2021 21:19,,0,102,"

I am currently creating a GAN model from scratch (following this tutorial: https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-an-mnist-handwritten-digits-from-scratch-in-keras/) but I can't find out how to implement Conv2DTranspose from scratch. Is a Conv2DTranspose the same as a full convolution? If not, how would one implement it?

",43686,,2444,,11/5/2021 21:54,11/6/2021 20:41,Is a Conv2DTranspose the same as a full convolution?,,1,2,,,,CC BY-SA 4.0 32299,1,,,11/5/2021 23:17,,2,29,"

Consider the following paragraph from the chapter named Vector Calculus from the textbook titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.

Central to this chapter is the concept of a function. A function $f$ is a quantity that relates two quantities to each other. In this book, these quantities are typically inputs $x \in \mathbb{R}^D$ and targets (function values) $f(x)$, which we assume are real-valued if not stated otherwise. Here $\mathbb{R}^D$ is the domain of $f$, and the function values $f(x)$ are the image/codomain of $f$.

we can notice that the textbook is taking $\mathbb{R}^D$ as the domain for objective functions. I want to know whether it is valid in general cases.

Do the objective functions that we generally use in artificial intelligence have $\mathbb{R}^D$ as the domain?

I am guessing it would not be, since the loss functions are generally defined on datasets which can also have discrete attributes, and hence the objective function cannot be defined at every point in $\mathbb{R}^D$. So, I am guessing that the correct form of the bold statement from the quoted paragraph is "Here the domain of $f$ should be a subset of $\mathbb{R}^D$" if we intend to deal with the general case. Am I correct, or is there some arrangement, such as defining $f$ as zero where the function is not defined?

",18758,,18758,,11/5/2021 23:34,11/6/2021 5:28,Are the domains of objective functions in AI always equals to $\mathbb{R}^D$ or subset of it?,,1,0,,,,CC BY-SA 4.0 32300,1,32303,,11/5/2021 23:36,,2,179,"

I'm working on a depth estimation network. It has two outputs:

  1. A relative depth map
  2. A scalar for scaling the relative depth map into an absolute depth map. This second output uses dense layers so we cannot use variable-sized input.

We are trying to handle two different dimensions (192x256 and 256x192). The current approach is to letterbox the image, meaning apply black on the image so that it comes out to 256x256. We decided on this approach instead of center-cropping images to 192x192 because we believe we may lose valuable data with cropping.

When using letterboxes, I see two paths:

  1. Ignore the letterbox portions of the image in my loss function. The loss function will only perform calculations on the original portion of the image.
  2. Set a static value for the letterbox portion and include it as part of the loss.

Is #1 the correct approach? The network will then be able to predict any depth value for the black letterbox portions without being penalized. I'm concerned with #2 about confusing the network between the letterbox portion and actual dark portions of images.

",20338,,18758,,11/5/2021 23:40,11/6/2021 15:23,Best practice for handling letterboxed images for non fully-convolutional deep learning networks?,,1,0,,,,CC BY-SA 4.0 32302,2,,32042,11/6/2021 0:08,,0,,"

When you take a step in the DQL process, you sample a move based on the estimated Q-values of each possible action. During that step, you can restrict your sampling method so that the forbidden action has probability 0 of being chosen.
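
A minimal sketch of this (the Q-values and the forbidden-action index are illustrative), assuming you sample the action from a softmax over the estimated Q-values:

import numpy as np

q_values = np.array([1.2, 0.4, -0.3, 0.8])   # network output for the current state
forbidden = [2]                              # index of the action that is not allowed here

masked = q_values.copy()
masked[forbidden] = -np.inf                  # a forbidden action can never be selected

# softmax over the remaining actions, then sample
probs = np.exp(masked - masked.max())
probs /= probs.sum()
action = np.random.choice(len(q_values), p=probs)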

",50802,,,,,11/6/2021 0:08,,,,1,,,,CC BY-SA 4.0 32303,2,,32300,11/6/2021 0:42,,2,,"

Padding is indeed the easiest solution. And if no bias is used, then masking the extra values during the loss computation is not even necessary, since it's enough to use zero as the padding value.

You might be interested, though, in checking out Spatial Pyramid Pooling. This pooling method allows you to combine fully convolutional modules and dense layers, i.e., it can be initialized to produce a specific fixed output size while allowing varying input sizes, for both training and inference.
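
A minimal sketch of the idea (pool sizes and shapes are illustrative): the same pooling maps both of your input sizes to the same fixed-length vector.

import torch
import torch.nn.functional as F

def spatial_pyramid_pool(x, levels=(1, 2, 4)):
    # x: (batch, channels, H, W) -> (batch, channels * sum(l*l for l in levels))
    pooled = [F.adaptive_max_pool2d(x, l).flatten(start_dim=1) for l in levels]
    return torch.cat(pooled, dim=1)

# Both input sizes produce a tensor of shape (1, 64 * (1 + 4 + 16)) = (1, 1344)
print(spatial_pyramid_pool(torch.randn(1, 64, 192, 256)).shape)
print(spatial_pyramid_pool(torch.randn(1, 64, 256, 192)).shape)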

",34098,,34098,,11/6/2021 15:23,11/6/2021 15:23,,,,1,,,,CC BY-SA 4.0 32304,2,,32299,11/6/2021 5:28,,1,,"

I think that $\mathbb{R}^{D}$ is the most natural choice in practical situations, since many kinds of data can be described in this way:

  • An image is a 2d array $H \times W$ with each pixel taking values in $\mathbb{R}^{c}$ (say, $c=3$) or in a continuous subset of $\mathbb{R}^{c}$, e.g. $[-1, 1]^{c}$
  • In sequence modeling problems one assigns to each token an embedding vector in some $\mathbb{R}^{k}$. There is no a priori reason to put constraints on the embedding vectors (though one may want to clip their norm)

However, for applications in physics or other fields, by the very nature of the task an arbitrary input in $\mathbb{R}^{D}$ may not make sense, and one may be restricted to considering functions only on a certain manifold $M$ (a sphere $S^d$, hyperbolic spaces, ...)

In applications of Quantum Machine Learning the input is a vector in a Hilbert space - a system of qubits, for instance. You may be interested in consulting this review.

",38846,,,,,11/6/2021 5:28,,,,0,,,,CC BY-SA 4.0 32306,1,,,11/6/2021 12:48,,3,63,"

I am trying to understand how partial derivatives are calculated in a computational graph. I understand the reasoning behind computational graphs and I am bold enough to say I understand how they work, at least at a high level.

But what I don't know is how the partial derivatives themselves are computed, or rather, how they are implemented in code.

I have checked a few resources like these lecture slides from CS231N, this blog post or this blog post on TowardsDataScience. They explain graphs and how expressions are evaluated in a graph, but they don't explain how the partial derivatives are derived (or I didn't understand it from those explanations). For example, the blog post from TowardsDataScience says:

Next, we need to calculate the partial derivatives of each connection between operations, represented by the edges. These are the calculations of the partials of each edge:

And then they show an image with the values of the partial derivatives, but they never explain how these equations are actually computed in an implementation of the graph.

Yes, okay, I know how to calculate these partial derivatives on paper and then hardcode them in my code, but I don't know how they are actually automatically computed and implemented in the code of libraries like Torch or Theano.

Do they have some basic rules implemented in code, like, for example:

$$ \frac{\partial (a + b)}{\partial a} = \frac{\partial a}{\partial a} + \frac{\partial b}{\partial a} = 1 $$

and then decompose expressions until they reach basic elements/rules or is there another way that libraries like Torch, Theano or TF do it?

Or to put it in another way, if I have this code in Torch:

from torch import Tensor
from torch.autograd import Variable

def element(val):
    return Variable(Tensor([val]), requires_grad=True)

# Input nodes
i = element(2)
j = element(3)
k = element(5)
l = element(7)

# Middle and output layers
m = i*j
n = m+k
y = n*l

# Calculate the partial derivative
y.backward()
dj = j.grad
print(dj)

how does Torch know, that is, how does it compute internally that $ \frac{\partial y}{\partial j} = l \cdot 1 \cdot i = l \cdot i $?

",50813,,,,,11/6/2021 12:48,How are partial derivatives calculated in a computational graph?,,0,1,,,,CC BY-SA 4.0 32308,1,,,11/6/2021 15:48,,2,49,"

Imagine a game on a 10x10 grid with a good guy, a bad guy, and obstacles, i.e. essentially a maze. The goal of the bad guy is to find the good guy and trap him by erecting walls around him. The good guy moves in the maze using simple random movement, only in 4 directions: up, down, right or left.

I've used the A* algorithm to find the shortest path to the good guy, but am still unsure how to go about trapping the good guy in his own space.

",50814,,2444,,11/8/2021 15:35,11/8/2021 15:35,What AI algorithm could I use to trap an agent in a game?,,0,3,,,,CC BY-SA 4.0 32310,2,,32298,11/6/2021 20:41,,1,,"

Conv2DTranspose is an upsampling method used to increase the size of an image (or feature map).

When we perform a convolution, the size of the image decreases, but in some scenarios we want the output size to be the same as the input image size. Hence we use this transposed convolution.

Here you will find the Keras implementation of Conv2DTranspose:

https://github.com/keras-team/keras/blob/v2.7.0/keras/layers/convolutional.py#L1093-L1394
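
As a quick illustration of the upsampling behaviour (the sizes here are illustrative, not tied to the MNIST tutorial), a strided Conv2DTranspose roughly reverses the shape change of a strided Conv2D:

import numpy as np
from tensorflow.keras.layers import Conv2DTranspose

x = np.random.rand(1, 7, 7, 128).astype("float32")            # e.g. a 7x7 feature map
layer = Conv2DTranspose(64, kernel_size=4, strides=2, padding="same")
print(layer(x).shape)                                         # (1, 14, 14, 64): spatial size doubled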

",50817,,,,,11/6/2021 20:41,,,,0,,,,CC BY-SA 4.0 32311,2,,21620,11/6/2021 22:48,,0,,"

Well, a conditional probability (density) is not necessarily smaller than or equal to the corresponding marginal probability (density). That means $p(x_j|c_t)\le p(x_j)$ is not always true. For example, consider two discrete r.v.s $X,Y\in\{0,1\}$ such that $$ P(X=0,Y=0)=0.05,\\ P(X=1,Y=0)=0.05,\\ P(X=0,Y=1)=0.05,\\ P(X=1,Y=1)=0.85. $$ We have $P(X=0)=0.1$; however, $P(X=0|Y=0)=0.5$.

",42254,,,,,11/6/2021 22:48,,,,0,,,,CC BY-SA 4.0 32312,2,,26007,11/6/2021 23:32,,1,,"

Very interesting question. Assuming that the programming languages used are powerful enough (say Turing complete), all of the above should actually lead to an AGI. The difference is in how efficiently they can do it, both in terms of the number of computations required and the length of the program.

So the question could be rephrased as: which approach cannot lead to an AGI using less than X resources and a program shorter than Y characters? The second part is basically asking for the Kolmogorov complexity of the AGI in that language, which is uncomputable. Since we cannot find the shortest program, I don't think we can make conclusions about the maximum program efficiency either. In summary, I don't see a way to rule out any of those approaches (but I would be very happy to be proven wrong).

",16363,,,,,11/6/2021 23:32,,,,0,,,,CC BY-SA 4.0 32313,1,,,11/6/2021 23:39,,0,105,"

Consider the following definition of the derivative from the chapter named Vector Calculus from the textbook titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.

Definition 5.2 (Derivative). More formally, for $h>0$ the derivative of $f$ at $x$ is defined as the limit

$$\dfrac{df}{dx} := \lim\limits_{h \rightarrow 0}^{} \dfrac{f(x + h) − f(x)}{h}$$

The derivative of $f$ points in the direction of the steepest ascent of $f$.

You can observe that the derivative of a function is another function. If we consider the derivative at a single point, then it is a real number that quantifies the rate of change of the output of the function with respect to the input.

There are two kinds of directions related to gradients that we need to focus on. One is the direction pointed to by a gradient, and the other is the direction in which we move our input parameters using the gradient. This question is restricted to the first kind.

We can treat the sign of the derivative at a particular point as the direction in which to move our input parameters. But I am not sure about a rigorous definition of the direction pointed to by a derivative, and I thus have doubts about the direction pointed to by a gradient.

What exactly is the direction pointed to by a gradient? I want to know the formal definition of the direction of a gradient.

I know about the direction that a gradient gives for moving our parameters. But I am not sure about the rigorous definition of the direction of a gradient vector.

",18758,,18758,,11/14/2021 10:21,12/11/2022 22:01,What is the rigorous and formal definition for the direction pointed by a gradient?,,1,5,,,,CC BY-SA 4.0 32314,2,,28662,11/7/2021 0:05,,1,,"

Current AGI approaches are very heterogeneous and therefore there are no dominant algorithms. Nevertheless, I would suggest you have a look at Knowledge Graphs and the related algorithms to build and query the graphs, for instance here.

",16363,,,,,11/7/2021 0:05,,,,0,,,,CC BY-SA 4.0 32316,1,32322,,11/7/2021 11:22,,3,368,"

I am reading the book titled Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig (4th edition) and came across this sentence about depth-first search (page 79, line 12):

For acyclic state spaces it may end up expanding the same state many times via different paths, but will (eventually) systematically explore the entire space.

My question is: how is this possible? Can you please show me some examples?

",50418,,2444,,12/28/2021 12:57,12/28/2021 12:57,How DFS may expand the same state many times via different paths in an acyclic state space?,,1,0,,,,CC BY-SA 4.0 32319,1,,,11/7/2021 14:13,,2,119,"

According to [1], in MLOps, continuous training is

a new property, unique to ML systems, that's concerned with automatically retraining and serving the models.

Lifelong/incremental learning, in contrast, mainly studies how to learn incrementally rather than retrain. [2]

Lifelong Machine Learning or Lifelong Learning (LL) is an advanced machine learning (ML) paradigm that learns continuously, accumulates the knowledge learned in the past, and uses/adapts it to help future learning and problem-solving.

I can see some links or conflicts between the two, but cannot explain them explicitly. I asked an author in the second link about this issue and he said that the two are complementary. I wonder how the two will help each other. Or will one kill the other?

",5351,,18758,,11/13/2021 10:47,7/31/2022 7:56,How will MLOps and lifelong learning be complementary?,,2,0,,,,CC BY-SA 4.0 32320,1,,,11/7/2021 15:53,,-2,54,"

I'm looking for different models (specifically ResNet18/20, ResNet32/34, VGG16, MobileNet and SqueezeNet) and their parameters after training (i.e., a .pth file), where the models were trained on CIFAR-10 or CIFAR-100. I tried looking for them for hours and couldn't find anything. Perhaps someone could refer me to a site which has trained models for CIFAR-x?

Thank you.

",50834,,,,,11/7/2021 17:50,model and trained model parameters on CIFAR-10,,1,0,,11/7/2021 18:30,,CC BY-SA 4.0 32321,1,,,11/7/2021 16:12,,5,817,"

Based on my research, I've seen many on-policy AC approaches that utilise a critic network to estimate the value function $V$. The Bellman equation for the value function is as below:

$$ V_\pi(s_t) = \sum_a \pi(a|s_t)\sum_{r, s'}(r+V_\pi(s'))P(s', r|s_t, a) $$

It makes sense not to have a replay buffer, due to the current policy in the formula and the fact that our approach is on-policy. However, I really cannot figure out why no one uses a target network to stabilize the training process of the critic, like what we have in DQN, namely the variant published in 2015. Does anyone have an idea about that, possibly with a citation?

I know that DDPG uses a critic with a fixed target network, but be aware that it is a real off-policy actor-critic. By "real" I mean it is not due to importance sampling.

I have to mention that I can imagine a reason, but I'm not sure whether it is true or not. If we have a target network, it means we are trying to find a deterministic policy (optimal, in the case of DQN), while in the actor-critic case the critic is learning from the current policy's data.

",11599,,2444,,11/25/2021 1:25,11/12/2022 19:50,Why isn't a target network used for the critic in on-policy actor-critic methods?,,1,3,,,,CC BY-SA 4.0 32322,2,,32316,11/7/2021 17:25,,1,,"

First, note that the verb to expand has a specific meaning in this context: when you expand a node/state $s$, you try each action $a_1, \dots, a_n$ available from $s$, and each of these actions $a_i$ leads to another state.

Second, if the graph is a DAG, then the edges have a direction and there are no cycles; so, if you follow the edges, you will never get back to a node that you have already been at. However, this is not sufficient to understand that statement and why or when it is true.

Third, that statement is true only if you use the tree search version of the depth-first search, i.e. you don't keep track of the nodes that you have already explored. In other words, you don't use graph search, which is a tree search that also keeps track of the already expanded nodes in a set called the explored set (or closed list), so as to avoid expanding them again. So, here, if you explored a node/state $s$, you added it to the explored set, so you won't expand it again.

So, why is it possible to expand a node more than once in the tree search version of DFS? Well, you don't keep track of the nodes that you already expanded, so if you encounter it again from another path, you may expand it again. Given that it's a DAG (and assuming it's finite), then you will eventually try all possible paths, so you will find the goal.

Now, here's an example of when you expand a node more than once in the tree search version of DFS.

Let's say we have this search space, where you start at $A$ and the goal is $E$.

A -> B -> C -> D
A -> C -> D  
A -> E

If the nodes are at the same depth, let's assume that you expand them alphabetically; so, in the search space above, you expand first $B$ and then $C$, because $B$ comes before $C$ (in the English alphabet).

Now, note that we can go to $C$ directly from $A$, or we can get to $C$ by first going to $B$. $C$ has one child: when you expand $C$, you basically try the possible actions from $C$, and, in this case, there's only one action, i.e. going to $D$.

So, without keeping track of the fact that you already expanded $C$ when you got there from $B$, after you backtrack from $D$ the first time, you will expand $C$ again. In graph search, this would not be possible because, the first time we expand $C$, we add it to the explored set, so we don't expand it again.
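
Here is a minimal sketch of the tree-search version of DFS on the search space above (the graph dictionary and the expansion counter are illustrative), showing that $C$ and $D$ are expanded twice before the goal $E$ is found:

from collections import Counter

graph = {
    "A": ["B", "C", "E"],   # children are tried in alphabetical order
    "B": ["C"],
    "C": ["D"],
    "D": [],
    "E": [],
}
expansions = Counter()

def dfs(node, goal):
    expansions[node] += 1          # no explored set, so a node can be expanded again
    if node == goal:
        return True
    for child in graph[node]:      # depth-first: go as deep as possible first
        if dfs(child, goal):
            return True
    return False

dfs("A", "E")
print(expansions)                  # C and D are counted twice: once via B, once via A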

",2444,,2444,,11/7/2021 17:30,11/7/2021 17:30,,,,3,,,,CC BY-SA 4.0 32323,2,,32320,11/7/2021 17:50,,1,,"

You can find pretrained models on CIFAR-10 in this GitHub repository.

Also, for fun, you can take any backbone trained on ImageNet from the TorchVision models. Just replace the classification part: change the final Linear layer from outputting 1000 classes to 10 or 100. Then you can freeze most of the layers of the network and fit just the last few layers together with the classification head (the final nn.Linear).

Modern papers often use CIFAR-10 for transfer learning, and you can find the parameters in the paper or in discussions on GitHub, like here.
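
A minimal sketch of that second suggestion (torchvision, with ResNet-18 as an illustrative choice; note these are ImageNet weights, not CIFAR weights):

import torch.nn as nn
from torchvision import models

model = models.resnet18(pretrained=True)        # backbone trained on ImageNet

# Freeze the backbone
for param in model.parameters():
    param.requires_grad = False

# Replace the 1000-class ImageNet head with a 10-class head for CIFAR-10;
# the new layer is trainable by default
model.fc = nn.Linear(model.fc.in_features, 10)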

",38846,,,,,11/7/2021 17:50,,,,0,,,,CC BY-SA 4.0 32324,1,32348,,11/7/2021 21:12,,0,84,"

I am fitting non-linear data with SVR and have tried tuning the hyperparameters, but the model performance is still poor. Do I need more data, or do I need to reformat the data to get more suitable results?

I get similar performance for ANNs, decision trees, and random forests (slightly better), and even negative scores for polynomial regression. When producing the plots of test-data and training-data performance I also get a DataConversionWarning.

You can find the data I used here

The plots I obtained look like this:

(Plots: actual vs predicted for the test data; actual vs predicted for the training data.)

import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.metrics import r2_score
#
#
dataset = pd.read_csv('Data.csv')
X = dataset.iloc[:, :-1].values
y = dataset.iloc[:, -1].values 
#
#
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 1)
#
#
regressor = SVR(kernel = 'linear', gamma='auto')
regressor.fit(X_train, y_train.ravel())
y_predict = regressor.predict(X_test)
y_train_predict = regressor.predict(X_train)  # compute this here, before the model is refit below, so the training-data plot works
np.set_printoptions(precision=2)
print(np.concatenate((y_predict.reshape(len(y_predict),1), y_test.reshape(len(y_test),1 )), 1))
#
#
r2_score(y_test, y_predict)
#
#
#model performance
fig, ax = plt.subplots()
ax.scatter(y_test, y_predict)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted')
#regression line
y_test, y_predict = y_test.reshape(-1,1), y_predict.reshape(-1,1)
ax.plot(y_test, regressor.fit(y_test, y_predict).predict(y_test))
ax.set_title('Final Prediction-R2: ' + str(r2_score(y_test, y_predict)))
plt.show()
#
#
#training data performance
fig, ax = plt.subplots()
ax.scatter(y_train, y_train_predict)
ax.plot([y.min(), y.max()], [y.min(), y.max()], 'k--', lw=4)
ax.set_xlabel('Actual')
ax.set_ylabel('Predicted')
#regression line
y_train, y_train_predict = y_train.reshape(-1,1), y_train_predict.reshape(-1,1)
ax.plot(y_train, regressor.fit(y_train, y_train_predict).predict(y_train))
ax.set_title('Final Prediction-R2: ' + str(r2_score(y_train, y_train_predict)))
plt.show()
",50839,,18758,,11/13/2021 10:49,11/13/2021 10:49,Do I need to tune the hyper-parameters or more data if SVR model performs poorly?,,1,2,,,,CC BY-SA 4.0 32327,2,,21033,11/8/2021 9:57,,0,,"

Regarding the cash bias: I think this is simply the money that is still available at time t=50 and has not yet been invested.

",50845,,,,,11/8/2021 9:57,,,,0,,,,CC BY-SA 4.0 32329,1,32331,,11/8/2021 14:15,,3,189,"

I'm starting to learn about the Bellman Equation and a question came to my mind.

A policy $\pi$ is optimal if the value $v_\pi(s)$ is greater or equal than the value $v_{\pi'}(s)$ for all states $s \in S$.

Why does this work?

Can't it be that the optimal policy thinks a state isn't that good and gives it a low value, but performs best in comparison with other policies which have higher values for this state?

",36080,,2444,,11/8/2021 18:57,11/8/2021 18:57,Can an optimal policy have a value function that has a smaller value for a state than a non-optimal policy?,,1,0,,,,CC BY-SA 4.0 32330,1,,,11/8/2021 16:22,,0,40,"

I have a neural network that takes the state (which contains a lot of data) and a possible action (which is very little data), and predicts the Q-value of the action. I am using double Q-learning.

I've noticed that, given a particular state $s$, the neural network will predict nearly identical $q(s, a_i)$ for all actions $a_1, \dots, a_n$. As the neural network gets trained, this situation gets worse.

I think it is predicting the mean Q-value of the state. Perhaps the small amount of data that comprises the action is being drowned out by the state input?

I've considered using softmax to predict the best action, but it seems like we have nearly an unlimited number of possible actions to take, and I don't want to hard-code them.

EDIT, More Detail: The state is represented by all the text detected by an OCR of a GUI program + an embedding of screenshot image of the current GUI state. While in comparison the action is just 1-2 words (the text or tooltip of the GUI element we are considering clicking on), and I'm concerned that this imbalance of representation of state vs action in the input could be causing the network to accidentally (nearly) ignore the action input.

",42996,,42996,,11/10/2021 17:13,11/10/2021 17:13,How to deal with Q-learning having low variance in predicted Q-values?,,0,7,,,,CC BY-SA 4.0 32331,2,,32329,11/8/2021 16:38,,1,,"

Can't it be that the optimal policy thinks a state isn't that good and gives him a low value but perform best in comparison with other policies which have higher values for this state?

No, this is not possible, and this is part of the definition of an optimal policy.

You are asking if it is possible to construct a policy $\pi^?$ where for some state $s_z$, $v^?(s_z) \gt v^*(s_z)$, yet for some other state $s_y$, $v^?(s_y) \lt v^*(s_y)$

In general, comparing two arbitrary policies, this situation is possible. However, there is no way to construct a policy that does better than the optimal policy from any specific state.

The only thing you can change between policies is the action choice. Looking at the Bellman equation:

$$v_{\pi}(s) = \sum_a \pi(a|s) \sum_{r,s'} p(r,s'|s,a)(r +v_{\pi}(s'))$$

you can see, substituting in $s_z$, that its value depends on the immediate decision made by the policy and then on a weighted sum of the values of the next states, weighted by how likely they are to be reached under the policy. An optimal policy maximises this value in all states.

If there were a policy that made a different decision to the optimal policy in state $s_z$, and that was the only difference (all other decisions the same, and all other $v_{\pi}(s)$ values the same in the sum), then it would clearly be as good as or better than the optimal policy in all states. That would mean it was the optimal policy, and the policy that you originally labelled as optimal, $\pi^*$, was not. That contradicts your starting statement.

If, in order to get a higher value, you said that one of the $r + v^?(s')$ values were higher (in expectation), then you could set $s_z = s'$ and repeat the argument - at some point the alternative "non-optimal" policy has to make a decision that is better than the optimal policy in order to get a higher value out of values that are otherwise the same (or even worse). The optimal policy would not be restricted from choosing that action, and in fact must do so in order to be optimal, so you would have shown that the optimal policy $\pi^*$ is not in fact optimal, which is a contradiction to your starting statement.

If the other $r + v^?(s')$ values were lower in expectation - a necessary condition at some point in the allowed trajectories in order to make $\pi^?$ non-optimal, then you will have shown that $v^?(s_z) \lt v^*(s_z)$ which contradicts your original statement.

In any case where you construct a locally better $v^?(s_z)$ that is otherwise unexplained (i.e. where somehow the expected values at the next state are better), you have just kicked the can down the road: it depends on other values in the Bellman equation that you can follow through. Eventually you have to either accept that your declared optimal policy is not optimal, or that $\pi^?$ cannot be better than it, because being better would contradict the assumption that $\pi^*$ is optimal.

This kind of thinking, handled more formally, leads to the Policy Improvement Theorem, where you can unroll the dependency between values for states indefinitely and show that it is possible to reach the optimal policy by improving the decisions made by the policy in each state until it can no longer be improved. Your concern that somehow there could be locally-better-than-optimal policies different to the optimal policy is the flip side of that.

",1847,,1847,,11/8/2021 16:44,11/8/2021 16:44,,,,2,,,,CC BY-SA 4.0 32334,1,,,11/8/2021 19:01,,1,57,"

From what I have gathered so far, an AI has some prior (stored in the form of some probability distribution), and, based on experiences/data, changes the distribution (via Bayes rule) accordingly. This idea seems intuitively correct, as humans do something similar: we have some prejudice about certain things and refine it further based on additional observations.

I am wondering if there is a different (possibly, non-probabilistic) setting for designing an AI.

",49367,,2444,,11/8/2021 21:30,11/8/2021 21:58,Is there any other (possibly less popular) approach to create AI apart from statistical methods?,,2,0,,,,CC BY-SA 4.0 32335,2,,32334,11/8/2021 19:12,,1,,"

What you're looking for are Expert Systems and Knowledge-Based Systems. Really similar to each other, they encompass all systems built upon experts' knowledge, from which analytic rules are derived in order to allow their implementation in computer programs.

A trivial example could be a self-driving system based on proximity sensors, a camera and a set of predefined rules (basically a set of explicit if-else statements) like the following (a minimal code sketch of such rules is given after the list):

  • if the value recorded by the front proximity sensor is < 10cm & the left bottom corner of the picture from the camera is white (wall on the left) -> turn the wheels 90 degrees to the right.
  • if the value recorded by the front proximity sensor is < 10cm & the right bottom corner of the picture from the camera is white -> turn the wheels 90 degrees to the left.
  • ... (and so on and so forth)
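
Here is a minimal code sketch of such a rule base (the sensor names, thresholds and angles are purely illustrative):

def steering_rule(front_distance_cm, wall_on_left, wall_on_right):
    # Explicit expert rules mapping sensor readings to a wheel angle in degrees
    if front_distance_cm < 10 and wall_on_left:
        return 90      # wall on the left -> turn right
    if front_distance_cm < 10 and wall_on_right:
        return -90     # wall on the right -> turn left
    return 0           # otherwise keep going straight

print(steering_rule(8, True, False))   # -> 90
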
",34098,,34098,,11/8/2021 19:17,11/8/2021 19:17,,,,1,,,,CC BY-SA 4.0 32336,2,,32334,11/8/2021 19:38,,2,,"

Yes, there is symbolic AI. This was the 'original' approach to AI, at a time when there was very little data and/or processing power available. The focus was on logic and calculus, not on machine learning, which was just in its infancy.

A lot of natural language processing was developed using grammar rules (which only later were learned from data).

There still is a lot of this around, but often it's now hybrid, where human-authored rules correct the systematic mistakes of statistical approaches.

Update: as an example the Teneo system for building conversational agents (aka chatbots) uses both pattern matching rules and machine learning for intent recognition. The (human-created) patterns are more precise, but sometimes lack breadth of coverage, which the ML part provides, which works as a fallback.

",2193,,2193,,11/8/2021 21:58,11/8/2021 21:58,,,,2,,,,CC BY-SA 4.0 32337,1,32339,,11/8/2021 19:53,,2,204,"

The update formula for the TD(0) off-policy learning algorithm is (taken from these slides by D. Silver for lecture 5 of his course)

$$ \underbrace{V(S_t)}_{\text{New value}} \leftarrow \underbrace{V(S_t)}_{\text{Old value}} + \alpha \left( \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} (\underbrace{R_{t+1} + \gamma V(S_{t+1}))}_{\text{TD target}} - \underbrace{V(S_t)}_{\text{Old value}} \right) $$

where $\frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)}$ is the ratio of the likelihoods that policy $\pi$ will take this action at this state divided by the likelihood that behavior policy $\mu$ takes this action at this state.

What I do not understand is:

Assume the behavior policy $\mu$ took an action that is very unlikely to happen under policy $\pi$. I would assume this term goes towards $0$.

$$ \frac{ \pi(A_t|S_t)}{\mu (A_t|S_t)} = 0 $$

But, if this term goes to $0$, the whole equation would become the following

$$ V(S_t) \leftarrow V(S_t) - \alpha V(S_t) $$

This would mean we decrease the value of this state.

But this doesn't make any sense to me: if the 2 policies are very different, we gain little to no information. Therefore, I would assume the value should be unchanged rather than decreased.

What is my misconception here?

",43729,,18758,,11/13/2021 10:51,11/13/2021 10:51,How does this TD(0) off-policy value update formula work?,,1,0,,,,CC BY-SA 4.0 32338,1,32340,,11/8/2021 19:55,,0,109,"

I have a dataset that contains 560 data points, and I would like to do binary classification on it. 400 data points belong to class 1, and 160 belong to class 2. In the case of an imbalanced dataset like this, how should I arrange the test dataset to get valid performance results? Should I keep the same imbalanced distribution for the test set, similar to the distribution of the training data, or arrange it so that half of the test points belong to the first class and the other half to the second class?

",49588,,18758,,11/13/2021 10:52,11/13/2021 10:52,How to arrange test dataset distribution for an imbalanced classification problem?,,1,1,,,,CC BY-SA 4.0 32339,2,,32337,11/8/2021 22:06,,2,,"

This would mean we decrease the value of this state.

Yes. This update that reduces the estimate is correct because it adjusts for the inevitable over-estimate of value when the exploration policy selects an action that is more likely in the target policy than in the behaviour policy. This over-estimate must happen, if your agent experiences some actions with a likelihood ratio close to $0$, it must also experience some actions with a likelihood ratio greater than $1$. This follows because $\mathbb{E}_{\mu}[\frac{\pi(a|s)}{\mu(a|s)}] = 1$*, provided the behaviour policy covers the target policy (behaviour policy has non-zero probability for all actions where the target policy has non-zero probability).

Those actions with a likelihood ratio greater than $1$ are not actually better than before due to being taken off-policy, so their value used in the update will be an over-estimate.

It is only in the limit of large numbers of samples that the adjusted value function will converge on a good estimate of the value function for the target policy.

Basic importance sampling can have problems with increased variance due to this effect. In fact, the variance can be shown to be unbounded/infinite in some cases. I am not sure if that is the case in TD learning, but it is definitely the case in Monte Carlo with importance sampling. TD(0) does benefit from reduced variance thanks to bootstrapping, so it probably doesn't have unbounded variance. Still, it has higher variance than on-policy TD(0).


*

For any fixed state, $s$,

$$\mathbb{E}_{\mu}[\frac{\pi(a|s)}{\mu(a|s)}] = \sum_a \mu(a|s) \frac{\pi(a|s)}{\mu(a|s)} = \sum_a \pi(a|s) = 1$$

",1847,,1847,,11/8/2021 22:42,11/8/2021 22:42,,,,2,,,,CC BY-SA 4.0 32340,2,,32338,11/9/2021 8:44,,1,,"

The test set should represent the "real" data distribution your model will tackle once deployed and used in real applications. So the quick answer is yes, the test data should be imbalanced, which is also a sort of forced choice for you, considering the very small size of your dataset.

You want to keep in mind, though, that this makes it a bit trickier to interpret the resulting metrics.

For example, if your test set is made of 90 instances labeled A and 10 instances labeled B, a model might predict class A for all instances and still have a final accuracy of 90%: correct, but completely unrepresentative of the real behavior of the model. So make sure to compute metrics like precision and recall for each class.

Extra: considering the size of your dataset, you might consider using k-fold cross-validation: split your dataset into k train-test subsets, train k models and average the resulting metrics. The logic being that, with so few instances, the dataset is almost guaranteed to contain biases of some sort.
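
A minimal sketch of that (scikit-learn; the arrays below are stand-ins for your 560-sample dataset), using the stratified variant so that every fold keeps the 400:160 class ratio:

import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.random.randn(560, 8)                 # stand-in features
y = np.array([0] * 400 + [1] * 160)         # stand-in imbalanced labels

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # train a fresh model on (X_train, y_train), evaluate on (X_test, y_test),
    # and average the per-fold precision/recall at the end
    print(fold, np.bincount(y_train), np.bincount(y_test))   # class ratio preserved per fold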

",34098,,34098,,11/9/2021 8:55,11/9/2021 8:55,,,,1,,,,CC BY-SA 4.0 32341,1,,,11/9/2021 15:48,,2,1054,"

The Transformer model of the original Attention paper has a decoder unit that works differently during inference than during training.

I'm trying to understand the shapes used in the decoder (both the self-attention and enc-dec-attention blocks), but it's very confusing. I'm referring to this link and also the original Attention paper.

During inference, it uses all previous tokens generated until that time step (say the kth time step), as shown in the diagram below and explained at this link.

Another diagram that shows self-attention and enc-dec-attention within decoder:

Question:

However, when I look at the actual shapes of the QKV projections in the decoder self-attention, and at the feeding of the decoder self-attention output to the enc-dec-attention's Q matrix, I see only 1 token from the output being used.

Let's assume 6 decoder blocks one after the other in the decoder stack (which is the base transformer model).

I'm very confused how the shapes of all the matrices in the decoder blocks after decoder-1 of the decoder stack (more specifically decoder-block-2, decoder-3, decoder-4, ..., decoder-6) can match up, in both self-attention and enc-dec-attention, with a variable-length input to the decoder during inference. I looked at several online materials but couldn't find an answer. I see only the batched GEMMs in the decoder's self-attention (not enc-dec-attention) using variable shapes covering all previous k steps, but all other GEMMs are of fixed size.

  • How is that possible? Is only 1 token (the last one from the decoder output) being used for the QKV matmuls in self-attention and the Q matmul in enc-dec-attention (which is what I see when running the model)?
  • Could someone elaborate how all these shapes for QKV in self-attention and Q in enc-dec-attention match up with the decoder input length being different at each time step?
",33580,,33580,,11/10/2021 3:01,12/14/2022 13:04,What is input (and shape) to K/V/Q of self-attention of EACH Decoder block of Language-translation model Transformer's tokens during Inference?,,1,2,,,,CC BY-SA 4.0 32342,2,,32341,11/9/2021 16:02,,0,,"

Edit 3

OP seems to think value, query and keys are supposed to be different in the original Vaswani multi-head attention. As can be seen in Keras' documentation on their implementation of the multi-headed attention layer, "If query, key, value are the same, then this is self-attention."


Edit 2

One thing missing from the graphics you use is the skip connections in transformers. Look at figure 1 in the original Vaswani et al. paper. The skip connections should make it quite obvious what the shape of the outputs has to be after each layer; after all, you cannot add two tensors that do not have the same shape.


Edit:

I realize now that your question is about the key, value and query values in an attention mechanism. They are always the same; it's called self-attention for that reason. The attention mechanism used in all papers I have seen uses self-attention: K=V=Q.

Also, consider the linear algebra involved in the mechanism; The inputs make up a matrix, and attention uses matrix multiplications afterwards. That should tell you everything regarding the shape those values need.

Here's a valuable visual that depicts attention in a slightly different way:

It's from this blog post which explains self-attention in-depth.


Transformers only output one prediction at a time because the transformer is an autoregressive model.

Let's break down a transformer at runtime, step by step. The decoder gets the output as its input, which is done in the following manner:

Step one: The decoders input is only the start token and padding $(start,0,0,..,0,0)$

Step two: The decoder input is the start-token and the prediction from step one + padding $(start,y_1,0,0,..,0,0)$

Step three: The decoder input is the decoders input in step two and the prediction from step two + padding $(start,y_1,y_2,0,0,..,0,0)$

And so on.

So, if you want n outputs, the transformer will have to run n times. If you want to train it to stop by itself after a while, you can introduce an end-of-sequence token which stops inference once it's predicted by the transformer. The sequences that are fed into the decoder and encoder are always max_length long; that length is preserved using padding.
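
A minimal sketch of that loop (the model below is a dummy stand-in with the same call signature; the token ids, vocabulary size and max_length are illustrative):

import torch
import torch.nn as nn

class DummyTransformer(nn.Module):
    # Stand-in: any callable mapping (src, tgt) -> (batch, max_len, vocab) logits works here
    def __init__(self, vocab_size=32, d_model=16):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d_model)
        self.out = nn.Linear(d_model, vocab_size)
    def forward(self, src, tgt):
        return self.out(self.emb(tgt))

START, END, PAD, MAX_LEN = 1, 2, 0, 8
model = DummyTransformer()

src = torch.tensor([[5, 9, 4, 2]])               # encoded source sequence
decoder_input = torch.full((1, MAX_LEN), PAD)    # fixed-length, padded decoder input
decoder_input[0, 0] = START

for t in range(1, MAX_LEN):
    logits = model(src, decoder_input)           # one full forward pass per generated token
    next_token = logits[0, t - 1].argmax()       # prediction for position t
    decoder_input[0, t] = next_token             # feed it back in for the next step
    if next_token.item() == END:
        break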

",31879,,31879,,11/19/2021 9:28,11/19/2021 9:28,,,,9,,,,CC BY-SA 4.0 32343,1,,,11/9/2021 16:14,,2,90,"

Most of the papers on multi-agent RL (MARL) that I have encountered have multiple agents who have a common action space.

In my work, the scenario involves $m$ agents of one type (say type A) and $n$ agents of another type (type B). The type A agents deal with a similar problem, which is why they share the same action space, and the type B agents deal with another type of problem and share their own action space.

The type A agents are involved in an intermediary task that is not reflected in the final reward; the final reward comes from the actions of the type B agents. But the actions of type B depend on the type A agents.

Any idea on what kind of MARL algorithm is suitable for such a scenario?

",41984,,2444,,11/9/2021 23:21,11/9/2021 23:21,Which multi-agent reinforcement learning algorithm can I use when there are two types of agents with different action spaces?,,0,0,,,,CC BY-SA 4.0 32344,1,,,11/9/2021 17:14,,1,124,"

I am using clipped PPO to train a neural network to act as the controller for steering an aircraft, and I am finding that my networks aren't learning. The goal is to keep the aircraft flying to cover distance, and the code is implemented using PyTorch. I am wondering if anyone more experienced would be willing to take a look at my implementation.

Aircraft and actor model
I have a flight simulation environment with the dynamical system of an aircraft, where the simulation is propagated through time in 0.1s increments. The states of the aircraft serve as inputs to the actor network (airspeed, pitch, heading, height), and the controls include pitch and roll angles.

The actor network uses a multivariate normal distribution to output two values that serve as the means for the pitch and roll angle controls. The variances are fixed. This way, the policy is stochastic and allows for exploration. I am using two hidden layers of 256 neurons each for both actor and critic networks, with learning rates of 1e-4. ReLu activation on the hidden layers and sigmoid on the actor's output, with the critic's output being linear.

Reward structure
I have tried two different reward structures:

  1. A reward of 1 for every time step that the aircraft is in the air (with a reward of 0 in the terminal state, which is when the aircraft crashes to the ground or if it achieves the maximum flight time)
  2. A reward of 0 for every time step, but a single reward computed at the terminal state that is proportional to the displacement of the aircraft over the simulation, and an extremely large penalty that is subtracted from this reward if the aircraft crashes.

Neither of these reward structures seemed to allow learning. I am wondering if simply having a reward of 1 at every time step, much like the gym cartpole environment is insufficient in getting the model to learn how to fly. I was expecting the network to figure out ways of controlling the aircraft to fly longer to obtain a greater total reward. For the second reward structure, I expected the advantage estimation to carry the large reward/penalty backward through the trajectory so that the actor would learn to avoid flying in certain ways that end up in crashing the plane.

PPO implementation
For the clipped PPO implementation, I am:

  1. Resetting the flight simulation environment
  2. Propagating the simulation until the aircraft crashes or reaches maximum flight time
  3. Training the actor and critic networks every 20 time steps (2s of simulation)
  4. Repeating this for a maximum number of simulations

For training, I take the 20 time steps, shuffle them, and divide them into 5 batches. Then I train the network on these batches, reshuffle the 20 time steps and create 5 new batches, and train again. I repeat this process for a total of 4 epochs per learning cycle.

The fastest that the aircraft can reach the ground (crash) is 1.7s of simulation time (17 time steps), whereas I ultimately want the aircraft to fly for 10 minutes (6000 time steps). I am thinking that it is too difficult to train a network to fly for such a long period of time because it would have to learn all the states leading up to that point.

Results
I found that training the algorithm typically resulted in extreme volatility in the test simulation's score (sum of all rewards at every time step). The score history would look like: (moving average of 100 scores in orange).

What I've tried

  1. Changing the standard deviation of the normal distribution of the actor output so that there is less exploration (stability of the aircraft is quite sensitive to the particular control values).
  2. Advantage normalization
  3. Increasing training time to 10 000 simulations (~40 000 learning iterations since there are 4 epochs per simulation)
  4. Testing the algorithm on the gym's cartpole environment (discrete actions). I was able to successfully train the actor in 200ish games.

If anyone has any other suggestions of things to try, it would be greatly appreciated.

",45349,,18758,,11/13/2021 9:24,11/14/2021 21:15,Proximal Policy Optimization for continuous control problem,,2,0,,,,CC BY-SA 4.0 32345,1,,,11/9/2021 18:20,,0,63,"

I have a database that contains healthy persons and lung cancer patients. I need to design a deep neural network for the binary classification problem (cancer/no cancer). I need to split the dataset into 70% train and 30% test.

How can I do the splitting? According to persons?

I think that splitting according to persons is correct since this will ensure that the same person will not exist simultaneously in both the training and the test subsets. This is reasonable since we are recognizing the disease, not the person. If images from the same person exist in both subsets, the problem will be easy, and not reasonable, from a practical point of view. Do you agree?

",50786,,18758,,11/13/2021 9:23,4/12/2022 11:05,"Given a dataset of people with and without cancer, should I split it into training and test datasets such that the same person is not in both?",,1,2,,,,CC BY-SA 4.0 32346,2,,32344,11/9/2021 20:07,,1,,"

Training using only 20 timesteps at a time is far too few, especially when the goal will ultimately consist of episodes of length 6000. You definitely need to increase that substantially, and that will probably solve your problem immediately. You might try something like simulating 5 episodes and then training on all timesteps in those 5 episodes.

If that still doesn't work, another thing you can try is training the value function (critic) differently, e.g. using Monte Carlo returns or more steps in the temporal difference target, and you can keep a replay memory for the value function as well.

",47080,,,,,11/9/2021 20:07,,,,5,,,,CC BY-SA 4.0 32348,2,,32324,11/9/2021 21:04,,0,,"

Looking at your data, I see that it's only 126 samples long. For datasets of that size, classical statistical methods outperform all ML methods. Look at this paper, where they use the 1045 M3 univariate time series to show that old school statistical methods always outperform machine learning methods for short sequences.

Sure, your data is not univariate, but the argument still stands: your dataset is nowhere near large enough to justify using ML.

",31879,,,,,11/9/2021 21:04,,,,2,,,,CC BY-SA 4.0 32354,1,32357,,11/10/2021 17:31,,4,643,"

I am training a model for image reconstruction. I used several metrics to assess the quality of the reconstructed images. LPIPS is decreasing, which is good. PSNR goes up and down, but the L1 loss and SSIM loss are increasing.

So, which metric should I care more about?

My datasets are Paris Street View and CelebA.

I'm not sure if the VGG that extracts features for LPIPS is reliable here or not.

",50897,,2444,,11/11/2021 12:37,11/11/2021 12:37,"To assess the quality of the reconstructed images, which metric is more reliable: PSNR or LPIPS?",,1,1,,,,CC BY-SA 4.0 32355,1,32361,,11/10/2021 18:37,,1,56,"

I'm reading Sequence to Sequence Learning with Neural Networks and there's a thing that I couldn't quite grasp.

Paper says the encoder outputs a vector to be fed to the decoder. More precisely

Our method uses a multilayered Long Short-Term Memory (LSTM) to map the input sequence to a vector of a fixed dimensionality, and then another deep LSTM to decode the target sequence from the vector

However, when I look at the diagram:

there's no such vector here. What I understand from this diagram is decoder RNN takes the weights of the last encoder cell as an input.

Which one is correct? can you explain?

Stanford notes put it as

The final hidden state of the cell will then become C

So, is there no vector?

",47305,,2444,,11/11/2021 12:30,11/11/2021 12:30,Does Seq2Seq decoder take a special vector or the weights of the last encoder cell as an output?,,1,1,,,,CC BY-SA 4.0 32356,1,32374,,11/10/2021 18:50,,1,263,"

There is a nice post about the intuition why AlphaZero works.

One of the advantages of using a policy network in the games where a perfect simulator is available (such as chess) is to save computation time by not generating all subsequent moves and then evaluating them using the value network. Instead, we can only focus on the good moves given by the policy network.

However, besides the computation time savings of the policy network, are there any requirements why it needs to be used during training?

What if we would replace the computation of the policy network with this logic: generate all subsequent moves, evaluate them using value network, and create policy from these predictions. Would this still work?

I would appreciate any references where this topic is discussed.

",50898,,18758,,11/13/2021 9:20,11/13/2021 9:20,Would AlphaZero work just with a value network?,,2,0,,,,CC BY-SA 4.0 32357,2,,32354,11/10/2021 19:06,,2,,"

After working 1 year on a project on super resolution, I've learnt the following things about image quality metrics in general:

  • There's no such thing as a perfect metric. Every metric suffers from specific downsides and has specific benefits. So you ultimately want to rely on more than one, and check that they all have a consistent (improving) trend.
  • Every metric is sensitive to different types of noise, and I would argue (no reference to point to, unfortunately) even to different image contents. The first point is easily provable with a toy experiment; check the images (and the code to reproduce them) below. This is a fact you can leverage. For example, the LPIPS metric seems to lead to much worse values when Gaussian or salt-and-pepper noise is present compared to speckle noise. So a bad LPIPS value might be a hint that your model is producing those kinds of artifacts.
  • The PSNR is indeed really unstable. In the images below, the image with 3 iterations of speckle noise (bottom right) has a higher PSNR than the image with one iteration of Gaussian noise (top left), despite looking worse.

My suggestion is to play with the images you're using, add noise, compute metrics and see what happens. It's the only way to gather knowledge about your data, and it makes much more easier to trouble shoot what might be going wrong in your training regime.

Code:

import matplotlib.pyplot as plt
import numpy as np
import skimage.util as sku
from skimage.data import astronaut

import torch
import piq

img = astronaut()
# Normalize image
img = (img - img.min()) / (img.max() - img.min())

modes = ["gaussian", "s&p", "speckle"]

for mode in modes:
    img_noise1 = sku.random_noise(img, mode=mode)
    img_noise2 = sku.random_noise(img_noise1, mode=mode)
    img_noise3 = sku.random_noise(img_noise2, mode=mode)

    tensor = torch.tensor(img).permute(2,0,1).unsqueeze(0)
    tensor_noise1 = torch.tensor(img_noise1).permute(2,0,1).unsqueeze(0)
    tensor_noise2 = torch.tensor(img_noise2).permute(2,0,1).unsqueeze(0)
    tensor_noise3 = torch.tensor(img_noise3).permute(2,0,1).unsqueeze(0)

    psnr1 = piq.psnr(tensor_noise1, tensor).item()
    psnr2 = piq.psnr(tensor_noise2, tensor).item()
    psnr3 = piq.psnr(tensor_noise3, tensor).item()

    ssim1 = piq.ssim(tensor_noise1, tensor).item()
    ssim2 = piq.ssim(tensor_noise2, tensor).item()
    ssim3 = piq.ssim(tensor_noise3, tensor).item()

    lpips = piq.LPIPS()
    lpips1 = lpips(tensor_noise1, tensor).item()
    lpips2 = lpips(tensor_noise2, tensor).item()
    lpips3 = lpips(tensor_noise3, tensor).item()

    fig, (ax1, ax2, ax3) = plt.subplots(1, 3)

    ax1.imshow(img_noise1)
    ax1.set_xlabel(f"PSNR: {psnr1:.2f} \n SSIM: {ssim1:.2f} \n LPIPS: {lpips1:.2f}")
    ax2.imshow(img_noise2)
    ax2.set_xlabel(f"PSNR: {psnr2:.2f} \n SSIM: {ssim2:.2f} \n LPIPS: {lpips2:.2f}")
    ax2.set_title(f"{mode}")
    ax3.imshow(img_noise3)
    ax3.set_xlabel(f"PSNR: {psnr3:.2f} \n SSIM: {ssim3:.2f} \n LPIPS: {lpips3:.2f}")
    plt.show()
",34098,,,,,11/10/2021 19:06,,,,0,,,,CC BY-SA 4.0 32360,2,,32097,11/11/2021 6:22,,0,,"

The model and the objective function play together. If you can design an objective function that somehow excludes the relation you have in mind, then the model can focus on learning to predict based on other information. Once the model is trained, if your downstream task should still take that relation into account, you could manually add and apply the relation at the end. That's my philosophical answer, but the question is how you would implement it. It depends on the details of the project, and this approach might not be feasible, but I think your concern is real: the model is inclined to learn the shortest path to the answer, and if you provide one, it will use it. If I were you, I might first try to normalize the features so they have similar effects. I don't know how this would be implemented in your problem, though.

",50897,,,,,11/11/2021 6:22,,,,0,,,,CC BY-SA 4.0 32361,2,,32355,11/11/2021 8:26,,1,,"

That drawing is a bit oversimplified. Check this blog for a better explanation and implementation details. I'll refer to the image they have to answer:

  • the yellow boxes represent embedding layers, required to convert words in numbers
  • the green boxes represent the unfolded encoder
  • the red box represents the context vector, i.e. the vector you're looking for. Note that it is just the final vector you obtain by applying the encoder to a sequence of words. For this reason some people prefer to draw a line directly to the decoder part, without drawing the final vector explicitly.
  • the blue boxes represent the unfolded decoder.
  • the purple boxes represent the linear layer used to predict a final word for the decoder hidden state.

",34098,,,,,11/11/2021 8:26,,,,2,,,,CC BY-SA 4.0 32362,1,,,11/11/2021 10:29,,0,85,"

There are 8 distinct action classes and around 50+ videos per class. I was wondering if flipping videos from the training set can be a good option to generate additional data. Is it?

",50906,,,,,8/8/2022 17:04,Can I flip a video to generate more data for action recognition?,,1,0,,,,CC BY-SA 4.0 32363,1,,,11/11/2021 12:33,,1,61,"

I want to calculate an upper bound on how many training points an MLP regressor can fit with ~0 error. I don't care about the test error, I want to overfit as much as possible the (few) training points.

For example, for linear regression, it's impossible to achieve 0 MSE if the points don't lie on a line. An MLP, however, can overfit and include these points in the prediction.

So, my question is: given an MLP and its parameter, how can I calculate an upper bound on how many points it can exactly fit?

I was thinking to use the VC dimension to estimate this upper bound.
The VCdim is a metric for binary classification models but a pseudo dimension can be adapted to real-value regression models by thresholding the output:

$$Pdim(\mathcal{G}) = {VCdim}(\{(x,t) \mapsto 1_{g(x)-t>0}:g \in \mathcal{G}\})$$ (from the book Foundation of Machine Learning, 2nd edition, definition 11.5)

where $\mathcal{G}$ is the concept class of the regressor, $g(x)$ is the regressor out and $t$ is a threshold.

The model is an MLP with RELU activations. As far as I understood on Wikipedia, it should have a VCdim equal to the number of the weights (correct me if I'm wrong).

So, the question is: how to practically calculate the pseudo-dim for the regressor given the VCdim? Does it make sense for the purpose that I want to achieve?

",50909,,2444,,11/13/2021 12:19,11/13/2021 12:19,Is the VC dimension of a MLP regressor a valid upper bound on how many points it can exactly fit?,,0,1,,,,CC BY-SA 4.0 32364,2,,32362,11/11/2021 12:34,,1,,"

Probably flipping a video left/right will be OK and useful for your case.

When considering data augmentation approaches, you should think about two things that may prevent it working:

  • Could the augentation change the label? E.g. would a human looking at the augmented data still label it the same way?

  • Does the augmentation create data that is too different from expected later use?

So in your case for action recognition you should only be concerned (and maybe not add the left-right flipped videos) if:

  • The activity label would change depending if tasks were performed left-handed or right-handed.

  • There is lots of content in the videos (e.g. writing) that is unrealistic when flipped. This is why you should only flip left/right, not top/bottom, in most cases. However, top/bottom and even arbitrary rotation might be fine if your videos are a top-down view, so it depends on your specific case.

As an aside, it is also important to keep the original and the augmented copy in the same part of the dataset (training, cross-validation or testing), because they are correlated - not doing so may cause a data leak that will prevent you from measuring performance correctly. To play it safe, you should only augment the training data, so that you don't risk measuring generalisation of your model against imaginary production data that could not occur in reality.

Other augmentations you might consider for video could be small rotations, random crops, and minor colour, contrast and brightness adjustments.

",1847,,1847,,11/11/2021 15:18,11/11/2021 15:18,,,,3,,,,CC BY-SA 4.0 32371,2,,32356,11/11/2021 14:07,,0,,"

The solution approach of the linked work Mastering the game of Go with deep neural networks and tree search is not only valid for the game of Go but can also be used for other games, e.g. chess. The policy network has the task of learning the rules of the game and is then improved only with the help of the value network, by focusing on winning games. If I understand you correctly, you want to evaluate all the moves using the value network and pick the best one? For this you need rules - i.e. which moves are allowed to be made at all. That's what I think you need the policy network for.

",44454,,,,,11/11/2021 14:07,,,,1,,,,CC BY-SA 4.0 32372,1,32408,,11/11/2021 16:01,,1,229,"

In supervised or unsupervised learning, it is advised to reduce the dimensionality due to the curse of dimensionality in general.

Is this also generally advisable for the action space of reinforcement learning?

As far as I understand (and inspired by the answer here Is it possible to tell the Reinforcement Learning agent some rules directly without any constraints), you can always reduce the dimensionality of the action space to 1, meaning that you have only a single action variable. This can be done by using a mapping (e.g. in the step function of OpenAI Gym).

Let's have a look at an example: We have a heating device that can heat 3 storages and we have a discrete action variable for each of them with 11 levels [0.0, 0.1, 0.2, ..., 1.0]. So we have

  • action_heatStorage1: [0.0, 0.1, 0.2, ..., 1.0]
  • action_heatStorage2: [0.0, 0.1, 0.2, ..., 1.0]
  • action_heatStorage3: [0.0, 0.1, 0.2, ..., 1.0]

In this case, we would have a 3-dimensional action space

  • action = [action_heatStorage1, action_heatStorage2, action_heatStorage3]

However, it is also possible to combine the 3 actions into 1 action variable "action_combined" of size [11 * 11 * 11 = 1331] by just using a mapping from this one action to the 3 separate actions. For example like this (a small code sketch of such a mapping follows the list):

  • action_combined = 0 --> action_heatStorage1 =0, action_heatStorage2 =0, action_heatStorage3 =0
  • action_combined = 1 --> action_heatStorage1 =0.1, action_heatStorage2 =0, action_heatStorage3 =0
  • action_combined = 2 --> action_heatStorage1 =0.2, action_heatStorage2 =0, action_heatStorage3 =0

...

  • action_combined = 1330 --> action_heatStorage1 =1.0, action_heatStorage2 =1.0, action_heatStorage3 =1.0
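
As mentioned above, here is a small Python sketch of such a mapping, e.g. inside the step function of an OpenAI Gym environment. The index ordering (storage 1 varies fastest) follows the examples above and is, of course, just one possible convention.

N_LEVELS = 11  # discrete levels 0.0, 0.1, ..., 1.0

def decode_action(action_combined: int):
    """Map a combined action index in [0, 11**3 - 1] to the 3 separate actions."""
    i3, rest = divmod(action_combined, N_LEVELS * N_LEVELS)
    i2, i1 = divmod(rest, N_LEVELS)
    return i1 / 10.0, i2 / 10.0, i3 / 10.0

# decode_action(0)    -> (0.0, 0.0, 0.0)
# decode_action(1)    -> (0.1, 0.0, 0.0)
# decode_action(1330) -> (1.0, 1.0, 1.0)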

Is it generally advisable to reduce the dimensionality of the action space (Option 2), or to use multidimensional action variables directly (Option 1)?

I know that there is most probably not an answer that is valid for all problems. But, as I am relatively new to reinforcement learning, I would like to know whether the theory of reinforcement learning gives a general recommendation to do something like this, or whether this question can't be answered in general because it totally depends on the application and should be tested for each application individually.

Reminder: I have already received a good answer. Still, I would like to draw attention to this question again, to maybe also hear the opinions and experiences of others regarding this topic.

",48758,,2444,,11/22/2021 13:24,11/22/2021 13:24,Is it generally advisable to have a low dimensional action space in Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 32373,2,,32345,11/11/2021 18:10,,1,,"

we are recognizing the disease, not the person.

If you're training a computer vision model with only images and no auxiliary information, then randomized sampling should be enough to prevent the model from overfitting on X-ray scans taken of the same person.

If images from the same person exist in both subsets, the problem will be easy, and not reasonable, from a practical point of view. Do you agree?

Only partially. The aim of a test set is to allow you to quantify how well your model will perform on a real use case, which is close to, but not the same as, making the problem as hard as possible in the testing phase.

Also, it is not necessarily true that having 20 examples from the same person in the training set will lead to high accuracy on even 1 single test instance coming again from the same person. This is because those training instances might contain bias (like many tumors on the left side of the scan rather than the right side).

This is why I personally would go for the following splitting approach:

  • sample from the whole dataset a small group of people (for example the top n people with the fewest scans). Let's call this the between-groups test set.
  • for each remaining person, sample 70% of their instances for training and 30% for testing. Let's call the latter the within-groups test set.

During validation I would then perform the following checks:

  • metrics calculation (precision, recall, f-score, i.e. the whole package) on the between-groups test set, performed as usual.
  • metrics calculation per person (or per cluster of people, paying attention to keeping the clusters the same at every validation step) on the within-groups test set.

The second check is the most interesting one. Of course, there will be some random variation in the performance, but if the model starts performing better on some specific clusters of people, that might be a hint that the model is learning some confounding variables, probably related to the presence of really similar scans or other biases. Testing only on the between-groups set will not let you draw that conclusion.
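
A minimal pandas sketch of the splitting approach described above (the dataframe, its column names and the choice of n are hypothetical):

import pandas as pd

df = pd.read_csv("scans.csv")  # one row per scan: person_id, image_path, label

# 1) Between-groups test set: the n people with the fewest scans
n = 10
held_out = df["person_id"].value_counts().nsmallest(n).index
between_groups_test = df[df["person_id"].isin(held_out)]
remaining = df[~df["person_id"].isin(held_out)]

# 2) Within-groups split: per person, 70% of scans for training, 30% for testing
train = remaining.groupby("person_id", group_keys=False).sample(frac=0.7, random_state=0)
within_groups_test = remaining.drop(train.index)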

",34098,,,,,11/11/2021 18:10,,,,0,,,,CC BY-SA 4.0 32374,2,,32356,11/11/2021 19:24,,1,,"

I'm not aware of anyone running a setup of everything that AlphaZero does, minus the Policy Network, and reporting on how well it worked, so I don't think I can provide a definitive 100% certain answer. My intuition says that it would "work" in the sense that it could still produce a very strong agent, but I suspect it could be slower to train and/or not reach as high of a peak.


One important advantage of the Policy Network over the Value Network is specifically during the training phase, and in particular in the beginning of training; the Policy Network receives much more training data. When we run self-play games as in the AlphaZero paper, we get one target for the Policy Network for every distinct state we encounter (so many different targets per game), but only a single target for the Value Network for every full game (just the outcome of the game). There are papers that try to address this in different ways (some extract multiple value targets from the tree, some speed up the generation of self-play games by only running shorter searches in some states and not generating policy targets from the shorter searches, etc.), but ignoring those... in the standard AlphaZero setup, we may expect the Policy Network to train faster / more easily, especially in the beginning. The Value Network can "catch up" afterwards.

A second advantage of the Policy Network over the Value Network is that it may more effectively learn to distinguish good moves from bad moves in highly one-sided game states. Consider a game state $s$ where the player is (almost) sure to win, and can even afford to make a few mistakes. We'll probably get that all the successor states $s'$ get approximately the same value estimates $V(s') \approx 0.999$, with only a tiny amount of variation, and we may be unable to distinguish good moves from bad moves. The value network may no longer have any sort of noticeable preference for good moves over bad moves, until it has made so many blunders/mistakes that it actually becomes necessary to play well again. It would probably still win, but possibly in a less convincing manner (and slower) than it could. It's not even wrong for the value network to learn in this way, since we do want it (ideally) to learn the game-theoretic values, and this would be correct from that point of view. But probably still undesirable. In contrast, the Policy Network is not trained to predict game-theoretic values, or trained just to win. It's actively trained to pick good moves over bad moves. Due to the exploratory behaviour that is inherent in MCTS, MCTS generally ends up preferring more convincing / faster / safer wins, so the Policy Network does actually get an incentive to really play well even in such states where mistakes can be afforded.

A similar story to the above also applies to game states where the player is almost sure to lose; the Value Network gets no incentive to distinguish states from each other and might end up playing poorly in losing states, whereas a Policy Network still gets incentive to play well and "fight back", which may end up allowing it to still reach a win against suboptimal players.


What if we would replace the computation of the policy network with this logic: generate all subsequent moves, evaluate them using value network, and create policy from these predictions.

This particular implementation you suggest would have an additional disadvantage of computational inefficiency: to evaluate the Value Network for all possible successor states, you actually need to 1) generate all those successor states first and 2) run forward passes of the Neural Network for every successor. Both of those things take time. In contrast, the Policy Network just runs once for the single parent state, and at once computes all the probabilities for all the actions. This is much more efficient.


Related to your suggestion, but quite a bit different because it does not use MCTS (but a different tree search algorithm) as the underlying search algorithm, a rather effective approach that only uses a Value Network (no Policy Network) is described in "Learning to Play Two-Player Perfect-Information Games without Knowledge" by Cohen-Solal, with additional empirical results described in "Minimax Strikes Back".

",1641,,,,,11/11/2021 19:24,,,,4,,,,CC BY-SA 4.0 32375,1,34145,,11/11/2021 22:38,,0,479,"

Can anybody explain how the training steps work for the Tensorflow Object Detection algorithms available in the Tensorflow 2 Detection Model Zoo? For instance, YOLOv5 cycles through epochs. As I understand it, one epoch is completed after all the training data passes through the algorithm. However, the Tensorflow models I just described are set up so that they pass through a certain number of training steps (several are configured for 100,000 training steps, others for 200,000 or 300,000 steps, depending on the model).

What is the difference between epochs and these steps? Just trying to understand how the algorithm trains my data.

",32750,,32750,,1/12/2022 15:49,1/13/2022 17:34,How do Tensorflow models and YOLO differ in terms of training steps?,,1,0,,,,CC BY-SA 4.0 32377,1,32378,,11/12/2021 12:23,,3,117,"

I've been using Tensorflow and just started learning PyTorch. I was following the tutorial: https://pytorch.org/tutorials/beginner/nlp/word_embeddings_tutorial.html#sphx-glr-beginner-nlp-word-embeddings-tutorial-py

Where we try to create an n-gram language model. However, there's something I don't understand.

import torch.nn as nn

class NGramLanguageModeler(nn.Module):

    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        # one embedding vector of size embedding_dim for each word in the vocabulary
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

At self.linear1 = nn.Linear(context_size * embedding_dim, 128), why did we multiply embedding_dim by context_size? Isn't embedding_dim the input size? So why do we multiply it by the context size?

",47305,,18758,,11/13/2021 9:10,11/13/2021 9:11,Why do we multipy context_size with embedding_dim? (PyTorch),,1,0,,,,CC BY-SA 4.0 32378,2,,32377,11/12/2021 12:55,,3,,"

An n-gram language model is a language model trained with n context words, so you're not feeding the model a single word but n of them. Each of these context words is mapped to its own embedding of size embedding_dim, and the n embeddings are concatenated before being passed to the first linear layer. This is why the dimension of the input layer is "context_size * embedding_dim" or "n * embedding_dim".
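
To make the concatenation explicit, here is roughly what the forward pass of the NGramLanguageModeler in the linked tutorial looks like (a sketch, not necessarily identical to the tutorial's exact code):

import torch.nn.functional as F

def forward(self, inputs):
    # inputs: LongTensor of shape (context_size,) with the indices of the context words
    embeds = self.embeddings(inputs)      # shape: (context_size, embedding_dim)
    embeds = embeds.view((1, -1))         # shape: (1, context_size * embedding_dim)
    out = F.relu(self.linear1(embeds))    # this is why linear1 expects context_size * embedding_dim inputs
    out = self.linear2(out)               # shape: (1, vocab_size)
    return F.log_softmax(out, dim=1)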

",34098,,18758,,11/13/2021 9:11,11/13/2021 9:11,,,,0,,,,CC BY-SA 4.0 32379,2,,32319,11/12/2021 13:22,,2,,"

Lifelong learning and MLOps are indeed complementary.

Lifelong learning (LL) can be defined as the set of learning algorithms and models that can deal with more and more data and/or tasks without forgetting (completely) the previous one and (usually) without fully retraining the model with all data that you have available now. So, in LL, we attempt to mimic the way humans continually learn (different tasks) throughout their lives and transfer knowledge/skills acquired in one task to other tasks (e.g. from the task of walking to the task of running) [1].

MLOps is the application of DevOps to the machine learning or data science context. In other words, it is more a software engineering practice, where you define a pipeline (or sequence) of tasks that need to be done (semi-)automatically until and after you deploy your software. So, MLOps is just DevOps, but, in addition to the common practices in DevOps, like continuous integration and continuous deployment, you also have ways to deal with the ML part of the software, like automatic retraining as more data is available, and automatic evaluation of the (new) trained model on a test dataset.

They are complementary, because MLOps is concerned with the automation of the data science process (i.e. collection and cleansing of data, training of the model with the new data, evaluation, deployment of the newly trained and evaluated model to a production server), while LL is (only/usually) concerned with the development of learning algorithms and models that can cope with new tasks and data (without being completely reconfigured or retrained), so it has little or nothing to do with continuous integration and continuous deployment. In MLOps, given that you may (automatically) retrain the model as more data is available, you may think that this is a form of lifelong learning, but you completely retrain the model and you may not be able to cope with different tasks; however, you could indeed use LL techniques in MLOps.

",2444,,,,,11/12/2021 13:22,,,,1,,,,CC BY-SA 4.0 32380,1,,,11/12/2021 14:15,,0,35,"

In the following structure, when we use MADE, due to the constraints for making a masked autoencoder, it seems that some inputs do not have any connection to the next layer, and there is also an output that does not have a connection to the previous layer!

Can someone clarify?

",32974,,2444,,11/13/2021 12:14,11/13/2021 12:14,Masked Autoencoder Structure,,0,2,,,,CC BY-SA 4.0 32383,1,,,11/12/2021 18:16,,2,35,"

I have a model that uses an STN module for number detection and a Mean Squared Error loss. But I would like to replace it with GIoU, because MSE doesn't take into account how much of the target area has been detected, only how close individual coordinates are to the target. But I wonder if this makes sense. Has anyone tried it, or has some insight?

",48816,,18758,,11/13/2021 8:55,11/13/2021 8:55,Can a GIoU loss (generalized intersection over union) be used after an STN module (spatial transformer network)?,,0,0,,,,CC BY-SA 4.0 32385,1,,,11/12/2021 22:13,,0,167,"

For conventional 'Neural Networks', the weights simply act as a transformation in highly multi-dimensional space; for a forward pass, the output is always the same since there is no stochastic weighting component in the process.

However, in Transformers (self-attention-based encoder-decoder architectures, to be specific) we get different outputs for the same prompt (assuming $T > 0$). This doesn't make sense to me because the set of weights is always static, so the probability distribution produced should be the same, and simple decoding should therefore yield the same output.

However, in practice, we observe that it is not actually the case. Any reasons why?

",36322,,2444,,11/13/2021 12:22,11/13/2021 12:22,Why do language models produce different outputs for same prompt?,,1,0,,,,CC BY-SA 4.0 32386,2,,32385,11/12/2021 23:15,,2,,"

Language models produce a probability distribution over a set of words. You determine the next word by sampling from this distribution. So, determining the next word is stochastic even though the distribution is the same given the initial prompt.
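
A minimal NumPy sketch of this sampling step, including the temperature $T$ mentioned in the question (the function and variable names are illustrative):

import numpy as np

def sample_next_word(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Sample a word index from the model's output logits.

    With temperature > 0 the choice is stochastic, so repeated calls with the
    same logits (same prompt, same weights) can return different words.
    """
    scaled = logits / temperature
    scaled = scaled - scaled.max()   # subtract the max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))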

",32621,,,,,11/12/2021 23:15,,,,2,,,,CC BY-SA 4.0 32388,1,,,11/13/2021 12:59,,2,145,"

I am reading the book titled Artificial Intelligence: A Modern Approach 4th ed by Stuart Russell and Peter Norvig. According to the book, the complexity of uniform-cost search is $$ O(b^{1+\lfloor{C^*/\epsilon}\rfloor}), $$

where $b$ is the branching factor (i.e. the number of available actions in each state), $C^*$ is the cost of the optimal solution, and $\epsilon > 0$ is a lower bound of the cost of each action.

My question is: Why is there a 1 in the formula?

For example, suppose in the following tree, the red node is the initial state and the green one is the goal state, and two actions are needed to reach the goal state from the initial state. If the cost of each action is equal to $\epsilon = 1$, then $C^*$ will be $2$. Therefore, the complexity should be $O(b^{2})$. But, from the above formula, the complexity will be $O(b^{3})$.

P.S. I know there is a similar question on Stack Overflow and I have read the answers, but there is disagreement between the answers about the 1.

",50418,,2444,,11/13/2021 14:13,12/26/2022 22:00,Why is there a 1 in complexity formula of uniform-cost search?,,1,0,,,,CC BY-SA 4.0 32389,1,,,11/13/2021 13:00,,1,134,"

In the paper The Perceptron: A probabilistic model for information storage and organization in the brain, Rosenblatt used probability theory to model his perceptron.

My professor told me that today's neural networks are not modeled with probability theory anymore. Why is that?

",50933,,2444,,11/13/2021 21:56,11/22/2021 1:25,Why are today's neural networks not modeled with probability theory?,,1,1,,,,CC BY-SA 4.0 32390,2,,32389,11/13/2021 14:28,,1,,"

Your professor is wrong (or maybe you misunderstood what he wanted to say, or he did not explain correctly what he wanted to say). It may be a good idea to ask your professor for some clarification and tell him about the info given in this answer.

Probability theory is widely used to model problems in machine learning, including in the context of today's neural networks. There are many examples of research papers or books where you will see probabilities and probability distributions. For example, you can take a look at the neural machine translation paper.

More importantly, we often formulate the problem of learning (in the context of neural networks) as a minimization of an objective function, which is equivalent to the maximization of the likelihood of the parameters (given the data). It's also often the case that neural networks produce a probability vector (from the so-called logits, often by using a softmax function).

There are many other (maybe clearer) examples of the use of probability theory to model problems in machine learning. For example, you can take a look at Bayesian neural networks, variational auto-encoders, or generative adversarial networks. You will see a lot of probabilities and probability distributions in these linked papers.

You may also be interested in this answer.

",2444,,2444,,11/13/2021 14:36,11/13/2021 14:36,,,,0,,,,CC BY-SA 4.0 32392,2,,3903,11/14/2021 0:19,,0,,"

Good versus bad? From an imaging standpoint, humans/biologicals tend to notice details that were not "imagined". A "good" feedback learning loop might be one that refines the correctness of what is imagined, using efforts to reduce the number or amount of discrepancies between the imagined or predicted environment and perceived reality, as measured against "beneficial" outcomes. I'm not sure what an AI would think of as good/bad in the abstract. In my limited consideration of imagination in humans, it seems most useful for recognizing things that carry more meaning than the general background. As the imagination becomes better able to predict a more "real" outcome, it is more useful to the individual in predicting a more correct outcome, and more useful for identifying things that diverge from the expected. Sort of simplistic, I realize, but intelligence might be as simple as refining the ability to imagine a result and judging which of the discrepancies are the more meaningful.

",50949,,50949,,11/14/2021 16:46,11/14/2021 16:46,,,,1,,,,CC BY-SA 4.0 32393,1,33872,,11/14/2021 10:17,,4,66,"

I am reading these slides. On page 38, the update for the parameters for the linear function approximation of TD(0) is given. I have a doubt regarding this. The cost function (RMSE) is given on page 37.

My doubt is: why is the gradient of $\hat v(S^{\prime}, \mathbf w)$ with respect to parameters $w$ not considered?

I think the parameter update should be: $$\mathbf w \leftarrow \mathbf w +\alpha [R + \gamma \hat v(S', \mathbf w) - \hat v(S, \mathbf w)] (\nabla \hat v(S, \mathbf w)- \gamma \nabla \hat v(S', \mathbf w))$$ Instead in the material it is given as:- $$\mathbf w \leftarrow \mathbf w +\alpha [R + \gamma \hat v(S', \mathbf w) - \hat v(S, \mathbf w)] \nabla \hat v(S, \mathbf w)$$

Could someone please explain?

",50787,,47080,,12/23/2021 13:38,12/23/2021 13:38,"In TD(0) with linear function approximation, why is the gradient of $\hat v(S^{\prime}, \mathbf w)$ wrt parameters $\mathbf w$ not considered?",,1,0,,,,CC BY-SA 4.0 32396,1,32429,,11/14/2021 19:44,,0,277,"

I am trying to make a model that uses a Transformer to see the relationship between several data vectors, but the order of the data is not relevant in this case, so I am not using the Positional Encoding.

Since the performance of Transformer models is usually improved quite a lot by this part, do you think that, by removing it, I am limiting the potential of the Transformer, or is it correct to do so?

",22930,,2444,,11/30/2021 15:49,11/30/2021 15:49,Is Positional Encoding always needed for using Transformer models correctly?,,1,1,,,,CC BY-SA 4.0 32397,2,,32344,11/14/2021 20:58,,1,,"

A different, variable reward structure might help. You could try a combination of airspeed, pitch, roll and whether it is hovering in the air or not in each timestep as a representation for the reward.

Maybe airspeed should, in expectation, contribute up to 30% of the reward, pitch up to 15%, roll up to 15% and being in the air up to 40%. This would explicitly reward motion, trying new movements as well as being reasonable, i.e. hovering in the air. You can create a new formula for the reward around this premise, even use some logarithms or other fancy functions that you see fit.

The important thing is that, this way, the reward in each timestep varies based on the four aforementioned variables and the chosen formula. It is not 0 or 1 like your previous structures. There is also feedback in each timestep on what it is doing right and how well it is doing, not just at the last timestep as in the second structure.
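
For illustration, one possible Python sketch of such a reward function; the weights (0.3/0.15/0.15/0.4) and the normalisation constants are arbitrary assumptions and would need to be tuned for the actual environment:

def reward(airspeed, pitch, roll, in_air, max_airspeed=10.0, max_angle=0.785):
    """Per-timestep reward combining airspeed, pitch, roll and being airborne."""
    speed_term = 0.30 * min(airspeed / max_airspeed, 1.0)            # rewards motion
    pitch_term = 0.15 * (1.0 - min(abs(pitch) / max_angle, 1.0))     # rewards level pitch
    roll_term  = 0.15 * (1.0 - min(abs(roll) / max_angle, 1.0))      # rewards level roll
    air_term   = 0.40 * float(in_air)                                # rewards staying in the air
    return speed_term + pitch_term + roll_term + air_term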

",36447,,36447,,11/14/2021 21:15,11/14/2021 21:15,,,,1,,,,CC BY-SA 4.0 32399,1,32401,,11/14/2021 22:59,,0,32,"

Consider the following remark about writing gradients from the chapter named Vector Calculus from the textbook titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.

Remark (Gradient as a Row Vector). It is not uncommon in the literature to define the gradient vector as a column vector, following the convention that vectors are generally column vectors. The reason why we define the gradient vector as a row vector is twofold: First, we can consistently generalize the gradient to vector-valued functions $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ (then the gradient becomes a matrix). Second, we can immediately apply the multi-variate chain rule without paying attention to the dimension of the gradient.

The above remark implicitly says that there is no standard in writing the gradients. So, I can write a gradient of a scalar-valued multivariate function either as a column vector or as a row vector.

But, I want to know which is more common in the AI community?

",18758,,2444,,12/1/2021 8:17,12/1/2021 8:17,Which is more popular/common way of representing a gradient in AI community: as a row or column vector?,,1,0,,,,CC BY-SA 4.0 32401,2,,32399,11/15/2021 4:58,,1,,"

The issue doesn't come up terribly often. If you are only dealing with vectors, everything is either a row or column vector. It makes no difference which it is.

A more relevant issue is whether one uses the so-called "numerator layout" or "denominator layout". In the numerator layout, $\partial f / \partial x$ is in $\mathbb{R}^{n\times m}$, where $f \in \mathbb{R}^n$, $x \in \mathbb{R}^m$. The denominator layout is the other way around ($\partial f / \partial x$ is in $\mathbb{R}^{m \times n}$).

If you think of the gradient as a row vector, then you are using numerator layout.

I think the numerator layout is overall more common. But there is no real convention, and if the paper content deals with Jacobian matrices it will definitely be mentioned somewhere in the text which layout they use.

",47080,,,,,11/15/2021 4:58,,,,0,,,,CC BY-SA 4.0 32402,1,32407,,11/15/2021 9:18,,3,383,"

I've been playing around with TCNs lately and I don't understand one thing. How is the receptive field different from the input size?

I think that the receptive field is the time window that TCN considers during the prediction, so I guess the input size shall be equal to it.

According to the WaveNet paper, I cannot see a reason why it should be otherwise. I'm using TensorFlow with this custom library.

Please help me understand.

",46718,,2444,,11/15/2021 9:27,11/15/2021 15:07,"In a Temporal Convolutional Network, how is the receptive field different from the input size?",,1,0,,,,CC BY-SA 4.0 32403,1,,,11/15/2021 11:01,,4,451,"

I have a background in mathematics and I am accustomed to reading papers with lemma and proofs. When I see a deep learning paper, they seem to be of practical nature.

How can I improve my reading and understanding of deep learning papers?

To truly understand, should I have to implement the code? What is the best approach (if any)?

",50963,,2444,,11/15/2021 17:37,11/15/2021 17:37,How should I read a deep learning paper?,,2,3,,,,CC BY-SA 4.0 32404,1,32405,,11/15/2021 11:53,,4,245,"

Across multiple pieces of literature describing MLPs or while describing the universal approximation theorem, the statement is very specific on the activation function being non-polynomial.

Is there a reason why it cannot be a higher-order polynomial? Is it just an attempt to use the least complex solution or we really cannot use higher-order polynomial?

I can understand the reason for the non-linear, but I am clueless about the non-polynomial requirement.

",50966,,2444,,11/15/2021 12:25,11/15/2021 15:02,Why does the activation function for a hidden layer in a MLP have to be non-polynomial?,,1,0,,,,CC BY-SA 4.0 32405,2,,32404,11/15/2021 13:35,,3,,"

The paper Multilayer feedforward networks with a nonpolynomial activation function can approximate any function (by Leshno et al., 1993) provides a theorem claiming this and the (quite long) proof of the theorem in the appendix.

After a quick reading, it seems to me that they do not provide a very intuitive explanation of why the non-polynomiality of the (bounded, non-constant, and not necessarily continuous) activation function is necessary and sufficient (it's an if and only if result, see the last paragraph on p. 4) for a single-layer neural network to approximate any continuous function.

The theorem (p. 10) formally states that the set of all possible functions that the neural network can compute, denoted by $\Sigma_n$, is dense in $C(\mathbb{R}^n)$, the set of continuous functions from $\mathbb{R}^n$ to $\mathbb{R}$ (and being dense in $C(\mathbb{R}^n)$ is the equivalent mathematical statement to "neural networks can approximate any continuous function"). To understand this, you need to understand what a dense (sub)set is. To understand that, you need to understand what a closure of a set $S$ is: intuitively, it's the set of all points $S$ plus the points near the set $S$.

To prove this theorem (p. 12), they proceed in 7 steps, so it's a long proof.

For example, in step 1, they show (or just state) that, if the activation function $\sigma$ is a polynomial, then $\Sigma_n$ is not dense in $C(\mathbb{R}^n)$. They conclude that $\Sigma_n$, with such an activation function, would be a set of polynomials, which is not dense in $C(\mathbb{R}^n)$ (not sure why this is the case, but there is an old question here that asks exactly this; I think that understanding this would be a great step for intuitively understanding the theorem).

I don't plan to go over all the steps now, but, if you spend some time reading this paper, you should have the answer of why the non-polynomiality of the activation function is necessary and sufficient for the neural network to approximate any continuous function of the form $f : \mathbb{R}^n \rightarrow \mathbb{R}$. If that is not very useful, you could try reading the other related (but even longer) paper Approximation theory of the MLP model in neural networks (by Pinkus, 1999).

So, I wouldn't say that non-polynomiality is a complexity requirement, but really a theoretical requirement for approximating continuous functions.

(By the way, I think there's a typo on page 6. They write $f_\omega : \mathbb{R} \rightarrow \mathbb{R}^n$ to denote the function the neural network computes, but I am pretty sure they meant $f_\omega : \mathbb{R}^n \rightarrow \mathbb{R}$; in fact, previously, they assume that the neural network has $n$ inputs and $1$ output).

",2444,,2444,,11/15/2021 15:02,11/15/2021 15:02,,,,0,,,,CC BY-SA 4.0 32406,2,,32403,11/15/2021 13:56,,3,,"

I have some experience reading research papers. However, in my view, there is no single answer to this question (apart from this answer I am giving you, i.e. "it depends"). The answer to your question depends on

  1. your background knowledge/education

    • If you don't know much about the specific topic in the paper, you may need to study at least briefly the prerequisites for reading the paper, otherwise, you won't understand much about the paper.
  2. why you are reading the paper

    • Are you just interested in or curious about the topic in the paper? Then maybe you can only quickly read it, or only read specific sections, such as the ones that introduce the technique (abstract, introduction, and maybe conclusion) and skip all the math; the figures are sometimes also insightful.

    • Are you doing (serious) research on a related topic? If yes, you will probably need to read the paper multiple times (at least the sections that you don't fully understand). It may also be useful to look at implementations of the approach. If they don't exist, you may consider implementing the approach yourself.

    • Do you need to present the paper? In this case, you may also need to read the paper multiple times, understand well the figures (because you will probably use them in your slides: an image is worth 1000 words, etc.)

  3. how much time you have to read the paper

Note that these guidelines should also be applicable to other cases (i.e. when you're reading other types of research papers that involve math).

You may also be interested in the paper How to Read a Paper (by Keshav). It provides a few tips that could be useful.

",2444,,2444,,11/15/2021 14:02,11/15/2021 14:02,,,,0,,,,CC BY-SA 4.0 32407,2,,32402,11/15/2021 15:07,,2,,"

As you can see in Fig. 2 of the WaveNet paper the receptive field is 5, but the input size is larger (16). The receptive field defines what a single output neuron can see (see arrows in Fig. 2).

The receptive field could also be greater than the input, e.g. if you want to use or you only have the last 12 time steps and use the following structure (WaveNet paper, Fig 3), which can cover different powers of two depending on the number of layers.

If you want to calculate not only the last output neuron, it can make sense that the input size is larger than the receptive field, as the outputs before use also older inputs (see dashed lines).
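
If it helps, the receptive field of a stack of (non-strided) dilated causal convolutions can be computed with the standard formula below; e.g. the dilated WaveNet-style stack with kernel size 2 and dilations 1, 2, 4, 8 sees 16 time steps, while four non-dilated layers of kernel size 2 see 5, matching the figures mentioned above:

def receptive_field(kernel_size: int, dilations) -> int:
    """Receptive field of a stack of dilated 1D convolutions (one layer per dilation)."""
    return 1 + sum((kernel_size - 1) * d for d in dilations)

# receptive_field(2, [1, 2, 4, 8])  -> 16  (dilated stack, as in Fig. 3)
# receptive_field(2, [1, 1, 1, 1])  -> 5   (non-dilated stack, as in Fig. 2)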

",8035,,,,,11/15/2021 15:07,,,,3,,,,CC BY-SA 4.0 32408,2,,32372,11/15/2021 15:42,,1,,"

Since the question may not be answerable unambiguously in general, I will use the given example as a guide. As you correctly write, a large dimensionality leads to a very large solution space because of the curse of dimensionality.

However, whether one restricts this solution space by allowing only discrete values $[0.0, 0.1, 0.2, ..., 1.0]$ for the three actions or by using the resulting combinations as a list of options (one dimension) makes no difference with respect to solution space coverage. Depending on which inputs the model receives, however, it would be conceivable that the separate variables (option 1) could be more easily assigned and learned, e.g. if there are sensors for the state, temperature, etc. of each of the 3 heat storages as inputs. Option 2 would still require the model to implicitly learn one-dimensional coding to target specific heat storages.

",8035,,,,,11/15/2021 15:42,,,,6,,,,CC BY-SA 4.0 32409,2,,32097,11/15/2021 16:34,,3,,"

You are referring to the first and very important step in a machine learning process called data preprocessing. Referring to this article, inside data preprocessing there are many smaller processes that deal with features: feature extraction, feature selection, feature aggregation and feature encoding, to name a few.

The idea of creating new features out of raw features is not new, but rather a well-known concept in machine learning called feature extraction. Suppose you decide to create the new $X_1 / X_2$ feature and add it to the raw set of features $\{X_1, X_2\}$ because you are completely sure that it has an effect on $Y$; then that is a perfectly reasonable example of feature extraction.
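
For example, with tabular data in pandas, this manual feature extraction is just one extra column (the toy data below is made up, and it assumes $X_2$ is never zero):

import pandas as pd

df = pd.DataFrame({"X1": [2.0, 6.0, 9.0], "X2": [1.0, 3.0, 4.5]})  # toy data

# Add the ratio as an extra input feature
df["X1_over_X2"] = df["X1"] / df["X2"]

# The model is then trained on [X1, X2, X1_over_X2] instead of [X1, X2] only.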

Referring to the article above on feature extraction, notice that nowadays there is some growing consensus that, when using deep neural networks, the first hidden layers of the network can serve the purpose of automatic feature extraction, without you manually adding the new features to the raw set of features. This premise stems from the fact that deep neural networks are essentially non-linear function approximators, so the first hidden layers can approximate any feature extraction that you do manually.

You can understand that, in order to do automatic feature extraction, you might need more hidden layers and more neurons in each layer, to increase the computing power of your model. There is also the downside that you cannot know for sure what is happening in the first hidden layers, i.e. whether they are doing feature extraction the way you intended or not.

In your case, my advice would be to do the manual addition of $X_1 / X_2$ to the set of features if you are only adding one such fraction or a handful, and if you are completely sure there is a correlation between those fractions and $Y$. If, on the other hand, you are adding more than a handful of such fractions, then it does not make sense to add them all: it is hard to be completely sure that all the fractions have an effect on $Y$, and it would also exhaust memory by having a lot of features in the input layer. In that case, I would suggest that you skip manual feature extraction, increase the number of hidden layers and let the network do its auto-magic.

",36447,,36447,,11/15/2021 18:15,11/15/2021 18:15,,,,2,,,,CC BY-SA 4.0 32410,2,,32403,11/15/2021 16:50,,4,,"

Adding something to nbro's answer: from my personal experience, there are also some hints that can quickly tell you whether you're dealing with a good machine learning paper, i.e. one worth reading in its entirety or not.

In random order:

  • Clear contribution description: machine learning and artificial intelligence in general are both broad fields. A paper can be about several things, and it should already be clear from the title what the main contribution is: a new model architecture? A new pre/post-processing step? A new loss function? If it's not clear after reading the abstract, chances are high that this point will not be clear after reading the whole paper either.

  • Clear architecture/algorithm description and visualization: images and code snippets make a huge difference. As you pointed out, machine learning is mostly an applied field, hence having a code snippet or a clear list of implementation steps reduces the overhead of thinking about how to turn the math into efficient code, and it also reduces the chances of making interpretation mistakes when the steps are unclear or taken for granted. Since you're from a mathematical background, you are probably experiencing the opposite feeling and wondering why there's not that much math inside. The point is that most machine learning papers are structured like experimental papers: you have hypotheses, not theorems to prove; you run experiments to test them and you describe the results at the end.

  • Code availability: unfortunately still not such a common practice, but making your code available is fundamental, not just to make other people's lives easier, but to guarantee the reproducibility of the published results. Moreover, machine learning is characterized by many subtle and arbitrary choices, especially when it comes to hyperparameters, which are often not reported in the papers themselves, and, when that's the case, looking at the code becomes the last resort for finding that information.

  • Proper benchmarking: evaluating the impact of a new loss/architecture is always hard. Many papers just report tables with random scores like "97% accuracy", which means nothing when not compared to other models. A good paper always reports the state-of-the-art (SOTA) scores and tests the proposed improvements ON EXACTLY THE SAME DATA. Furthermore, a paper should ideally report mean scores over several training runs. Unfortunately, due to the expensive hardware and the long time required to train even a single model, this is almost never done.

  • Short but clear SOTA analysis: it is super hard to stay on track with the state of the art when it comes to machine learning, since dozens of papers are published every month. For this reason, the literature research section should be concise and point to works that are as close as possible to what is going to be described (and improved upon) in the paper; otherwise, you know you're reading a survey instead.

",34098,,2444,,11/15/2021 17:34,11/15/2021 17:34,,,,0,,,,CC BY-SA 4.0 32411,1,32412,,11/15/2021 17:00,,2,260,"

I was trying to code the on-policy Monte Carlo control method. The initial policy chosen needs to be an $\epsilon$-soft policy.

Can someone tell me how to code an $\epsilon$-soft policy?

I know how to code the $\epsilon$-greedy policy. In $\epsilon$-soft, there are inequalities in place of equalities, which is the issue when coding the $\epsilon$-soft policy.

",50002,,2444,,11/16/2021 11:29,12/7/2021 15:32,How to code an $\epsilon$-soft policy for on-policy Monte Carlo control?,,1,0,,,,CC BY-SA 4.0 32412,2,,32411,11/15/2021 17:33,,1,,"

You cannot code an $\epsilon$-soft policy directly, because it is not specific enough.

A policy is $\epsilon$-soft provided that there is at least a probability of $\frac{\epsilon}{|\mathcal{A}|}$ for choosing any action, where $\mathcal{A}$ is the set of all possible actions.

I know how to code the $\epsilon$-greedy.

Then you already know how to code the most commonly-used $\epsilon$-soft policy, because an $\epsilon$-greedy policy is an $\epsilon$-soft policy.

there are inequalities in place of equalities which is the issue for coding the $\epsilon$-soft

That is correct. In fact $\epsilon$-soft can be thought of as a constraint or test. So you could write some code that tested whether any policy was an $\epsilon$-soft policy for any given value of $\epsilon$. Or you could write code that determined the value of $\epsilon$ for any policy.

Slightly harder would be code that forced a supplied policy to meet constraints of being $\epsilon$-soft, because adjusting any low probabilities to be high enough would mean reducing other probabilities, and there are multiple ways you could do that.

However, a really simple way to make any starting policy $\pi$ into an $\epsilon$-soft variant is to make the policy choice in 2 steps - first step choose between the original policy with probability $(1-\epsilon)$, and with probability $\epsilon$ choose a fixed policy with equal probability for each action. Second step, evaluate whichever policy the first step chose to determine the action.
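
A minimal Python sketch of that two-step construction (here the base policy pi is assumed to be a deterministic function mapping a state to an action index):

import numpy as np

rng = np.random.default_rng()

def epsilon_soft_action(pi, state, n_actions, epsilon):
    """With probability epsilon pick a uniformly random action,
    otherwise follow the base policy pi. Every action then has
    probability at least epsilon / n_actions, so the result is epsilon-soft."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return pi(state)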

",1847,,1847,,12/7/2021 15:32,12/7/2021 15:32,,,,0,,,,CC BY-SA 4.0 32413,2,,32097,11/15/2021 19:27,,1,,"

The ideal advice is to feed the raw data to the neural network and let it make its own inferences.

Considering you have expert knowledge that $X_1/X_2$ has an effect on $Y$, here the new feature ($X_1/X_2$) is referred to as a derived feature.

However, there are a few advantages that can help you consider using derived features like $X_1/X_2$ in your case:

  • Being a domain expert or SME choosing $X_1/X_2$ as an important feature, you can ideally accelerate the training process.
  • It is highly advantageous when you are short on CPU time.
  • Most neural networks calculate sums fed through activation functions; estimating a ratio or a product requires a lot of neurons, especially if $X_1/X_2$ is an important feature.
  • You can use feature pruning techniques to eliminate redundant features.
  • One major drawback of neural networks is that they are often considered black-box models: the approximation a neural network learns doesn't give any insight into the form of the function $f$. You can make use of feature selection algorithms to pare down the feature space.
  • A large number of parameters in the model also increases the risk of overfitting the network. You can, however, overcome this with good regularisation methods. You can also use feature pruning to avoid overfitting when you have limited data.

I found a paper in the field of medicine where features derived based on expertise accurately identified the required states.

Heart rate variability-derived features based on deep neural network for distinguishing different anaesthesia states

The incorporation of four HRV-derived features in the time and frequency domain and a deep neural network could accurately distinguish between different anaesthesia states.

",48391,,48391,,11/23/2021 6:45,11/23/2021 6:45,,,,0,,,,CC BY-SA 4.0 32415,1,33918,,11/15/2021 23:40,,1,177,"

Are there any resources (either books, articles, or tutorials) that introduce the basics of online machine learning?

For example, this website has nice lecture notes (from lec16) on some of the aspects. Or this book.

I can't seem to find many resources on this. I'm trying to understand the basics, not read research papers.

",27749,,2444,,12/26/2021 18:31,12/28/2021 18:02,Are there any resources that introduce the basics of online machine learning?,,2,0,,,,CC BY-SA 4.0 32416,1,32430,,11/16/2021 4:28,,0,62,"

My goal is to identify the horse in a photo. I'm dealing with about 500 unique horses.

My feeling is that the best way to distinguish one horse from another is by its face. So I trained Yolov5 successfully to find faces at reasonable angles.

I'd like to take this a step further, and teach it to identify which horse's face it sees.

I'm new to this sort of thing (though not programming in general), so the way I assume I should approach this is to add an additional label like face_horsename, with the unique name for the horse (or really, a unique reference to a database of horses).

Is that the right approach? It seems like the Yolo file format doesn't allow for multiple labels for the same box, so my guess is I should just make 2 rectboxes that are identical, but both point to different labels.

Frankly, I'd like to take it even further and label the same thing with the type of "blaze" of the horse's face, and its proper name for the horse's color. So now I'm talking about 4 labels.

Is that the right approach (duplicate boxes with unique labels)?

",50978,,18758,,11/16/2021 11:11,11/17/2021 2:21,Multiple labels for the same rectbox?,,1,0,,,,CC BY-SA 4.0 32418,1,,,11/16/2021 10:52,,0,23,"

We have a legacy code solution in C#. We have to change the code so that it fetches internal data via APIs and not via DB calls.

E.g. if the current code GETS a Payment object from the DB, we have to replace the logic so that the code calls the GET PAYMENT API instead.

Since there are 100s of code files and multiple DB hits in a single file, doing this manually is not feasible.

I was thinking of building an AI-based tool that would take my code file as input, point out where I would need to replace the existing code, and suggest what API to call at that place.

I have never worked on AI and it would be great if anyone could suggest which algorithm to consider for my tool and also how I should proceed with solving the above problem.

",50985,,,,,11/16/2021 10:52,Which AI algorithm to use for identifying API for a specific use from a list of APIs?,,0,6,,,,CC BY-SA 4.0 32419,1,,,11/16/2021 11:15,,2,34,"

I have a mixture of real (float) and categorical features to use as input in a neural network. I encode the categorical features using one-hot / multi-hot encoding.

If I want to use all the features as input what is usually/empirically the best practice:

  1. Concatenating all the features - sparse one-hot/multi-hot vectors and float-valued features - into one vector which is part dense, part sparse, and using this as input, or

  2. Splitting the sparse one-hot/multi-hot vectors from the dense features and using an extra separate layer for the sparse features to make them dense before concatenating them with the other already dense features.

  3. Same as 2 but maybe using a separate layer for the dense features too so we concatenate "embeddings" instead of features and embeddings.

What, in your experience / opinion, should I do, trial-and-error aside?

",46020,,46020,,11/16/2021 11:30,11/16/2021 11:30,Is it a good practice to split sparse from dense features?,,0,0,,,,CC BY-SA 4.0 32420,1,,,11/16/2021 12:08,,2,43,"

I'm trying to train an agent on a self-written 2d env, and it just doesn't converge to the solution.

It is basically a 2d game where you have to move a small circle around the screen and try to avoid collisions with randomly moving "enemy" circles and the edge of the screen. The positions of the enemies are initialized randomly, at a minimum distance of 2 diameters from the enemy. The player circle has $n$ sensors (lasers) that measure the distance and speed of the closest object found.

The observation space is continuous and is made of concatenated distances and speeds, hence has the dimension of $\mathbb{R}^{n * 3}$.
I scale the distances by the length of the screen diagonal.

The action space is discrete (multidiscrete in my implementation) $[dx, dy] \in \{-1, 0, 1\}$

The reward is +1 for every game step made without collisions.

I use the PPO implementation from Stable Baselines, but the return variance just gets bigger over training. In addition to that, the agent hasn't run away from the enemies even once. I even tried setting a negative reward, to test whether it can learn suicidal behaviour, but no results either.

I thought maybe it's just possible for some degenerate policies, like going to the corner of the screen and staying there, to gain big returns, and that jeopardizes the training. Then I increased the number of enemies, thinking that it would force the agent to actually learn to avoid the enemies, but it didn't work either.

I'm really out of ideas at this point and would appreciate some help on debugging this.

",50986,,50986,,11/16/2021 19:35,11/16/2021 19:35,How to fix high variance of the returns on a 2d env?,,0,6,,,,CC BY-SA 4.0 32421,1,32425,,11/16/2021 14:56,,5,309,"

I am working on a project where I have to classify around 1000 unique objects. I'm trying to plan how much training data I will need to collect. I was planning on using ResNet-50. Is there any way I can estimate the number of photos I should plan to collect ahead of time (assuming I will collect an equal number of photos of each class)?

",41421,,2444,,11/16/2021 22:19,11/16/2021 22:30,How can I estimate how many photos I need to train ResNet-50 for image classification?,,1,0,,,,CC BY-SA 4.0 32422,1,32465,,11/16/2021 18:14,,2,119,"

I'm a senior in a bachelor Multimedia and Creative Technology. My experience is mostly full-stack web app development.

For my bachelor's thesis, I need to do research in a subject I have no experience in. I want to build an application where students can exercise HTML and CSS. Teachers can upload simple code pieces (e.g. h1, h2, and list with elements) with difficulty levels and students can try to copy these exercises with a code editor on the web with live preview.

My question:

  • Is it possible to use AI for grading these "copies", give the students scores, and, based on these scores, adjust the difficulty level so the next exercise is harder or easier?

  • And if so, could you put me in the right direction?

",50992,,2444,,11/19/2021 11:43,11/19/2021 15:26,Can AI be used for grading code copy exercises and adjust difficulty based on these scores?,,1,0,0,,,CC BY-SA 4.0 32423,2,,32313,11/16/2021 19:38,,1,,"

If $u$ is a vector, the direction pointed by the vector is defined as $\dfrac{u}{\lVert {u}\rVert}$ where $\lVert \cdot \rVert$ is the 2 norm (euclidean norm).

",47080,,,,,11/16/2021 19:38,,,,4,,,,CC BY-SA 4.0 32425,2,,32421,11/16/2021 22:08,,5,,"

What you want to calculate/estimate is known as the sample complexity in computational learning theory. If you knew the VC dimension of the neural network, you may be able to estimate the sample complexity, but your estimate (bound) may not be very tight anyway (maybe because the estimate of the VC dimension is also not tight). Here is more info about the VC dimension and the VC dimension of a few neural networks.

If we were able to easily compute (an accurate estimate of) the sample complexity in all cases, this would be extremely useful, as we could rule out (or not) the size of the training dataset as the cause of under-fitting/over-fitting.

Nowadays, I think that most people do not usually estimate the sample complexity (I could be wrong, i.e. there may be simple ways to do this approximately, apart from saying that you probably need more training data points than the number of parameters, which is what many practitioners will tell you, maybe misleadingly), but, instead, collect as much data as possible, then train the neural network with that data; if it performs poorly on the test dataset, then you may try to collect more data (given that it's an indication that the neural network did not learn the target function); if you see that the test performance initially increases but then decreases while the training performance is always increasing, it may be an indication of over-fitting, so you may want to use a regularisation technique (like dropout), which you should probably use anyway, especially when you have a small dataset.

I also want to point out that, with the discovery of the phenomena double descent and grokking, these traditional guidelines or practices (e.g. early stopping) may sometimes be misleading (I don't know when or why), i.e. just keep in mind that sometimes they may not be applicable.

",2444,,2444,,11/16/2021 22:30,11/16/2021 22:30,,,,1,,,,CC BY-SA 4.0 32426,1,,,11/16/2021 22:45,,4,468,"

I came across the "reparametrization trick" for the first time in the following paragraph from the chapter named Vector Calculus from the textbook titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.

The Jacobian determinant and variable transformations will become relevant ... when we transform random variables and probability distributions. These transformations are extremely relevant in machine learning in the context of training deep neural networks using the reparametrization trick, also called infinite perturbation analysis.

The trick has been used in the context of neural network training in the quoted paragraph. But when I search for the reparametrization trick, I find it only or mostly in the context of training autoencoders.

In the context of training a traditional deep neural network, is the trick useful?

",18758,,18758,,11/17/2021 8:02,1/18/2022 18:00,"Can I apply reparametrization trick on ""any"" deep neural network?",,2,0,,,,CC BY-SA 4.0 32427,1,,,11/17/2021 1:06,,2,218,"

I'm interested in using ResNet-50 to classify images of objects for around 1000 unique classes. I'm wondering if there is any way to estimate how many unique angles I need in my training set to classify images that can be taken from any angle. For example, if for a given object I had 500 training images from directly the front and 500 training images from directly the top, I'd have 2 unique angles.

A model trained with only those 2 unique angles probably wouldn't be able to classify the same object if it was given a photo from the top right looking down.

Is there any way to figure out how many unique angles I would need in my training set to classify images that could be taken from any angle? If I had 12 unique angles (top, bottom, front, back, left, right, front-left, front-right, front-top, front-bottom, back-left, back-right, back-top, back-bottom) would I then be able to classify images of any arbitrary angle?

To clarify, if I had 12 unique angles, that would mean I would have many photos from each of the 12 angles, but the 12 angles would all be exactly the same with no variation. I.E. top would be exactly a 90-degree angle towards the object on the Z-axis and 0-degree angles on the X and Y axis, for many photos.

",41421,,2444,,11/17/2021 14:11,8/16/2022 20:34,How many unique angles of an object do you need in your image training set in order to correctly classify it?,,2,0,,,,CC BY-SA 4.0 32428,1,,,11/17/2021 1:11,,5,1886,"

Based on some preliminary research, the conjugate gradient method is almost exactly the same as gradient descent, except the search direction must be orthogonal to the previous step.

From what I've read, the idea tends to be that the conjugate gradient method is better than regular gradient descent, so if that's the case, why is regular gradient descent used?

Additionally, I know algorithms such as the Powell method use the conjugate gradient method for finding minima, but I also know the Powell method is computationally expensive in finding parameter updates as it can be run on any arbitrary function without the need to find partial derivatives of the computational graph. More specifically, when gradient descent is run on a neural network, the gradient with respect to every single parameter is calculated in the backward pass, whereas the Powell method just calculates the gradient of the overall function at this step from what I understand. (See scipy's minimize, you could technically pass an entire neural network into this function and it would optimize it, but there's no world where this is faster than backpropagation)

However, given how similar gradient descent is to the conjugate gradient method, could we not replace the gradient updates for each parameter with one that is orthogonal to its last update? Would that not be faster?

",26726,,2444,,11/17/2021 10:09,11/29/2021 22:18,Why is gradient descent used over the conjugate gradient method?,,2,4,,,,CC BY-SA 4.0 32429,2,,32396,11/17/2021 1:35,,1,,"

Positional Encodings in Transformers exist to give the model some information about the position of the embedding. This makes sense in fields like NLP or Time Series Data, since the position(order) matters in this case.

However, since you say that order of the data is not relevant in your use case, positional encoding would not be necessary.

",50997,,,,,11/17/2021 1:35,,,,0,,,,CC BY-SA 4.0 32430,2,,32416,11/17/2021 2:21,,1,,"

Duplicate boxes with unique labels make the problem too complex for the model. What I suggest is you use the horse face detection model to get a bounding box of the horse's face, crop the face image and use that image as a training sample for a separate classification model.

I have seen this method used often in human identification, and dividing the tasks/models seems much more reasonable than trying to solve it in one model.

P.S. Just out of curiosity, you said that

the best way to distinguish one horse from another is by its face

Is this really true? Aren't there better features to use from the body?

",50997,,,,,11/17/2021 2:21,,,,2,,,,CC BY-SA 4.0 32432,2,,32428,11/17/2021 10:26,,6,,"

When dealing with optimization problems, a fundamental distinction is whether the objective is a (deterministic) function, or an expectation of some function. I will refer to these cases as the deterministic and stochastic setting respectively.

Machine learning problems are almost always in the stochastic setting. Gradient descent is not used here (and indeed, it performs poorly, which is why it is not used); rather, it is stochastic gradient descent, or more specifically, mini-batch stochastic gradient descent (SGD), that is the "vanilla" algorithm. In practice, however, methods such as ADAM (or related methods such as AdaGrad or RMSprop) or SGD with momentum are preferred over SGD.

The deterministic case should be thought of separately, as the algorithms used there are completely different. It's interesting to note that the deterministic algorithms are much more complicated than their stochastic counterparts. Conjugate gradient is definitely going to be better on average than gradient descent; however, it is quasi-Newton methods, such as BFGS (and its variants such as l-BFGS-b), or truncated Newton methods, that are currently considered state of the art.

Here's a NIPs paper that says CG doesn't generalize well. There are similar results for quasi-Newton methods. If you want something better than SGD, you should look into a method like ADAM, which was designed for the stochastic setting. CG and ADAM both use information from past gradient directions to improve the current search direction. CG is formulated assuming that the past gradients are the exact gradient. ADAM is formulated assuming that the past gradients are gradient estimates, which is the case in the stochastic setting.
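
To make the last point concrete, here is a rough numpy sketch (my own illustration; the constants follow the usual ADAM defaults) contrasting a plain SGD step, which trusts only the current noisy gradient estimate, with an ADAM-style step, which averages over past gradient estimates:

import numpy as np

def sgd_step(w, grad, lr=0.01):
    # Plain SGD: use the current (noisy) gradient estimate directly.
    return w - lr * grad

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # ADAM: keep running averages of past gradients (m) and squared
    # gradients (v), which smooths out the noise in stochastic estimates.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)   # bias correction
    v_hat = v / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v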

",47080,,47080,,11/17/2021 19:04,11/17/2021 19:04,,,,7,,,,CC BY-SA 4.0 32433,2,,32426,11/17/2021 10:45,,2,,"

The reparameterization trick (also known as the pathwise derivative or infinitesimal perturbation analysis) is a method for calculating the gradient of a function of a random variable. It is used, for example, in variational autoencoders or deterministic policy gradient algorithms.

If you plan on working with models that involve random variables, you definitely need to understand what the reparameterization trick is.

You will also need to understand the other method to calculate gradients for functions of random variables, which is known as the likelihood ratio (also known as the score function or the REINFORCE gradient).
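
As a rough illustration of the difference, here is a minimal PyTorch sketch of my own (the downstream function $(x-2)^2$ is arbitrary) showing the two estimators for a Gaussian sample:

import torch

mu = torch.tensor(0.5, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)

# Reparameterization trick (pathwise derivative): write the sample as a
# deterministic function of the parameters and independent noise, then backprop.
eps = torch.randn(())
x = mu + torch.exp(log_sigma) * eps
f = (x - 2.0) ** 2              # some downstream function of the sample
f.backward()
print(mu.grad, log_sigma.grad)

mu.grad = None
log_sigma.grad = None

# Score-function (likelihood-ratio / REINFORCE) estimator: no gradient flows
# through the sample itself; instead the log-probability is weighted by f(x).
dist = torch.distributions.Normal(mu, torch.exp(log_sigma))
x = dist.sample()               # sample() does not carry gradients
surrogate = dist.log_prob(x) * (x - 2.0) ** 2
surrogate.backward()
print(mu.grad, log_sigma.grad)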

If your definition of a "traditional" neural network does not involve random variables, then such a method is irrelevant.

",47080,,2444,,11/19/2021 11:36,11/19/2021 11:36,,,,7,,,,CC BY-SA 4.0 32434,1,,,11/17/2021 11:17,,0,79,"

I am a Ph.D. candidate in High Energy Physics and my research involves numerical simulations and data analysis. I am interested in learning Artificial Intelligence and Machine Learning from the basics so that I could implement them in my research. Due to the huge popularity of AI & ML, one can easily find a large number of articles on the topic on the internet, but finding a proper course (one that teaches the basics at the beginner level) has seemed nearly impossible to me.

It would be helpful if someone could suggest an introductory course (preferably video lectures) on machine learning, with the goal of applying the concepts to simulations of black hole environments.

",50999,,50999,,11/17/2021 13:17,11/17/2021 13:17,Which introductory courses (preferably video lectures) could I use to learn ML for applying ML to black hole simulations?,,1,10,,11/21/2021 6:32,,CC BY-SA 4.0 32435,2,,32434,11/17/2021 12:54,,2,,"

Cornell University offers lecture notes and videos for over thirty machine-learning-related courses at this link. CS4780 Introduction to Machine Learning in particular is a great resource (lecture notes, recordings).

The Visual Computing Group of TU Munich also offers lecture notes and videos on modern computer-vision-related topics at this link. Introduction to Deep Learning in particular is a great resource (lecture notes, recordings). There are also courses on advanced topics in deep learning, image segmentation, and 3D scanning, as well as a lot of publications on scene reconstruction, simulations and more.

",36447,,36447,,11/17/2021 13:02,11/17/2021 13:02,,,,0,,,,CC BY-SA 4.0 32436,1,,,11/17/2021 13:46,,0,2550,"

I have followed this guide as closely as possible: https://github.com/kingoflolz/mesh-transformer-jax

I'm trying to fine-tune GPT-J with a small dataset of ~500 lines:

You are important to me. <|endoftext|>
I love spending time with you. <|endoftext|>
You make me smile. <|endoftext|>
feel so lucky to be your friend. <|endoftext|>
You can always talk to me, even if it’s about something that makes you nervous or scared or sad. <|endoftext|>
etc...

Using the create_finetune_tfrecords.py script (from the repo mentioned above) outputs a file with 2 in it. I understand that means my data has 2 sequences.

I could really use some advice with the .json config file. What hyperparameters do you recommend for this small dataset?

The best I came up with trying to follow the guide:

{
  "layers": 28,
  "d_model": 4096,
  "n_heads": 16,
  "n_vocab": 50400,
  "norm": "layernorm",
  "pe": "rotary",
  "pe_rotary_dims": 64,

  "seq": 2048,
  "cores_per_replica": 8,
  "per_replica_batch": 1,
  "gradient_accumulation_steps": 2,

  "warmup_steps": 1,
  "anneal_steps": 9,
  "lr": 1.2e-4,
  "end_lr": 1.2e-5,
  "weight_decay": 0.1,
  "total_steps": 10,

  "tpu_size": 8,

  "bucket": "chat-app-tpu-bucket-europe",
  "model_dir": "finetune_dir",

  "train_set": "james_bond_1.train.index",
  "val_set": {},

  "eval_harness_tasks": [
  ],

  "val_batches": 2,
  "val_every": 400000,
  "ckpt_every": 1,
  "keep_every": 1,

  "name": "GPT3_6B_pile_rotary",
  "wandb_project": "mesh-transformer-jax",
  "comment": ""
}

The problem is that, when I test the fine-tuned model, I get responses that make no sense:

",51003,,2444,,11/19/2021 11:17,1/10/2023 12:33,How to fine-tune GPT-J with small dataset,,2,1,,,,CC BY-SA 4.0 32437,1,32450,,11/17/2021 13:54,,0,71,"

I am familiar only with basic AI/NN concepts but have never worked with any libraries/tools such as TensorFlow. Currently, I have a task for which AI might be ideal: detection of a certain image artifact in a picture (let's say I want to detect a black circular spot of a variable size). Because the spot can be very small or very large, I guess the NN would have to somehow process the whole picture and then proceed in smaller regions? Anyway, for such a task, do I need to learn more about machine learning, or are there already tools that I could simply train (e.g. by providing "clear" and "stained" image samples in their training sets) without worrying about internal details?

",51005,,51005,,11/18/2021 7:47,11/18/2021 9:27,"For a task that searches for an image artifact within a picture, can existing tools can be used or do I need to design the process myself?",,1,0,,,,CC BY-SA 4.0 32438,1,32441,,11/17/2021 13:54,,0,71,"

I often hear people saying that a "large set of training data" is needed to produce an accurate AI.

But when I looked for articles explaining backpropagation online, it all seems like you could get the job done with "one single input", as long as you repeat the process enough times.

So what's the "large set of training data" for!?

After the optimized set of weights is calculated from the first input, do we plug in the second input and repeat the process again?

Won't that "screw up" the result from the first input, since the weights were "tailored" to it?

",51004,,,,,11/17/2021 16:06,"Why ""large set of training data"" is needed in Neural Network AI training?",,1,0,,,,CC BY-SA 4.0 32439,1,,,11/17/2021 14:12,,0,135,"

Expert Systems (ES) are regarded as AI. However, an ES can be as simple as a system of If-Then rules, and AI seems like a big name for a set of (possibly rather simple) If-Then rules. Is it indeed the case that certain systems of If-Then rules are regarded as AI?

",51007,,,,,11/18/2021 15:15,Can a system of If-Then rules be regarded as AI?,,2,5,,11/18/2021 15:20,,CC BY-SA 4.0 32440,2,,32427,11/17/2021 15:36,,1,,"

[I wanted it to be a comment but it's too long :)]

I don't think it's a good approach to split the viewpoints into a group of 12 angles. The main purpose of using a neural net is to have a model that is able to generalize from the data. That means the perfect model would be able to recognize an object in every orientation. Your task is to create a model that can do a similar thing. In my opinion, you should try to make your dataset as diverse as possible, with not only different viewpoints but also different lighting, backgrounds, etc. According to the ResNet paper, they evaluated the model on the ImageNet 2012 dataset. That was 1.28 million images for 1000 classes, i.e. approximately 1280 images per class. I guess that's a good starting point for you. During training you'll be able to see if that is enough, or if you need to get more data or use some data augmentation techniques, for example along the lines of the sketch below.
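
A minimal sketch of such an augmentation pipeline (using torchvision; the exact transform parameters here are placeholders to be tuned, not recommendations):

from torchvision import transforms

# Adds variation in viewpoint, framing and lighting to each training image.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
    transforms.ToTensor(),
])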

",46718,,,,,11/17/2021 15:36,,,,2,,,,CC BY-SA 4.0 32441,2,,32438,11/17/2021 16:06,,3,,"

You are not training the model to give the optimal result for one input; you want the model to produce the minimum loss for the whole population of samples that the model may be given. The inputs are only considered samples from that population. The more samples you have, if they are drawn i.i.d., the better they can represent that population.

That is, you do not want a cat detector that is good at identifying one cat; you want a cat detector that can identify all cats.

Also, data is often noisy. So, just relying on one sample won't lead you to the exact local minimum of the loss, even for that particular sample. That is, if you collect that exact same sample again, the noise from the sensors may be different. So, the negative gradient of one sample may not point exactly towards the local minimum of the loss. By combining several samples, you can "average out" the errors and get closer to the actual minimum.

Moreover, even if the gradient were pointing in the correct direction, you could still overshoot. Again, combining several different samples can help you minimize the loss.
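
A tiny numpy sketch of that "averaging out" effect (an illustration with made-up numbers, not tied to any particular training setup):

import numpy as np

rng = np.random.default_rng(0)
true_grad = np.array([1.0, -2.0])

# Each sample gives a noisy estimate of the true gradient.
noisy_grads = true_grad + rng.normal(scale=1.0, size=(32, 2))

single_sample = noisy_grads[0]           # may point far from the true direction
mini_batch = noisy_grads.mean(axis=0)    # averaging over 32 samples reduces the noise

print(single_sample, mini_batch)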

I think, before reading about backpropagation, you need to understand the underlying theory. So, look up gradient descent and stochastic gradient descent. Preferably, you should have some understanding of calculus and statistics before you do that.

",31879,,,,,11/17/2021 16:06,,,,2,,,,CC BY-SA 4.0 32445,1,32449,,11/17/2021 19:29,,0,84,"

I have read that the bias in neural networks is used to avoid a situation in which the output of a neuron is equal to 0. But what if that same output is equal to -1 and we add 1 to it? Isn't that the same issue as in the case of a zero output and no bias?

",48659,,,,,11/29/2021 0:57,Bias equal to 1 and neuron output equal to -1 in neural networks,,1,0,,,,CC BY-SA 4.0 32446,1,32550,,11/17/2021 20:20,,-1,101,"

So the code is related to using a buffer

class BufferWrapper(gym.ObservationWrapper):
    def __init__(self, env, n_steps, dtype=np.int):
        super(BufferWrapper, self).__init__(env)
        self.dtype = dtype
        old_space = env.observation_space
        self.observation_space = gym.spaces.Box(old_space.low.repeat(n_steps, axis=0),
                                                old_space.high.repeat(n_steps, axis=0), dtype=dtype)

    def reset(self):
        self.buffer = np.zeros_like(self.observation_space.low, dtype=self.dtype)
        return self.observation(self.env.reset())

    def observation(self, observation):
        self.buffer[:-1] = self.buffer[1:]
        self.buffer[-1] = observation
        return self.buffer

It is used to basically do some image processing so that the DQN is fed some transformation of the image. This link provides some higher-level logic behind some operations.

How can I actually understand what's the reason behind the code? Almost all repos have the exact same lines with no explanation (e.g. atari games GitHub repo).

My specific question is what is the purpose of the line self.buffer[-1] = observation?

In my case, my observation is a (7*1) array and I have to return that in an appropriate manner from the observation function.

The book has some mention of this class but I couldn't understand much from it

",41984,,2444,,11/29/2021 11:54,11/29/2021 11:54,"What does the line of code ""self.buffer[-1] = observation"" do in this BufferWrapper class for DQN?",,1,0,,,,CC BY-SA 4.0 32447,1,,,11/18/2021 0:57,,2,63,"

I have started working on Reinforcement Learning, specifically DQN. And I have watched some interesting videos on it. However, I have some doubts about how the model works.

Let's say we are playing Atari Breakout where we have only 3 actions: left, stay still, right. We have 2 networks- the policy_net and the target_net (technically both are the same) and they give 3 outputs which are the q values for the 3 actions. During exploration we choose:

random.randrange(3)

and during exploitation, we choose:

argmax(policy_net output)

where the input of the policy net is the current state.

Now, at each timestep of each episode, we are storing current_state, action, reward, and next_state in storage that we will later randomly shuffle and use in training, which we call the experience replay memory. During training, let's say we extract a batch of (current_states, actions, rewards, next_states). Now, we get current_q_val and next_q_val as:

current_q_values = policy_net(current_states)
next_q_values = target_net(next_states)

We use the following equation to find the loss:

$$q_*(s,a) - q(s,a) = \text{loss}$$

$$E[R_{t+1} + \gamma \max_{a'} q_*(s', a')] - E\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1}\right] = \text{loss}$$

And for that, we find which one from the next_q_val is the best one and that we call $\max q_*(s',a')$ (we already have $q(s,a)=\text{current_q_values}$).

Now my question is, which one is $R_{t+1}$ here? As we are taking a random batch from the experience replay memory, we don't have any specific time here and we cannot calculate $R_{t+1}$ from the time $t$. And if we simply use $q_*(s,a) - q(s,a)$ or $\text{next_q_val} - \text{current_q_val}$, then where is the importance of the reward? I don't really understand how we are using the rewards in the training. I mean, where are we making sure of the positive and negative influences of good and bad rewards respectively? The fact is that the agent takes an action (randomly or from the policy_net) which then gives a reward; I don't understand how to use this reward in the loss function so as to influence how the agent should take action given a state.

",51018,,2444,,11/19/2021 11:09,11/19/2021 11:09,How rewards are playing role in Deep Q Network,,0,6,,,,CC BY-SA 4.0 32448,2,,32439,11/18/2021 3:34,,0,,"

AI is not an objective definition. It's extremely broad, and perpetually up for debate. However, I would say most would agree that yes, any system of If-Then rules can be considered an AI.

",26726,,,,,11/18/2021 3:34,,,,0,,,,CC BY-SA 4.0 32449,2,,32445,11/18/2021 4:04,,3,,"

Short answer: The bias in a neural network is not used to avoid an output of 0. The bias is used to shift the activation function.

What this means

An activation function I'm sure almost everyone is familiar with is the sigmoid function:

$$f(x)=\frac{1}{1+e^{-x}}$$ Which looks like:

If we use the sigmoid function as the activation of all neurons (without biases), then if a value needs to be shifted, we can never do this directly. Instead, it can only ever be scaled in such a way that it shifts on the next layer when we multiply by more weights and add. That might be a little confusing, so imagine it like this:

If you had a desired output of 2, how would you ever reach that with the sigmoid function? The answer is you can't, and that's where a bias comes in. With a bias of 1.5 and an output of 0.5, you could reach your desired output of 2.

For an even more concrete example, let's take a look at the Rectified Linear Unit (ReLU):

$$ f(x) = \max(0, x)$$

Which looks like:

Now the goal of a neural network is to approximate a function, so let's in this case try to approximate a 2D function using the ReLU activation function. Conveniently, there's a really nice example in Desmos of exactly this. As you can see, each ReLU equation employs a bias to move the function around. I challenge you to try and fit $f\left(x\right)=x^{3}+x^{2}-x-1$ without using a bias, and you'll really quickly see the need for a bias:

*Note I should've included the sign along with the numeric value of each bias inside the red bubble below
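
In the same spirit, here is a small numpy sketch (the weights and biases are hand-picked for illustration, not trained) showing how the bias terms place the "kinks" of shifted ReLUs at different positions, which is exactly what lets a network bend its output where the target bends:

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

x = np.linspace(-2, 2, 9)
target = x ** 3 + x ** 2 - x - 1

# Each bias shifts the kink of one ReLU unit to a different input value.
approx = (-2.0 * relu(-x - 1.0)
          + 1.0 * relu(x + 0.5)
          + 6.0 * relu(x - 1.0))

print(np.round(target, 2))
print(np.round(approx, 2))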

ETA: Variance!!!

How could I forget.

An important distinction to make is this: an output of 0 is perfectly normal in a neural network. As an example, look at ReLU! A proven, extremely effective activation function that, for half of all inputs, outputs a 0!

The issue is when 0 decreases your variance. It's extremely important to maintain variance in a neural network. By this I mean: the spread of the inputs should roughly equal the spread of the outputs. The reason for this is obvious once you know:

  • If spread gradually increases, the values in the network will explode to infinity, rendering most activation functions useless and slowing down calculations by orders of magnitude
  • If spread gradually decreases, the values in the network will vanish to 0, meaning the network doesn't calculate anything at all!

So how does this tie in? Well, sticking with the ReLU example, say you have 9 positive inputs and 1 negative input. When that negative input goes through the ReLU function, it will come out as 0. This will invariably decrease variance in the long run (there are some cases where it increases initially, but eventually it will decrease), eventually vanishing all values to 0. But with a bias, this can be counteracted. By keeping it even slightly away from 0, in combination with the parameters that follow, you can ensure that the variance stays roughly the same. Hence, the need for a bias.

For more information on keeping variance level, see my question from when I was just learning to count, and how I came to know exploding numbers all too well...

",26726,,26726,,11/29/2021 0:57,11/29/2021 0:57,,,,2,,,,CC BY-SA 4.0 32450,2,,32437,11/18/2021 9:08,,0,,"

For detecting something like a circle, or other geometric shapes, modern machine learning may be a bit overkill.

You should look at Haar cascades, which you can find implemented in OpenCV. These are the reason cameras have been able to detect faces since the early 2000s; they are efficient and easy to train, and they are trained by providing positive and negative examples. They also scale better than something like a convolutional neural network; the implementation in OpenCV is made specifically to be able to detect objects of different sizes.

Here is OpenCV's tutorial to get started.
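
Once a cascade has been trained, using it is only a few lines (a minimal sketch; the cascade XML file name and the image path below are placeholders):

import cv2

# Load a trained cascade (placeholder path; train your own, or start from
# one of the cascades shipped with OpenCV).
cascade = cv2.CascadeClassifier("my_spot_cascade.xml")

img = cv2.imread("picture.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# detectMultiScale scans the image at several scales, which is why cascades
# handle artifacts of different sizes well.
detections = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in detections:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)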


This, of course, assumes that the artifact has a known geometric shape. If not, you'll probably have to look at convolutional neural networks and anomaly-detecting autoencoders.

Object detection with neural networks is generally done with R-CNNs; these break up the image into regions using various methods and do object detection on these regions. The most popular algorithm in this area is probably YOLO, which learns how to break up the image. Many other approaches exist.

",31879,,31879,,11/18/2021 9:27,11/18/2021 9:27,,,,2,,,,CC BY-SA 4.0 32451,2,,31821,11/18/2021 13:02,,2,,"

Your understanding is totally correct. The chain rule gives the gradient as a product of derivatives, and, as you rightly mention, from the mathematical point of view four scenarios can happen (you can visualize them here):

  1. If the terms are in $(-1,1)$, in their limit they tend to 0.
  2. If they are all 1, they stay at 1.
  3. If they are all -1, they alternate between -1 and 1.
  4. If they are greater than 1, they tend to $\infty$. If they are smaller than -1 they tend to $-\infty$.

In practice, however, cases 1. and 4. are the most common, and most of the strategies (e.g. ResNet, LSTM) are designed to tackle this problem, as you probably know already.
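
A quick numerical illustration of cases 1. and 4. (just multiplying constant per-layer factors across a 50-layer chain; the numbers are arbitrary):

import numpy as np

depth = 50
vanishing = np.prod(np.full(depth, 0.9))   # case 1: terms in (-1, 1) -> shrinks towards 0
exploding = np.prod(np.full(depth, 1.1))   # case 4: terms > 1 -> grows without bound

print(vanishing)   # about 0.005
print(exploding)   # about 117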

",45714,,,,,11/18/2021 13:02,,,,0,,,,CC BY-SA 4.0 32453,2,,11979,11/18/2021 14:22,,0,,"

Another view on this topic: think about the derivative of the MSE with respect to the inputs. You will need to apply the chain rule:

$$\frac{\partial\,\mathrm{MSE}}{\partial x_i}=2\,(y_i-f(x_i))\cdot(-1)\cdot\frac{\partial f(x_i)}{\partial x_i}$$

which only resembles the derivative of the parabola when, as mentioned in nbro's answer, $f(x_i)=c, \forall\ i$ or, more generally, when $f(x)$ is linear (Neil's answer).

",45714,,,,,11/18/2021 14:22,,,,0,,,,CC BY-SA 4.0 32454,2,,32439,11/18/2021 14:51,,0,,"

If-Then rules cannot exhaustively cover all the possibilities that can be encountered in real-world situations. One of the hallmarks of A.I. is the ability to generalize across problem domains, not just individual problems like vision, natural language, or automated driving. Specifically, a true A.I. system would not need a programmer to specify each and every problem that it might have to solve, or even the specific conditions within each problem that it might encounter.

Consider this: the number and types of problems that potentially exist are virtually infinite. So, being able to exhaustively catalog all the potential problems is virtually impossible. Furthermore, being able to then write If-Then rules for all the problems and their individual conditions is a step beyond the impossible. Automated driving is a good example of a problem where there would be an infinite number of If-Then rules to write down if one were to go that route, because you can always create a new situation that the programmer didn't think of, like what the system should do if a meteor crashes in the middle of the road.

What A.I. promises is automation of reasoning ability in domains and problem sets that are broad. In some very narrow fields where the possibilities are countable and finite, and it can be taken for granted that the system exists in a pre-defined set of states, one could use If-Then rules to surpass human ability. For example, in chess, simple rule-based systems surpassed human abilities a long time ago. But even in this well-defined game, rule-based expert systems worked well only until a new system (AlphaZero) came along which was not designed with If-Then rules, so it was not limited by the imagination of the programmers either tactically or strategically, and it blew the previous systems out of the water. Real-world problems are seldom well defined, and the set of potential states of the system can be large and varying in size.

Note that there are many well-defined real-world problems that an ES can tackle where A.I. wouldn't provide any additional advantage, but the number of problems that A.I. could potentially handle where an ES would be sub-optimal is much larger. Also check out LCS (learning classifier systems), which can synthesize rules through learning and exploration.

",51025,,51025,,11/18/2021 15:15,11/18/2021 15:15,,,,0,,,,CC BY-SA 4.0 32455,1,,,11/18/2021 15:33,,-2,105,"

I have an image which contains a start point and an end point; the journey between them has some obstacles which have to be avoided.

  • Is it possible to train an RL agent using such images to find the best path while avoiding the obstacles?
  • Or, what algorithm should be used in order to find the best path avoiding obstacles, where the input is an image?

For example, a picture of a person on a field track, with obstacles between the start and the end point. I want to predict the series of actions that are required to reach the final position.

",51026,,48391,,11/19/2021 18:46,11/19/2021 18:46,Is it possible to train an RL agent using images?,,1,5,,,,CC BY-SA 4.0 32456,2,,32455,11/18/2021 16:56,,1,,"

A quick search about reinforcement learning applied to video games will lead you to countless tutorials that describe exactly what you're asking for.

With images the way to go is usually deep reinforcement learning. A convolutional neural network (or any other deep learning architecture) is used to process the image and compress it to a latent vector used as the "environment" seen by the agent.

Given that, you can then apply whatever reinforcement learning algorithm you like (SARSA, Q-learning, Monte Carlo tree search, etc.) to train the agent itself on a specific task, in this case reaching the end-point area without hitting obstacles.

If you're familiar with Python, a good starting point is OpenAI Gym, and I would say in particular the Super Mario tutorial, since conceptually that game is basically the same as your task of interest.
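
For reference, the basic Gym interaction loop looks like the sketch below (the environment name is a placeholder, and the random policy stands in for whatever deep RL agent you end up training):

import gym

env = gym.make("Breakout-v0")   # placeholder: any environment with image observations
obs = env.reset()

done = False
while not done:
    # A real agent would pass the image observation through a CNN to pick an action;
    # here we just sample a random action to show the loop structure.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)

env.close()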

",34098,,,,,11/18/2021 16:56,,,,1,,,,CC BY-SA 4.0 32458,1,,,11/19/2021 3:39,,2,19,"

In deep learning, most of the applications are from text and images. Both text and images can be converted into a tensor of real numbers.

Other than the two mentioned above, there may be some other kinds of real-world data used by deep learning algorithms. Any algorithm, in general, takes a tensor as input and gives a tensor as output. As far as I know, tensors always consist of real numbers.

I want to know whether there are any practical applications that deal with the tensors containing non-real numbers. Is there any such possibility?

",18758,,2444,,11/22/2021 11:42,11/22/2021 11:42,Do any practical deep learning algorithms deal with tensors containing non-real entries?,,0,2,,,,CC BY-SA 4.0 32459,1,,,11/19/2021 3:59,,1,76,"

Recently I heard about the term volumetric data. The definition for volumetric data is as follows

#1: Definition

Volumetric data is typically a set S of samples $(x, y, z, v)$, representing the value v of some property of the data, at a 3D location $(x, y, z)$. If the value is simply a 0 or a 1, with a value of 0 indicating background and a value of 1 indicating the object, then the data is referred to as binary data. The data may instead be multivalued, with the value representing some measurable property of the data, including, for example, color, density, heat, or pressure.

#2: Some more details

A volumetric dataset consists of information at sample locations in some space. The information may be a scalar (such as density in a computed tomography (CT) scan), a vector (such as velocity in a flow field), or a higher-order tensor (such as energy, density, and momentum in computational fluid dynamics (CFD)). The space is usually 3D, consisting of either three spatial dimensions or another combination of spatial and frequency dimensions.

In simple words, I can say that volumetric data is nothing but a three-dimensional collection of tensors.

The articles linked above contain some examples of volumetric data in the medical (and probably physics) domain.

Are there any other simple real-world examples of volumetric data, outside the medical and physics domains?

",18758,,,,,12/21/2021 16:05,Is there any simple example for volumetric data except from physics and medicine?,,1,2,,,,CC BY-SA 4.0 32460,1,,,11/19/2021 4:02,,0,110,"

I realized that my state space is very large. I had planned to use tabular Q-learning (using the Bellman equation to update $Q(s, a)$ after each action taken). But this 'large space' realization has now disappointed me, and I have read a lot of material on the internet. I have the following confusions.

I saw the 'approximation' term for the 'large space' scenario (for example, in this Medium blog post). But what is it exactly? I can't reduce the states I have, nor can I club together different states and update the Q values. So, what is it I should do when they say 'approximate'? If it is the $Q(s,a)$ we approximate, then won't we do it anyway for each state $s$ as and when it is encountered? How does this help in a 'large space' scenario?

",51037,,2444,,11/19/2021 18:47,11/19/2021 18:47,What do we actually 'approximate' when dealing with large state spaces in Q-learning?,,0,3,,,,CC BY-SA 4.0 32463,1,,,11/19/2021 11:06,,1,27,"

I am currently working in Python with a random forest algorithm to perform a scoring. My output is binary.

The idea now is to derive sub-scores from the above model that give an opinion on different topics within my dataset.

Unfortunately, I don't have an output variable for these sub-scores in my dataset, so I have the idea of deriving the information from the feature importance of the larger model. Any ideas on how to do this methodically?

As an example:

The goal is to determine if a person is creditworthy. The dataset contains a lot of information about the person's occupation, the area where he/she lives, past payment history etc. Now I want a score for overall creditworthiness and sub-scores for

  • Job features
  • Features on the housing situation
  • Features on the payment history/behaviour
",51045,,40434,,11/22/2021 7:01,11/22/2021 7:01,Derive information for sub-scoring from one scoring model,,0,0,,,,CC BY-SA 4.0 32464,1,32466,,11/19/2021 14:35,,4,496,"

I saw the difference between the value function $V(s)$ and $Q(s, a)$. But when do I use each one? When I coded in Matlab I only used $Q(s, a)$ directly (as I was thinking of a tabular approach). So, when is one more beneficial than the other? I have a large state space.

",51037,,2444,,11/19/2021 22:11,11/20/2021 0:40,"When to use the state value function $V(s)$ and when to use the state-action value function $Q(s, a)$?",,1,1,,,,CC BY-SA 4.0 32465,2,,32422,11/19/2021 15:26,,2,,"

I might be wrong, but I would suggest you approach this problem more simply rather than using neural networks or other machine learning constructs. Machine learning is concerned with making a computer learn from a lot of data. You do not need a lot of data to score how well the student's code compares to the teacher's code. You also do not need recommender systems to suggest the next question. Recommender systems make suggestions by inferring the user's preferences, whereas in your case you can simply suggest the next question based on how well the student did on the current question and the type of the current question.

I would first identify the following three subproblems:

  1. Scoring and assigning a difficulty score to a teacher's code piece.
  2. Scoring how similar the student's solution is to the true solution.
  3. Predicting next question based on the current problem difficulty and student score for the current problem.

For the first, you can develop some algorithms to do feature extraction from the HTML, CSS or whatever code piece you want to evaluate. Features can be the following: length of the code piece, number of tags, number of different tags, number of attributes and so on. Combine them in a mathematical formula to calculate a difficulty score for the code piece. I would suggest a linear combination like the following: $Y = \sum{a_i X_i}$, where $X_i$ is the $i$th feature. Then, normalize the scores of all questions to a range of $[0, 100]$. You can even have an upper decision boundary such that whenever a certain score is reached, the normalized difficulty score will be $100$.

For the second, you should start by checking whether the student's code compiles, i.e. there are no syntax errors. Then, you can use the Levenshtein distance to calculate by how many characters the student's answer differs from the original solution. The goal of good training should be for the student to infer the exact tags and attributes, so the exact sequence of characters. Calculate the percentage of mistakes over the total length of the code piece and assign it to $x$. Use a mathematical formula of your liking to score how well the student did given $x$. I would suggest you consider the following formula: $e^{-0.1x}$. It gives a $100\%$ score for $0$ mistakes and roughly a $50\%$ score for about $7\%$ of mistakes. Have a look at the graph at desmos.com.
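
A small Python sketch of this scoring idea (my own illustration; the example strings are made up, and the Levenshtein routine is the standard dynamic-programming one):

import numpy as np

def levenshtein(a, b):
    # Classic dynamic-programming edit distance between two strings.
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[len(a), len(b)]

solution = '<p class="intro">Hello</p>'
student = '<p class=intro>Hello</p>'

x = 100.0 * levenshtein(student, solution) / len(solution)  # percentage of mistakes
score = 100.0 * np.exp(-0.1 * x)                            # the suggested scoring formula
print(round(score, 1))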

Similarly, you can construct a formula or recipe for choosing the difficulty of the next problem. It should be based on the current score and the current problem's difficulty. You can even force the student to repeat the question, or a question of similar difficulty if the score is below $50\%$. If the score is above $50\%$, you can for example increase the difficulty level of the questions by 1 point every 3 correctly solved problems. Nevertheless, there are many ways you can approach this.

Hope this helps. Good luck in your thesis.

",36447,,,,,11/19/2021 15:26,,,,1,,,,CC BY-SA 4.0 32466,2,,32464,11/19/2021 16:10,,5,,"

The core differences between using $V(s)$ or $Q(s,a)$ are:

$V(s)$ cannot be used stand-alone to decide a policy.

You either need a separate policy function $\pi(a|s)$ that it is the value function for, or you can derive a policy from it if you have access to the environment's distribution model $p(r,s'|s,a)$ - the probability of receiving reward $r$ and arriving in next state $s'$ given start $s,a$. The derived policy would be deterministic $\pi(s) = \text{argmax}_a \sum_{r,s'} p(r,s'|s,a)(r + \gamma V(s'))$

In comparison, $Q(s,a)$ can be used to derive a policy without reference to any model: $\pi(s) = \text{argmax}_a Q(s,a)$. This makes the action value function Q necessary for model-free value-based control methods. Hence Monte-Carlo Control, SARSA, Q-Learning will all use action values.
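
As a tiny illustration of that last point (a toy numpy sketch with made-up table sizes, not tied to any particular environment):

import numpy as np

n_states, n_actions = 5, 3
Q = np.random.rand(n_states, n_actions)   # a learned action-value table

# Model-free greedy policy from Q: pick the best action in each state directly.
policy_from_q = np.argmax(Q, axis=1)
print(policy_from_q)

# With only V(s), the same argmax would additionally require the environment
# model p(r, s' | s, a) to evaluate each candidate action.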

$V(s)$ is usually more compact than $Q(s,a)$.

Space used by a state value table is $O(|\mathcal{S}|)$

Space used by an action-value table is $O(|\mathcal{S} \times \mathcal{A}|)$

These space concerns also apply when you are using approximation - the input domain you need to learn to approximate $Q(s,a)$ is larger than the one you need to learn $V(s)$. Although for large state spaces with lots of dimensions, the relative difference after approximation may be small enough that you don't really care.

The smaller space makes $V(s)$ a good choice if you can use it. The benefits are smaller memory footprint, and often faster learning because there are fewer separate values to learn (not always faster, as you still need to explore all actions in all states to learn accurate values).

Typical scenarios where you can use $V(s)$ instead of $Q(s,a)$

  • You have access to the full environment model, as described in the first section, above.

or

  • You are working with a separate policy function. In optimal control, this typically occurs in actor-critic methods, which are a relatively advanced RL approach, but a popular one, especially when the action space is large. The situation also occurs in prediction problems where the task is not to optimise anything, but to evaluate a fixed policy.

or

  • Your actions are directly choosing the next state. This is called afterstate representation, and is subtly different to state values $V(s)$, but has similar benefit of smaller space used. It is useful in determining values in board games and similar environments.

If none of these situations applies, then you will have to use action-value function $Q(s,a)$. You might also choose to use $Q(s,a)$ because it is easier to stick with it in whatever framework you are working in. Sometimes the benefits of using $V(s)$ instead are not large enough to be worth the effort.

",1847,,1847,,11/20/2021 0:40,11/20/2021 0:40,,,,1,,,,CC BY-SA 4.0 32468,1,,,11/19/2021 20:37,,1,99,"

Conditional Variational Autoencoders (CVAEs) and Mixture Density Networks (MDNs) are supposed to address this issue. However, these models provide the distribution parameters, e.g. mean and standard deviation, for each given sample, while I need a single underlying distribution that the given data is generated from.

To put it simply, I would like to find the parameters of a normal distribution that estimates $P(Y \mid X)$, given $X$ and $Y$. Let's imagine the given data, $X$, has dimension $(n, m)$, where $n$ and $m$ indicate the number of samples and features, respectively. Using a CVAE/MDN I get the parameters with dimension $(n, d)$, where $d$ is the dimension of the parameters. But I am looking for a model that provides parameters with dimension $(1, d)$.

",51060,,51060,,11/22/2021 16:15,11/22/2021 16:15,How to estimate conditional density using neural network?,,0,0,,,,CC BY-SA 4.0 32471,1,,,11/20/2021 0:09,,0,113,"

I want to know if there is anything other than neural networks (or deep NNs) that I can effectively use to perform function approximation. I am asking this with respect to the use of approximators in Q-learning with a large state space.

",51037,,,,,1/15/2023 21:03,Alternatives to neural networks for function approximation in Q learning?,,1,5,,,,CC BY-SA 4.0 32472,1,,,11/20/2021 0:58,,1,80,"

I have already asked 2-3 general questions w.r.t Q learning and now I am asking a scenario specific one. I will try to be concise and understandable. I really really need help.

Scenario: I have a network with a few nodes and links. On each link, there are some slots (#1 to #800). I generate traffic requests (they come one by one) that want to go from one node to another and need certain slots to do so. So, my task is to allot the slots to each upcoming request and finally achieve a low rejection probability, i.e. to be able to allot slots to as many requests as possible. The allotted slots are also freed up depending on when the arrived requests leave the system. I use a Poisson process to do this, but it is not important here.

What I thought: There have been certain simple benchmarks to do this but I wanted to use Q learning so that in the long run the agent (a centralized controller) takes better decisions as to which slots to assign on the particular link i.e. which slots position (#1 to #800).

What I did: I decided to take the state space as the links (say 10 links in my case) and the action space as the slots #1 to #800. I use binary notation: 1 says a slot is occupied and 0 that it is free.

Problem encountered: But it is only much later that I realized that my state space is infinitely big. For example, for request 1, I give two slots on Link1 and its state is 1 1 0 0 0 ... up to 800 entries. Another request comes (say 3 slots) and say Link1's state is now 1 1 1 1 1 0 0 0 ... up to 800 entries. This is when I realized that the state space is unimaginably large, as departures can also occur, leading to slots freeing up and some 1s becoming 0s, and so on.

What I am asking for: So, does anyone have any ideas on how I can still use Q-learning in this case? The point is that someone has already used deep Q-learning on this. I was thinking I was approaching it in a different and simplified way, by just using the link state as the state space, which would enable me to have a small Q table. But it is only later that I realized that each link state will vary every time and lead to a large state space, thus putting me back to square one after investing a lot of time on this. So, please give any suggestions, as I don't want to abandon it altogether.

",51037,,2444,,11/22/2021 11:55,11/22/2021 11:55,"Can Q-learning be used for my scenario, and how might I do so?",,0,7,,,,CC BY-SA 4.0 32474,1,,,11/20/2021 19:46,,2,63,"

I was wondering if the Frechet inception distance for two colored datasets would be the same than the FID calculated for the same datasets converted to grayscale.

I know that it depends on the feature extraction, which is done by the Inception network. And that is the question: I don't fully understand the role of color in CNNs. I thought that all color information is lost during feature extraction, in which case grayscale and colored datasets would produce the same FID. But I am not sure about that.

",48816,,18758,,11/24/2021 7:39,11/24/2021 7:39,Does the Frechet Inception Distance (FID) consider color?,,0,2,,,,CC BY-SA 4.0 32475,1,,,11/20/2021 20:24,,1,56,"

I'm trying to understand the paper by Carlini and Wagner on adversarial attacks against deep neural networks. On page 44, in Section V-A, it is explained how the loss function for the described problem was designed. One part of this loss function is called the "objective function".

There are 7 such functions proposed on which experiments are made, but there is no information provided on where they came from and why those are chosen.

Is this some commonly known thing in the Deep Learning area? Do you recognize them and can tell me where they came from and why they were chosen?

\begin{align} f_{1}\left(x^{\prime}\right) &= -\operatorname{loss}_{F, t}\left(x^{\prime}\right)+1 \\ f_{2}\left(x^{\prime}\right) &= \left(\max _{i \neq t}\left(F\left(x^{\prime}\right)_{i}\right)-F\left(x^{\prime}\right)_{t}\right)^{+} \\ f_{3}\left(x^{\prime}\right) &= \operatorname{softplus}\left(\max _{i \neq t}\left(F\left(x^{\prime}\right)_{i}\right)-F\left(x^{\prime}\right)_{t}\right)-\log (2) \\ f_{4}\left(x^{\prime}\right) &= \left(0.5-F\left(x^{\prime}\right)_{t}\right)^{+} \\ f_{5}\left(x^{\prime}\right) &= -\log \left(2 F\left(x^{\prime}\right)_{t}-2\right) \\ f_{6}\left(x^{\prime}\right) &= \left(\max _{i \neq t}\left(Z\left(x^{\prime}\right)_{i}\right)-Z\left(x^{\prime}\right)_{t}\right)^{+} \\ f_{7}\left(x^{\prime}\right)&= \operatorname{softplus}\left(\max _{i \neq t}\left(Z\left(x^{\prime}\right)_{i}\right)-Z\left(x^{\prime}\right)_{t}\right)-\log (2) \end{align}

",51073,,2444,,12/10/2021 23:26,12/10/2021 23:26,Where do the objective functions proposed in this paper by Carlini-Wagner attack come from?,,0,5,,,,CC BY-SA 4.0 32477,1,32478,,11/21/2021 1:34,,8,6924,"

What does the temperature parameter mean when talking about the GPT models?

I know that a higher temperature value means more randomness, but I want to know how randomness is introduced.

Does temperature mean we add noise to the weights/activations or do we add randomness when choosing a token in the softmax layer?

",34358,,2444,,11/22/2021 11:39,11/22/2021 11:39,"What is the ""temperature"" in the GPT models?",,1,0,,,,CC BY-SA 4.0 32478,2,,32477,11/21/2021 5:32,,12,,"

In sequence-generating models, for a vocabulary of size $N$ (number of words, parts of words, or any other kind of token), one predicts the next token from a distribution of the form: $$ \mathrm{softmax} (x_i/T) \quad i = 1, \ldots N, $$ Here $T$ is the temperature. The output of the softmax is the probability that the next token will be the $i$-th word in the vocabulary.

The temperature determines how greedy the generative model is.

If the temperature is low, the probability of sampling tokens other than the one with the highest log probability will be small, and the model will probably output the most correct text, but it will be rather boring, with little variation.

If the temperature is high, the model can output, with rather high probability, other words than those with the highest probability. The generated text will be more diverse, but there is a higher possibility of grammar mistakes and generation of nonsense.

The difference between the low-temperature case (left) and the high-temperature case (right) for the categorical distribution is illustrated in the picture above, where the heights of the bars correspond to probabilities.

Example

A good sample is provided in the Deep Learning with Python by François Chollet in chapter 12. An extract from the tutorial, refer to this notebook.

import numpy as np

tokens_index = dict(enumerate(text_vectorization.get_vocabulary()))

def sample_next(predictions, temperature=1.0):
    # Rescale the log probabilities by the temperature, re-normalize,
    # and then draw the next token from the resulting distribution.
    predictions = np.asarray(predictions).astype("float64")
    predictions = np.log(predictions) / temperature
    exp_preds = np.exp(predictions)
    predictions = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, predictions, 1)
    return np.argmax(probas)
",38846,,38846,,11/21/2021 19:03,11/21/2021 19:03,,,,4,,,,CC BY-SA 4.0 32479,1,,,11/21/2021 10:57,,1,69,"

I need some advice. I am currently trying to do a printer classification with ML/DL.

What do I have?

  • 11 high-resolution colored images from each of 8 different inkjet printers (88 images in total), so I have 8 classes (printers).
  • All images are scanned at 2,400 dpi, so you are able to see the halftone of the images and the matrix dots.
  • I know the printers differ in terms of the size of the matrix dots, dot pattern, etc.

Based on that, I need to do feature extraction and train an ML model which can classify the correct printer. There is previous work which used the wavelet transformation for feature extraction and an SVM for classification. The goal now is to find another approach to feature extraction and training.

My question here is, what do you think is the best solution?

My idea is:

  • Isolate the dots into binary colors (black/white).
  • Do edge detection with OpenCV (using filters like Sobel, Canny, etc.).

But I am not sure if this is a good approach. After reading a lot of papers on related work, I found that many used transfer learning (e.g. VGG, ResNet), where features are learned in the training process.

So basically I have images, and when you zoom in you see that the patterns are different for each printer. So instead of doing the wavelet transformation I need to take another approach.

In the literature, common feature extractors for this are gray-level co-occurrence matrices, wavelet transformations, and spatial filters, which are then used with an SVM or AdaBoost. Another approach, as said above, is to use a pre-trained CNN (transfer learning).

So, what do you think I should tackle next?

",51083,,,,,4/21/2022 2:06,Feature Extraction for printer classification,,1,0,,,,CC BY-SA 4.0 32480,1,,,11/21/2021 13:37,,-1,440,"

I am using a DDPG agent to predict the position of an asset in a stock-trading-like environment. I am using the cumulative reward as the reward for each timestep. Since it is trained over many years of data, the reward tends to become large. I have realized that, after some training, the agent becomes lazy; it just keeps the same position.

Is it a bad practice to use cumulative rewards as rewards? Would the daily revenue be a better reward for the agent?

",49718,,2444,,11/21/2021 18:09,11/22/2021 8:16,Is it a bad practice to use cumulative rewards in reinforcement learning,,1,1,,,,CC BY-SA 4.0 32481,2,,32459,11/21/2021 15:34,,1,,"

Whenever a system is being modelled in three dimensions, you can be sure that tensors either can or are being used. Most of the systems I can think of are either simulated volumetric data, or a combination of real measurements and interpolated values.

CAD and BIM

Computer-Aided Design, or CAD, is quite commonly used in engineering. It's quite natural that, if they are modelling a mechanical system, they need information regarding the materials they are considering using to make the device; a little bit of googling showed that Autodesk's Inventor computes inertia tensors and their mechanical simulation software computes stress tensors.

BIM is essentially CAD for the construction industry, but again, a lot of the same kind of information regarding the materials used is needed; otherwise, the building may collapse.

Geology

Modelling the stability of soil to construct a skyscraper may require 3D data. There is an old question at gis.stackexchange discussing software used to model geological systems. Of those, this one and this other one still seem to be up.

Meteorology

Most meteorological data, to the best of my knowledge, is two-dimensional. However, weather radar can be used to extract 3D data with information about precipitation and motion.

",31879,,31879,,11/21/2021 15:51,11/21/2021 15:51,,,,0,,,,CC BY-SA 4.0 32485,1,,,11/21/2021 20:05,,1,185,"

What are the approaches to encoding text data? I would be glad to hear a summary from experienced people.

And are there any solutions that accept words outside the vocabulary and include them in the results (online machine learning)?

Data input

So my basic understanding is that, if we want to predict some value (linear regression) or say what the probability of some event occurring is (logistic regression), we have to gather some features as our input and encode them as numbers. But this is not necessarily straightforward when working with sequential data like sentences.

The most naive approach which comes to my mind is just to assign some natural number to each word in the vocabulary. But this number does not contain any meaningful data about the word itself. On the other hand, what seems to be important in NLP is just the order of the words. This is where I think about n-grams, so we feed the network with more than just one word. Or attention, like in the Transformer.

Another idea which comes to my mind is to vectorize the word using one of the word embedding techniques. Here we have some context about the word, so the input is not just a dumb number. But does it have any value when we want to predict the next word? Can word embeddings be used in that way, or is their purpose completely different?

The last thing I was reading about was encoding characters rather than words, but it feels pointless in such a basic example as next-word prediction. I would think about it more for sub-word tasks like inflection generation.

Labelling

Again, based on my knowledge, when we want to solve a yes/no problem we use the sigmoid function. If we have more classes we can use one-hot encoding. But sometimes the output of the network might be ambiguous, so we use the softmax function so that all outputs sum to 1.

How does this look in the NLP area? With a vocabulary consisting of 600k words, do we really need 600k softmaxed outputs? I'm also thinking about word embedding solutions where we can reduce the number of outputs to, let's say, 300 numbers and then find the closest matching word without using softmax.

",51090,,,,,11/22/2021 10:44,General approaches in text encoding and labelling for NLP,,1,3,,11/22/2021 11:45,,CC BY-SA 4.0 32487,2,,32479,11/22/2021 1:28,,1,,"

How about you make a dataset using patches from your image, and train a CNN model with that dataset?

That is, if you want to train a neural net, a dataset with 11 images for each class is too small and thus is prone to overfitting.

However, since your images are high resolution and the printers can be classified just by using the zoomed-in images, you can split each of your images into hundreds to thousands of patches and use those as a dataset. That way each of your classes (printers) will have a number of data samples equal to 11 * num_of_patches_per_image.

After you make your dataset, train a CNN classifier with 8 classes on it. You can do transfer learning and fine-tune a pretrained CNN, or, depending on the amount of data you end up with, you can train a whole new CNN from scratch.

During inference time, you can pass your CNN model a patch from your test image (or you can pass a dozen patches and ensemble the results for increased accuracy and robustness).
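
A minimal sketch of the patch extraction step (my own illustration; the patch size, stride, and the fake scan dimensions below are placeholders):

import numpy as np

def extract_patches(image, patch_size=224, stride=224):
    # Split a high-resolution scan (H x W x C numpy array) into a grid of patches.
    patches = []
    h, w = image.shape[:2]
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches

# Example with a fake scan: every source image yields hundreds of training samples.
scan = np.zeros((4800, 6600, 3), dtype=np.uint8)
print(len(extract_patches(scan)))   # 21 * 29 = 609 patches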

P.S.

With deep learning, you don't really have to worry as much about feature extraction as with traditional machine learning, especially with image-like data. I have a strong feeling that you can probably make a very high-performance model just by doing the above.

",50997,,50997,,11/22/2021 1:39,11/22/2021 1:39,,,,3,,,,CC BY-SA 4.0 32488,1,,,11/22/2021 1:58,,1,340,"

I want to train a neural network model with the ArcFace loss function and try to combine it with domain adaptation. But as training continues, I find that the test accuracy first increases and then decreases; the model cannot reach convergence. I chose the Office-31 dataset, and the feature extractor was ResNet-50.

I want to know if this is caused by my code, or by my loss function.

The arcface function was set as

import torch

def Arc_pred(cosine, s=64.0, m=0.1):
    cosine = cosine / s
    thea = torch.acos(cosine)
    top = torch.exp(torch.cos(thea + m) * s)
    _top = torch.exp(torch.cos(thea) * s)
    bottom = torch.sum(torch.exp(cosine * s), dim=1).view(-1, 1)
    divide = (top / (bottom - _top + top)) + 1e-10
    return divide

and my total loss function was set as

total_loss = 0.1*target_entropy_loss + label_loss + arc_loss + discriminator_loss

In that, target_entropy_loss tries to make the decision boundary cross the sparsest sample area, label_loss is the classification loss, and discriminator_loss is a domain adaptation loss function.

I tried to set a learning rate schedule for my experiment, but it seems it did not work. So, could it be caused by my loss function?

",51094,,2444,,11/25/2021 1:04,11/25/2021 1:04,Test accuracy decreases during my train process,,1,2,,,,CC BY-SA 4.0 32490,2,,32488,11/22/2021 3:35,,1,,"

This looks like overfitting. You can try stopping training earlier (early stopping) by using a validation dataset to prevent this, or you can try other regularization techniques such as weight decay, dropout, etc.

",32621,,,,,11/22/2021 3:35,,,,0,,,,CC BY-SA 4.0 32491,1,,,11/22/2021 4:30,,2,400,"

I made a simple expert system using ES-Builder. Please click the link to view it. ES-Builder is a web-based expert system shell with a tree-based knowledge representation. In ES-Builder, user interfaces are also automatically designed: it generates a link, as I have shared above, and anyone can access and use it.

But when I tried other ES shells such as JESS, CLIPS and PyKE, I noticed that there we have to write facts and rules, and the program is run on the command line when consulting the expert system. There is no UI like in ES-Builder.

My question is, is there any way to build a UI for the expert systems created by CLIPS/JESS? Or should I instead create a web application using another framework like Spring or .NET and integrate it with the knowledge base created with CLIPS/JESS?

(I am a bit confused because, according to what I have learned, if we use an expert system shell then we need not program it using languages such as Prolog, because the user interface and inference engine are already there. All that remains is to build the knowledge base, similar to how the ES-Builder UI is auto-built.)

Thank you very much for the support! If the question is confusing, I am happy to modify it in a more understandable manner.

",50537,,50537,,11/26/2021 3:09,12/28/2022 15:01,How to build an expert system similar to ES-Builder Web,,1,3,,,,CC BY-SA 4.0 32492,2,,32480,11/22/2021 8:16,,1,,"

Reinforcement learning already has the objective of maximising the sum of future expected reward.

By making each reward the sum of all previous rewards, you will make the difference between good and bad next choices low, relative to the overall reward guaranteed on each step.

The reward for the agent should be as direct a measure of what you want it to achieve as possible. Sometimes you need to compromise because you cannot easily measure what you really want. Sometimes you may need to assist the bot by giving it a more frequent reward signal (e.g. the change in the predicted cash-out value of all the current holdings) - but you should note this is also a compromise, because an agent that maximises this "reward shaping" reward scheme may not actually maximise what you really want.

You could view your attempt to use cumulative values as a failed experiment in reward shaping. I don't know if daily revenue would be a good for your bot, because a trading bot might have several different end goals for its creator.

The first thing you should do is figure out what your success criterion is for the bot, and find the measure that captures that goal as directly as possible. E.g. if the goal is for it to create a reliable income after 5 years of trading, then the reward might be the monthly income that it is able to generate in the long term. Perhaps simulating or measuring this is too hard, in which case you could fall back to a proxy, such as the cash value of any sell orders, with a penalty if it is not maintaining a certain total amount of holdings.

",1847,,,,,11/22/2021 8:16,,,,0,,,,CC BY-SA 4.0 32493,1,,,11/22/2021 10:37,,1,75,"

For a project, I've trained multiple networks for multiclass classification all ending with a ReLU activation at the output.

Now the output logits are not probabilities.

Is it valid to get the probability of each class by applying a softmax function at the end, after training?

",51011,,,,,11/22/2021 10:37,Use soft-max post-training for a ReLU trained network?,,0,3,,,,CC BY-SA 4.0 32494,2,,32485,11/22/2021 10:44,,0,,"

Text encoding depends very much on the purpose of your application. Here are some examples:

Text-to-speech: You would start with the word form itself, and probably look it up in a table that gives you a mapping to the phonological structure. Or you work through it (e.g. with a finite state transducer), look at combinations of letters, and check for the most common realisation of these letters (depending on the language and the writing system you are using). In the latter approach you can also deal with unknown words.

Part-of-speech tagging: You would look up the string in a dictionary, returning you a list of all the possible tags for the word (possibly augmented by frequency information). From then on you would operate on a sequence of tags, until you have them disambiguated, and you end up with a list that maps each word in the sentence (token) with a particular tag. For unknown words you can either assign them all possible tags (and let the disambiguation sort them out), or choose only the open class ones (you won't find many unknown prepositions or determiners).

Information retrieval/storage: This is one situation where you would want to replace words by numbers. You'd have a look-up table that maps words to integer values, and these can be stored very efficiently. For example, in the Bank of English, one of the biggest early language corpora (several 100s of millions of tokens in the early 1990s), each token was encoded as a 4 byte integer. It was then very fast to go from the index positions of a word (from an inverted index) to the actual position in the file, as each token is 4 bytes long, so the token at index $n$ is at file position $4n$. It also somewhat compresses the text if you assume the average word length is greater than 4 (and that you don't encode the spaces between the tokens). Unknown words: they are by definition not in your text, so you can just return "not found".

Machine learning: ML approaches don't really work on strings, so you need to encode words numerically. Ideally this should not be a random mapping (such as alphabetical order), so one approach that is used is to represent each word by a vector that encodes which other words occur in its neighbourhood. It is a well-known principle in corpus linguistics (since the 1930s) that words are not randomly distributed in a text, but that some words co-occur with regularity. For example, rasher and bacon usually go together. So the vector which describes rasher will have a positive number in the slot which relates to "does it occur with 'bacon'?", whereas the vector describing bookshelf will not. With a large enough vector, you can approximate similarities in meaning between words (based on their lexical environments). Unknown words: you could use a random or average vector -- I don't know what the standard way is to deal with that.
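
As a toy sketch of the co-occurrence idea (purely illustrative; real word vectors are built from far larger corpora and are usually learned, e.g. with word2vec):

from collections import Counter

# Toy corpus and vocabulary
corpus = ["a rasher of bacon", "a rasher of bacon and eggs", "a bookshelf of books"]
vocab = sorted({w for sent in corpus for w in sent.split()})

def cooccurrence_vector(target, window=2):
    """Count how often each vocabulary word appears near `target`."""
    counts = Counter()
    for sent in corpus:
        words = sent.split()
        for i, w in enumerate(words):
            if w == target:
                for j in range(max(0, i - window), min(len(words), i + window + 1)):
                    if j != i:
                        counts[words[j]] += 1
    return [counts[w] for w in vocab]

print(vocab)
print("rasher    ->", cooccurrence_vector("rasher"))     # positive count in the 'bacon' slot
print("bookshelf ->", cooccurrence_vector("bookshelf"))  # zero in the 'bacon' slot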

",2193,,,,,11/22/2021 10:44,,,,0,,,,CC BY-SA 4.0 32495,1,,,11/22/2021 14:40,,3,91,"

I can see mobile apps that can locate a 3D object on a surface with a mobile camera and let you walk around that object.

What is the name of the algorithm(s) used for that purpose? Is there AI in these algorithms? Do they use plane detection? After detection, which algorithm do they use to locate the objects?

It seems like a Computer Vision problem, but I do not know the name of the topic.

",28129,,2444,,11/23/2021 23:56,11/23/2021 23:56,Which algorithms are used to locate objects in a 3d space?,,0,2,,,,CC BY-SA 4.0 32496,2,,5769,11/22/2021 15:09,,0,,"

My understanding from this paper https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/TASLP2339736-proof.pdf was that the filters are applied to each input channel (i.e. input feature map in the paper) separately and the results are summed, as described in eq. 8. Here they use a different filter for each channel, but you could totally use the same filter. As for knowing the software implementation in a particular ML library such as Tensorflow or PyTorch, this requires inspection, as suggested by @Mohsin Bukhari
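
Here is a small sketch of my reading of that equation, i.e. a per-channel 2D cross-correlation whose results are summed into a single output map (using SciPy; this is just an illustration, not the paper's code):

import numpy as np
from scipy.signal import correlate2d

# One multi-channel input (C channels of size HxW) and one KxK filter per channel
C, H, W, K = 3, 8, 8, 3
x = np.random.randn(C, H, W)
filters = np.random.randn(C, K, K)

# Per-channel cross-correlation, then sum over channels -> one output feature map
out = sum(correlate2d(x[c], filters[c], mode="valid") for c in range(C))
print(out.shape)  # (6, 6)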

",51105,,,,,11/22/2021 15:09,,,,0,,,,CC BY-SA 4.0 32497,1,32499,,11/22/2021 15:24,,4,1213,"

I am pretty new to reinforcement learning and was working with some code for the PPO and DQN algorithms. After looking at the code, I noticed that the authors did not include any code to set up a validation or testing dataloader. In most other machine learning training loops, we generally include a validation and testing dataset to ensure that the model is not overfitting the training data. However, in reinforcement learning the data is all simulated from the same environment, so perhaps the overfitting issue is not such a big deal?

Anyhow, could someone please indicate whether it is standard practice to only use a training dataset or dataloader for reinforcement learning, and to ignore validation or testing datasets?

",15765,,,,,11/22/2021 18:31,Do we use validation and test sets for training a reinforcement learning agent?,,1,2,,,,CC BY-SA 4.0 32499,2,,32497,11/22/2021 18:31,,3,,"

No, we typically don't use a validation/test data set in Reinforcement Learning (RL). This is because of how we use the data in RL. The use of a data set is very different to the classic supervised/unsupervised paradigms. Some RL algorithms don't even have a data-set as such. For instance, vanilla tabular Q-learning does not use a data-set -- it will see an experience tuple $(s, a, r, s')$, make an update based on this, and discard it, until it is potentially seen again during training.
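
For instance, a single tabular Q-learning update on one experience tuple looks roughly like this (a minimal sketch of the update rule only):

import numpy as np

n_states, n_actions = 16, 4
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.99

def q_update(s, a, r, s_next, done):
    """Use one (s, a, r, s') tuple to update Q, then discard the tuple."""
    target = r if done else r + gamma * np.max(Q[s_next])
    Q[s, a] += alpha * (target - Q[s, a])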

I have not looked at the code you mention for PPO and DQN, but I would wager that the data loader they use is either a) in PPO, for optimising over the most recent trajectory, or b) in DQN, for the experience sampled from a replay buffer.

Note that the replay buffer is technically a dataset, but it is not a traditional dataset as in the other paradigms. That is essentially because

  1. the dataset is non-stationary, experience is added as it is collected, and typically it is deleted to make room for new experience once a size limit of the buffer has been reached;
  2. We don't necessarily use a data point in the buffer at all before it is removed -- consider a large buffer but a small batch size. As a somewhat simple example, consider a replay buffer of size 10,000 and a batch size of 1, i.e. for every update we only sample 1 data point from the buffer. Assuming we sample uniformly at random as in vanilla DQN, then the probability of a given point never being seen before it is removed is $(1 - 1/10000)^{10000} \approx e^{-1} \approx 0.368$.

To validate RL agents we typically assess how the trained agent performs on its intended task.

",36821,,,,,11/22/2021 18:31,,,,2,,,,CC BY-SA 4.0 32500,1,32532,,11/23/2021 5:50,,0,28,"

I'm reading this paper on sub-character decomposition for logographic languages, and the authors mention decomposition at inference-time. They're using the Transformer architecture.

More specifically, the authors write:

We propose a flexible inference-time sub-character decomposition procedure which targets unseen characters, and show that it aids adequacy and reduces misleading overtranslation in unseen character translation.

What do inference-time and inference-only decomposition mean in this context? My best guess would be that inference-time would be at some point during the decoding process, but I'm not 100% clear on whether that's the case and, if so, when exactly.

I'm going to keep digging and update if I find something helpful. In the meantime, if anyone needs more context just let me know.

",51014,,2444,,11/23/2021 12:21,11/27/2021 20:13,What does it mean to apply decomposition at inference-time in a machine translation system?,,1,0,,,,CC BY-SA 4.0 32501,1,,,11/23/2021 15:44,,0,356,"

I have transaction data and I would like to extract the merchant from the transaction description. I am new to this but I just came across Named Entity Recognition and SpaCy. I have hundreds of thousands of different merchants.

Some questions that I have:

  • How much labelling do I need to do given the number of merchants I need to extract?

  • How many different instances of the same merchant do I need to label to get decent results?

",51126,,2444,,11/24/2021 12:35,12/5/2022 19:03,How much labelling is required for NER with SpaCy?,,2,1,,,,CC BY-SA 4.0 32503,2,,26303,11/23/2021 23:52,,0,,"

Matching networks and prototypical networks for few-shot learning can be a better option than a Siamese network; there are many implementations available online. These methods are also fast at both computation and classification.
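
As a rough sketch of the prototypical-network classification step (assuming you already have an embedding network; all names here are placeholders):

import torch

def prototypical_predict(support_emb, support_labels, query_emb, n_classes):
    """support_emb: [N, D] embeddings of labelled support examples,
    query_emb: [M, D] embeddings of query examples.
    Each class prototype is the mean of its support embeddings;
    queries are assigned to the nearest prototype (Euclidean distance)."""
    prototypes = torch.stack(
        [support_emb[support_labels == c].mean(dim=0) for c in range(n_classes)]
    )                                           # [n_classes, D]
    dists = torch.cdist(query_emb, prototypes)  # [M, n_classes]
    return dists.argmin(dim=1)                  # predicted class per query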

",44197,,40434,,11/24/2021 7:40,11/24/2021 7:40,,,,0,,,,CC BY-SA 4.0 32504,1,,,11/24/2021 6:30,,0,34,"

I created a CNN using TensorFlow to identify pneumonia, and sometimes it returns a very small number as a prediction. Why is this happening?

I have attached the link for the dataset

Here I how I process and load the data.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator( rescale = 1.0/255. )
val_datagen  = ImageDataGenerator( rescale = 1.0/255. )
test_datagen = ImageDataGenerator( rescale = 1.0/255. )

train_generator = train_datagen.flow_from_directory('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/train/',
                                                    batch_size=20,
                                                    class_mode='binary',
                                                    target_size=(350, 350)) 

validation_generator =  val_datagen.flow_from_directory('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/val/',
                                                         batch_size=20,
                                                         class_mode  = 'binary',
                                                         target_size = (350, 350))
test_generator = test_datagen.flow_from_directory('/kaggle/input/chest-xray-pneumonia/chest_xray/chest_xray/test/',
                                                         batch_size=20,
                                                         class_mode  = 'binary',
                                                         target_size = (350, 350))

And here the Model, compile and fit functions

import tensorflow as tf

model = tf.keras.models.Sequential([
    # Note the input shape is the desired size of the image 150x150 with 3 bytes color
    tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(350, 350, 3)),
    tf.keras.layers.MaxPooling2D(2,2),
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
    tf.keras.layers.MaxPooling2D(2,2), 
    tf.keras.layers.Conv2D(64, (3,3), activation='relu'), 
    tf.keras.layers.MaxPooling2D(2,2),
    # Flatten the results to feed into a DNN
    tf.keras.layers.Flatten(), 
    # 512 neuron hidden layer
    tf.keras.layers.Dense(1024, activation='relu'), 
    # Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('cats') and 1 for the other ('dogs')
    tf.keras.layers.Dense(1, activation='sigmoid')  
])

compile the model

from tensorflow.keras.optimizers import RMSprop

model.compile(optimizer=RMSprop(learning_rate=0.001),
              loss='binary_crossentropy',
              metrics = ['accuracy'])

model fit

history = model.fit(train_generator,
                              validation_data=validation_generator,
                              steps_per_epoch=200,
                              epochs=2000,
                              validation_steps=200,
                              callbacks=[callbacks],
                              verbose=2)

The evaluation metrics are as follows: loss: 0.2351 - accuracy: 0.9847

The prediction shows a big negative number for the negative pneumonia, and for positive it shows more than .50.

I have two questions:

  1. Why am I getting a very small number such as 2.xxxx * 10e-20?

  2. Why can't I get the following values (they come back as null)?

    val_acc = history.history[ 'val_accuracy' ] val_loss = history.history['val_loss' ]

I am still a student in machine learning, and I really appreciate your effort in answering my question.

",46537,,2444,,11/24/2021 10:52,11/24/2021 10:52,Why am I getting a very small number as CNN prediction?,,0,2,,,,CC BY-SA 4.0 32505,1,32506,,11/24/2021 7:59,,2,1508,"

CONTEXT

I was wondering why there are sigmoid and tanh activation functions in an LSTM cell.

My intuition was based on the flow of tanh(x)*sigmoid(x)

and the derivative of tanh(x)*sigmoid(x)

It seems to me that the authors wanted to choose a combination of functions whose derivative allows big changes around 0, since we can use normalized data and weights. Another thing is that the output goes to 1 for positive values and to 0 for negative values, which is convenient.

On the other hand, it seems natural that we use a sigmoid in the forget gate, since we want to focus on the important data. I just don't understand why there cannot be only a sigmoid function in the input gate.

OTHER SOURCES

What I found on the web is this article where the author claims:

To overcome the vanishing gradient problem, we need a method whose second derivative can sustain for a long range before going to zero. Tanh is a good function that has all the above properties.

However, he doesn't explain why this is the case.

Also, I found the opposite statement here, where the author says that the second derivative of the activation function should go to zero; however, there is no proof for that claim.

QUESTION

Summing up:

  1. Why can't we use a signal with just a sigmoid on the input gate?
  2. Why are there tanh(x)*sigmoid(x) signals in the input and output gates?
",46718,,2444,,11/24/2021 14:06,10/22/2022 15:38,Why is there tanh(x)*sigmoid(x) in a LSTM cell?,,2,0,,,,CC BY-SA 4.0 32506,2,,32505,11/24/2021 8:37,,1,,"

The tanh functions within the cell represent cell output or cell state. These are the values that either get passed on to other layers, or within the layer to the next time step. In theory, other activation functions could be used here according to taste, similar to other feed-forward or RNN networks. However, the -1 to 1 output range of tanh is useful, and I expect tanh has been experimentally validated as a good general case activation function here.

The sigmoid functions are used as soft gates for manipulating the raw RNN values. Importantly for your analysis, there is no sigmoid that takes the same input as any tanh. Each of the green boxes in the cell diagram in your question has a separate learnable set of weights applied to the combined input+hidden_state vector.

That means that your analysis of tanh(x)*sigmoid(x) is moot. The function is effectively tanh(x)*sigmoid(y) because inputs to each activation function can be radically different.

The intuition is that the LSTM can learn relatively "hard" switches to classify when the sigmoid function should be 0 or 1 (depending on the gate function and input data). As the weights are independent between the gates and the input value processing components, the gradients to the cell output and state components are not composed in a combined function, but simply multiplied by the current value of the relevant switch. A multiplying hard switch of 1 will allow the gradient to flow back directly from the output loss to the point at which the gate decision was made - depending on which gate was activated, this improved gradient signal will either be to the input processing weights or the hidden state processing weights.

It is also possible for the input and cell state processing to be mixed in various combinations, and the gradient is not guaranteed strong. However, in situations requiring strong memory-like signals (such as using punctuation characters when processing text), it is possible to observe LSTM learning those signals, effectively classifying inputs with high confidence (close to either 0 or 1), thus creating toggle switches, counters etc, within the cell state vector.
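
A bare-bones sketch of a single LSTM step may make the "separate weights per gate" point clearer (a simplified NumPy version with biases omitted, not an optimised implementation):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W_f, W_i, W_o, W_g):
    """One LSTM time step. Each gate has its OWN weight matrix applied to
    the concatenated [h_prev, x], so sigmoid and tanh never share inputs."""
    z = np.concatenate([h_prev, x])
    f = sigmoid(W_f @ z)       # forget gate
    i = sigmoid(W_i @ z)       # input gate
    o = sigmoid(W_o @ z)       # output gate
    g = np.tanh(W_g @ z)       # candidate cell values
    c = f * c_prev + i * g     # sigmoid of one projection times tanh of another
    h = o * np.tanh(c)
    return h, c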

",1847,,1847,,11/24/2021 16:13,11/24/2021 16:13,,,,0,,,,CC BY-SA 4.0 32507,1,,,11/24/2021 9:12,,0,160,"

Suppose we have a DDPG algorithm. The actor has N input nodes, two hidden layers with J nodes, and S output nodes. The critic has N+S input nodes, two hidden layers with C nodes, and one output node. How can the time complexity of this algorithm be calculated?

",51139,,,,,11/24/2021 9:12,What is the time complexity of DDPG algorithm?,,0,5,,,,CC BY-SA 4.0 32508,1,,,11/25/2021 0:04,,1,56,"

The paper: https://arxiv.org/abs/2110.11309, makes the following claim at the end of page 3:

The gradient of loss $L$ with respect to weights $W_l$ of an MLP is a rank-1 matrix for each of B batch elements $\nabla_{w_l}L = \sum_{i=1}^B \delta_{l+1}^i {u_l^i}^T$, where $\delta_{l+1}^i$ is the gradient of the loss for batch element $i$ with respect to the preactivations at layer $l + 1$, and ${u_l^i}^T$ are the inputs to layer $l$ for batch element i.

Suppose that we have an MLP with $k$ hidden layers (every hidden layer is followed by an activation function). Then the weight matrices will be $W_1, W_2, \dots, W_k$ (plus the biases, but they are irrelevant for now), and their sizes will be $(D_1, D), (D_2, D_1), \dots (D_k, D_{k-1})$ correspondingly, where $D$ is the number of input features.

Therefore, hidden layer $l$ has a weight matrix $W_l$ of size $(D_l, D_{l-1})$. Its gradient wrt the loss (for 1 batch element), $\frac{\partial L}{\partial W_l}$, will also be a matrix of size $(D_l, D_{l-1})$.

So if I understand correctly, the authors of the paper are claiming that $\frac{\partial L}{\partial W_l}$ is a rank-1 matrix? That is, every row (or column) can be expressed as a multiple of a single row (or column)? If yes, why? How?
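
To make the question concrete, this is the kind of sanity check I have in mind (a toy PyTorch snippet; layer sizes are arbitrary):

import torch

torch.manual_seed(0)
layer1 = torch.nn.Linear(5, 4)
layer2 = torch.nn.Linear(4, 3)
x = torch.randn(1, 5)              # a single batch element
y = torch.randint(0, 3, (1,))

loss = torch.nn.functional.cross_entropy(layer2(torch.relu(layer1(x))), y)
loss.backward()

grad = layer1.weight.grad          # shape (4, 5)
print(torch.linalg.matrix_rank(grad))   # is this always 1 for a single element?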

",44715,,,,,1/26/2023 9:08,Rank of gradient-of-loss with respect to layer weights in an MLP,,1,0,,,,CC BY-SA 4.0 32512,2,,26884,11/25/2021 19:22,,1,,"

I am currently working on a similar problem. I think your approach is good. As for setting the parameter lambda, since you are using deep neural networks, you can make it a learnable parameter, instead of a hyperparameter you set. This way, as the two losses fluctuate over your training iterations/epochs, the model will be able to adjust the lambda parameter accordingly.
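
For example, in PyTorch you could register the weighting as a parameter (a minimal sketch; learning a log-variance, roughly in the spirit of uncertainty weighting, keeps the effective weight positive):

import torch
import torch.nn as nn

class WeightedLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # learn s = log(sigma^2) so the effective weight exp(-s) stays positive
        self.log_var = nn.Parameter(torch.zeros(1))

    def forward(self, loss_a, loss_b):
        weight = torch.exp(-self.log_var)
        return loss_a + weight * loss_b + self.log_var  # regulariser keeps weight from collapsing to 0

# Remember to include the extra parameter in the optimizer, e.g.
# optimizer = torch.optim.Adam(list(model.parameters()) + list(weighted_loss.parameters()))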

",44905,,,,,11/25/2021 19:22,,,,1,,,,CC BY-SA 4.0 32514,1,,,11/25/2021 20:55,,4,159,"

I was wondering if a genetic algorithm is useful if the optimization problem has several optimal solutions.

My thought was that I should not use it, since when combining two members of the population that have good fitness but are close to different optimal solutions, the resulting child may end up far from both optima and have poor fitness.

Is this thinking wrong? If so, why?

",51164,,2444,,11/25/2021 21:56,12/21/2022 14:00,Are Genetic Algorithms suitable for a problem with a non-unique optimal solution?,,1,3,,,,CC BY-SA 4.0 32516,1,,,11/25/2021 21:35,,3,159,"

I don't know too much about Deep Learning, so my question might be silly. However, I was wondering whether there are NN architectures with some hard constraints on the weights of some layers. For example, let $(W^k_{ij})_{ij}$ be the weights of the (dense) $k$-th layer. Are there architectures where one imposes something like $$ \sum_{i, j} (W^k_{ij})^2 = 1 $$ (namely, the rolled-out vector of weights is constrained to stay on a sphere), or where the $W^k_{ij}$ are equivalence classes $\bmod\ K$ for some number $K>0$?

Then, of course, one should probably think about proper activation functions for these cases, but it's probably not a big obstacle.

Putting constraints of these kinds would prevent the weights from growing indefinitely and maybe could prevent over-fitting?
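
For concreteness, the kind of constraint I have in mind could be sketched like this (projecting the weights back onto the unit sphere after every optimizer step; just an illustration, not a claim that this is standard practice):

import torch

layer = torch.nn.Linear(10, 10)
optimizer = torch.optim.SGD(layer.parameters(), lr=0.01)

def project_to_sphere(layer):
    with torch.no_grad():
        w = layer.weight
        w.div_(w.norm())   # enforce sum_ij (W_ij)^2 = 1

# ... inside the training loop, after optimizer.step():
# project_to_sphere(layer)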

",32901,,2444,,11/25/2021 21:59,11/25/2021 21:59,Are there neural networks with (hard) constraints on the weights?,,0,2,,,,CC BY-SA 4.0 32517,2,,32514,11/26/2021 2:38,,0,,"

Genetic algorithms (GA) have populations that produce offspring in every generation, usually in the same quantity as the original population. So, if a child results from two good but very different solutions (parents) and it has bad fitness, it will not be selected to be part of the next generation (elitism, where only the best n individuals survive). But maybe you can find a new, good solution from that combination, and, in that case, it will be part of the new generation.
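
A tiny sketch of that elitist selection step (plain Python, fitness to be maximised):

def next_generation(population, offspring, fitness, n):
    """Keep the n best individuals from parents + offspring (elitism).
    A child of two good but incompatible parents simply fails to make the cut."""
    combined = population + offspring
    combined.sort(key=fitness, reverse=True)
    return combined[:n]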

Maybe, if your problem has multiple solutions, the population will form clusters of solutions, each improving around a different optimum.

But there are many algorithms in GA (and Evolutionary Algorithms in general), so you need to read the details. Fortunately, there are many frameworks where you only need to define your problem, and you can then rapidly compare algorithms.

",46432,,2444,,11/26/2021 11:36,11/26/2021 11:36,,,,0,,,,CC BY-SA 4.0 32518,1,,,11/26/2021 3:04,,1,136,"

I am implementing a simple REINFORCE (policy gradient) algorithm for openAI's FrozenLake-v0 environment. However, it does not seem to learn anything at all.

I have used the same neural architecture for openAI's CartPole-v0, and trained it using REINFORCE (policy gradient), and it works perfectly. So, what am I doing incorrectly for the FrozenLake-v0 environment? I think this has to do with the nature of the environment, but I am unsure which aspects of training REINFORCE must be altered to accommodate the dynamics of FrozenLake-v0. It seems like a very simple environment to solve, given that it has only 16 states.

My code is as follows:

import gym
from gym.envs.registration import register
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt


# helper function for conversion of a state into an input to a neural network
def OH(x, n):
    '''
    :param x: state id
    :param n: n_states
    :return:  1-hot encoded numpy array of size [1,n]
    '''
    one_hot = np.zeros((n,))
    one_hot[x] = 1
    return one_hot



def running_mean(x, n):
    N=n
    kernel = np.ones(N)
    conv_len = x.shape[0]-N
    y = np.zeros(conv_len)
    for i in range(conv_len):
        y[i] = kernel @ x[i:i+N]
        y[i] /= N
    return y


# architecture of the Policy Network
class PolicyNetwork(nn.Module):
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.n_actions = n_actions
        self.model = nn.Sequential(
            nn.Linear(state_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
            nn.Softmax(dim=0)
        ).float()

    def forward(self, X):
        return self.model(X)


def train_reinforce_agent(env, episode_length = 100, max_episodes = 50000, gamma = 0.99, visualize_step = 50, learning_rate=0.003):

    # define the parametric model for the Policy: this is an instantiation of the PolicyNetwork class
    model = PolicyNetwork(env.observation_space.shape[0], env.action_space.n)
    # define the optimizer for updating the weights of the Policy Network
    optimizer = optim.Adam(model.parameters(), lr=learning_rate)


    # hyperparameters of the reinforce agent
    EPISODE_LENGTH = episode_length
    MAX_EPISODES = max_episodes
    GAMMA = gamma
    VISUALIZE_STEP = max(1, visualize_step)
    score = []



    for episode in range(MAX_EPISODES):
        # reset the environment
        curr_state = env.reset()
        done = False
        transitions = []

        # rollout an entire episode from the Policy Network
        for t in range(EPISODE_LENGTH):
            act_prob = model(torch.from_numpy(curr_state).float())
            action = np.random.choice(np.array(list(range(env.action_space.n))), p=act_prob.data.numpy())
            prev_state = curr_state
            curr_state, _, done, info = env.step(action)
            transitions.append((prev_state, action, t+1))

            if done:
                break
        score.append(len(transitions))
        reward_batch = torch.Tensor([r for (s, a, r) in transitions]).flip(dims=(0,))


        # compute the return for every state-action pair from the rewards at every time-step
        batch_Gvals = []
        for i in range(len(transitions)):
            new_Gval = 0
            power = 0
            for j in range(i, len(transitions)):
                new_Gval = new_Gval + ((GAMMA ** power) * reward_batch[j]).numpy()
            power += 1
            batch_Gvals.append(new_Gval)

        # normalize the returns for the batch
        expected_returns_batch = torch.FloatTensor(batch_Gvals)
        expected_returns_batch /= expected_returns_batch.max()

        # batch the states, actions, prob after the episode
        state_batch = torch.Tensor([s for (s, a, r) in transitions])
        action_batch = torch.Tensor([a for (s, a, r) in transitions])
        pred_batch = model(state_batch)
        prob_batch = pred_batch.gather(dim=1, index=action_batch.long().view(-1, 1)).squeeze()


        # compute the loss for one episode
        loss = -torch.sum(torch.log(prob_batch) * expected_returns_batch)

        # back-propagate the loss
        optimizer.zero_grad()
        loss.backward()
        # update the parameters of the Policy Network
        optimizer.step()

        # print the status after every VISUALIZE_STEP episodes
        if episode % VISUALIZE_STEP == 0 and episode > 0:
            print('Episode {}\tAverage Score: {:.2f}'.format(episode, np.mean(score[-VISUALIZE_STEP:-1])))


    # Training plot: Episodic reward over Training Episodes
    score = np.array(score)
    avg_score = running_mean(score, visualize_step)
    plt.figure(figsize=(15, 7))
    plt.ylabel("Episode Duration", fontsize=12)
    plt.xlabel("Training Episodes", fontsize=12)
    plt.plot(score, color='gray', linewidth=1)
    plt.plot(avg_score, color='blue', linewidth=3)
    plt.scatter(np.arange(score.shape[0]), score, color='green', linewidth=0.3)
    plt.show()
",51171,,51171,,11/27/2021 1:18,11/27/2021 1:18,FrozenLake-v0 not training using REINFORCE,,0,2,,,,CC BY-SA 4.0 32520,1,,,11/26/2021 7:53,,0,330,"

I have written code for an RL agent such that, at each state, the model calculates the probabilities of all possible actions and samples one action randomly to proceed further. To achieve this, I have written the following code

act_prob_dist = tfp.distributions.Categorical(probs=act_probs)
action = act_prob_dist.sample()

It is working fine in the initial stages of training. Once the model has learnt a particular state really well and the probability of one particular action has increased significantly compared to the others, the sample() call picks the same action every time. For example, when the action probabilities of a particular state were

tf.Tensor(
[[0.05213022 0.06613996 0.4933109  0.02918373 0.04188393 0.04100212
  0.03228914 0.00716161 0.08877521 0.02158365 0.04645196 0.07092285
  0.00916469]], shape=(1, 13), dtype=float32)

The model is sampling an action randomly. After decent iterations of learning the action probabilities became

tf.Tensor(
[[1.12852089e-12 1.54888698e-06 6.40413802e-08 1.03480375e-11
  2.05246806e-08 2.17290430e-09 1.04494591e-09 5.20959872e-11
  9.99995708e-01 1.26053008e-08 6.85156265e-10 1.70332885e-06
  9.99039457e-07]], shape=(1, 13), dtype=float32)

The model started picking the index with the highest probability every time (index 8 in this case). The documentation reads "Generate samples of the specified shape." I'm assuming this implies the choice happens randomly. Can someone please explain why the same action is being chosen in my case?
PS: tf.version is returning tensorflow._api.v2.version

",31799,,,,,11/26/2021 7:53,tfp.Distributions.Categorical.sample() is picking the same action everytime after certain iterations,,0,2,,,,CC BY-SA 4.0 32524,1,,,11/26/2021 14:52,,2,72,"

I am building a CNN and am wondering if inputting derived or computed inputs are generally bad for the effectiveness of CNNs? Or just NNs in general?

By derived or computed values I mean data that is not "raw" and instead is computed based on the raw data. For example, in a very simple form, using time-series data as the "raw" data and computing a 30 day SMA as a "derived/computed" value, and as another input.

Is this bad practice at boosting the effectiveness of the network? If it is not a bad practice, are there any tips on what kind of computed values someone should consider when adding new inputs?

The goal of my NN is to make predictions on time-series data.

",51180,,2444,,11/29/2021 12:16,12/25/2022 3:51,Are derived or computed inputs bad for CNNs?,,2,0,,,,CC BY-SA 4.0 32526,1,34263,,11/26/2021 15:32,,5,873,"

As far as I understand, the Transformer's time complexity increases quadratically with respect to the sequence length. As a result, to make training feasible, a maximum sequence length is set, and, to allow batching, all shorter sequences are padded.

However, after a Transformer is trained, and we want to run it on a single sequence at inference time, the computational costs are far less than training. Thus, it seems reasonable that I would want to run the transformer on a larger input sequence length during inference time. From a technical perspective, this should be feasible.

I keep reading online that a Transformer cannot be run on a sequence size larger than the one seen during training. Why is this? Is it because the network weights will be unfamiliar with sequences of this length? Or is it more fundamental?

",12201,,18758,,11/26/2021 22:19,1/23/2022 21:07,Why do Transformers have a sequence limit at inference time?,,2,0,,,,CC BY-SA 4.0 32528,1,34172,,11/26/2021 22:37,,4,249,"

In convolutional neural networks, the convolution and pooling operations have a parameter known as stride, which determines the size of the jump the kernel makes over the input image. You can get more information regarding stride from the following, taken from here

Stride is the number of pixel shifts over the input matrix. When the stride is 1, we move the filters 1 pixel at a time. When the stride is 2, we move the filters 2 pixels at a time, and so on.

But I do not understand what is meant by the stride information of an image at the tensor level. Consider the following paragraph from the chapter named Real-world data representation using tensors from the textbook titled Deep Learning with PyTorch by Eli Stevens et al.

img = torch.from_numpy(img_arr)
out = img.permute(2, 0, 1)

We’ve seen this previously, but note that this operation does not make a copy of the tensor data. Instead, out uses the same underlying storage as img and only plays with the size and stride information at the tensor level. This is convenient because the operation is very cheap; but just as a heads-up: changing a pixel in img will lead to a change in out.

It mentions the stride information of an image at the tensor level. Do they mean the strides that are related to CNNs, pooling, etc., or are they referring to some other stride information?
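
To make my question concrete, this is what I see when I inspect the strides myself (a small snippet I tried; the shape is arbitrary):

import torch

img = torch.zeros(4, 5, 3)      # H x W x C
out = img.permute(2, 0, 1)      # C x H x W, no data copy

print(img.stride())             # (15, 3, 1)
print(out.stride())             # (1, 15, 3)
print(out.data_ptr() == img.data_ptr())  # True: same underlying storage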

",18758,,18758,,1/16/2022 8:39,1/20/2022 1:29,What is the stride information of an image referring here?,,1,0,,,,CC BY-SA 4.0 32532,2,,32500,11/27/2021 20:13,,0,,"

I've found this article that seems to answer my question: https://hazelcast.com/glossary/machine-learning-inference/

From this, my understanding is that inference-time describes when a machine learning system is put into use following training; so basically at the time of task application.

I think this would mean that the paper's authors are stating that the decomposition of sub-characters is occurring whenever the model is actively translating languages in a production environment.

",51014,,,,,11/27/2021 20:13,,,,0,,,,CC BY-SA 4.0 32534,1,,,11/27/2021 21:04,,1,124,"

I am training a model to generate images.

The model contains 5+5 layers:

Conv2D -> Upsample -> Conv2D -> Upsample -> Conv2D -> Upsample -> Conv2D -> Upsample -> Conv2D -> Upsample

I am modifying it as

Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample -> Conv2D -> BatchNorm -> Upsample

I am applying the batch normalization layers just before upsampling, as shown above, but I am not getting results that are at least comparable to the results of the model without any batch normalization layers.

Is my placement of the batch normalization layer wrong? If yes, then why?

",18758,,18758,,12/1/2021 9:07,12/1/2021 9:07,Why batch normalization before upsampling is giving worse results?,,0,2,,,,CC BY-SA 4.0 32536,2,,32524,11/28/2021 1:26,,1,,"

It seems to me that you're basically asking whether feature engineering is bad or not. It's not necessarily bad, but the main advantage of deep neural networks stems from the fact that they do feature engineering for you. The earlier layers learn/extract useful features, and the last layer (usually a fully-connected one) just does some kind of regression on the extracted features.

All in all, feature engineering is not necessarily bad, but rather, deep neural networks do it for you instead. So, they render feature engineering somewhat obsolete. However, if you have a rather small amount of data, or are using a shallow network for whatever reason, feature engineering can still benefit you a lot.
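
For instance, one could append a moving-average channel to the raw series and let the network use both (a small NumPy sketch; the window size is arbitrary):

import numpy as np

def add_sma_channel(series, window=30):
    """Stack the raw series with its simple moving average as a second input channel."""
    kernel = np.ones(window) / window
    sma = np.convolve(series, kernel, mode="same")
    return np.stack([series, sma], axis=0)   # shape (2, T): raw + derived feature

prices = np.random.rand(365)
x = add_sma_channel(prices)
print(x.shape)   # (2, 365)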

",32621,,,,,11/28/2021 1:26,,,,0,,,,CC BY-SA 4.0 32537,1,,,11/28/2021 2:51,,0,295,"

My CNN TensorFlow model reports 100% validation accuracy within 2 epochs, but it predicts incorrectly on single new images. (It is a multiclass problem; I have 3 classes.) How do I resolve this? Can you please help me understand these epoch results?

I have 1,000 images per class that are representative of my testing data. How can validation accuracy reach 1.00 within just two epochs when I have a dataset of 3,000 images in total, with an equal amount per class? (I would expect this to start at around 33 percent -- 1 / 3 classes.)

I understand overfitting can be a problem. I've added a dropout layer to try to solve this potential problem. From this question What to do if CNN cannot overfit a training set on adding dropout? I learned that a "model is over-fitting if during training your training loss continues to decrease but (in the later epochs) your validation loss begins to increase. That means the model can not generalize well to images it has not previously encountered." I don't believe my model is overfitting based on this description. (My model reports both high training and high validation accuracy. If my model was overfitting, I'd expect high training accuracy and low validation accuracy.)

My model:

def model():
  model_input = tf.keras.layers.Input(shape=(h, w, 3)) 
  x = tf.keras.layers.Rescaling(rescale_factor)(model_input) 
  x = tf.keras.layers.Conv2D(16, 3, activation='relu',padding='same')(x)
  x = tf.keras.layers.Dropout(.5)(x)
  x = tf.keras.layers.MaxPooling2D()(x) 
  x = tf.keras.layers.Flatten()(x)
  x = tf.keras.layers.Dense(128, activation='relu')(x)
  outputs = tf.keras.layers.Dense(num_classes, activation = 'softmax')(x)

Epoch results:

Epoch 1/10
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:1096: UserWarning: "`sparse_categorical_crossentropy` received `from_logits=True`, but the `output` argument was produced by a sigmoid or softmax activation and thus does not represent logits. Was this intended?"
  return dispatch_target(*args, **kwargs)
27/27 [==============================] - 13s 124ms/step - loss: 1.0004 - accuracy: 0.5953 - val_loss: 0.5053 - val_accuracy: 0.8920
Epoch 2/10
27/27 [==============================] - 1s 46ms/step - loss: 0.1368 - accuracy: 0.9825 - val_loss: 0.0126 - val_accuracy: 1.0000
Epoch 3/10
27/27 [==============================] - 1s 42ms/step - loss: 0.0020 - accuracy: 1.0000 - val_loss: 5.9116e-04 - val_accuracy: 1.0000
Epoch 4/10
27/27 [==============================] - 1s 42ms/step - loss: 3.0633e-04 - accuracy: 1.0000 - val_loss: 3.5376e-04 - val_accuracy: 1.0000
Epoch 5/10
27/27 [==============================] - 1s 42ms/step - loss: 1.7445e-04 - accuracy: 1.0000 - val_loss: 2.2319e-04 - val_accuracy: 1.0000
Epoch 6/10
27/27 [==============================] - 1s 42ms/step - loss: 1.2910e-04 - accuracy: 1.0000 - val_loss: 1.8078e-04 - val_accuracy: 1.0000
Epoch 7/10
27/27 [==============================] - 1s 42ms/step - loss: 1.0425e-04 - accuracy: 1.0000 - val_loss: 1.4247e-04 - val_accuracy: 1.0000
Epoch 8/10
27/27 [==============================] - 1s 42ms/step - loss: 8.6284e-05 - accuracy: 1.0000 - val_loss: 1.2057e-04 - val_accuracy: 1.0000
Epoch 9/10
27/27 [==============================] - 1s 42ms/step - loss: 7.0085e-05 - accuracy: 1.0000 - val_loss: 9.3485e-05 - val_accuracy: 1.0000
Epoch 10/10
27/27 [==============================] - 1s 42ms/step - loss: 5.4979e-05 - accuracy: 1.0000 - val_loss: 8.5952e-05 - val_accuracy: 1.0000

Model.fit and model.compile:

model = model()

model = tf.keras.Model(model_input, outputs)
  
 model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
              metrics=['accuracy'])
  
hist = model.fit(
  train_ds,
  validation_data=val_ds,
  epochs=10
)

Code to predict new image:

def makePrediction(image):
  from IPython.display import display
  from PIL import Image
  from tensorflow.keras.preprocessing import image_dataset_from_directory 
  img = keras.preprocessing.image.load_img(
  image, target_size=(h, q)
  )
  img_array = keras.preprocessing.image.img_to_array(img)
  img_array = tf.expand_dims(img_array, 0) #Create a batch
 
  predicts = model.predict(img_array)
  p = class_names[np.argmax(predicts)]
  return p

Going to the "data" directory and using the folders to create a dataset. Each folder is a class label:

from keras.preprocessing import image
directory_data = "data"
tf.keras.utils.image_dataset_from_directory(
    directory_testData, labels='inferred', label_mode='int',
    class_names=None, color_mode='rgb', batch_size=32, image_size=(256,
    256), shuffle=True, seed=123, validation_split=0.2, subset="validation",
    interpolation='bilinear', follow_links=False,
    crop_to_aspect_ratio=False
)
 
tf.keras.utils.image_dataset_from_directory(directory_testData, labels='inferred')

Creating dataset and splitting it:

Train_ds code: (Output: Found 1605 files belonging to 3 classes. Using 1284 files for training.)

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
  directory_data = "data",
  validation_split=0.2,
  subset="training",
  seed=123,
  image_size=(h, w),
  batch_size=batch_size)

Val_ds code: (Output: Found 1605 files belonging to 3 classes. Using 321 files for validation.)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
directory_data = "data",
  validation_split=0.2,
  subset="validation",
  seed=123,
  image_size=(h, w),
  batch_size=batch_size)
",51199,,51199,,11/29/2021 22:06,11/29/2021 22:06,"Why is val accuracy 100% within 2 epochs and incorrectly predicting new images? (1,000 images per class when training)",,0,10,,,,CC BY-SA 4.0 32541,2,,31952,11/28/2021 18:32,,0,,"

The convolution layer processes a certain part of the image tensor and compresses it to a lower dimension. The spatial encoding adds information about where the pixels were located in the image. Loosely speaking, it tells you "the pixels I just processed were in the top-left corner of the image". That way, the classification in the fully-connected layer can additionally use this information. For more information on CNNs click here.

Maybe it becomes clearer in text processing. Transformer models for NLP have so-called positional encoding, which is the counterpart to spatial encoding in image processing. When a sentence is processed, the positional encoding gives the index of each word in the whole sequence.

",,user51195,,,,11/28/2021 18:32,,,,0,,,,CC BY-SA 4.0 32543,2,,31614,11/28/2021 19:47,,1,,"

Actually, the given pipeline was used in the old days of Graph Neural Networks.

Canonical paper on the subject is Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering.

You start from an arbitrary graph with adjacency matrix $A_{ij}$ (let us assume that the graph is undirected), such that: $$ A_{ij} = \begin{cases} 1 & \text{if there is an edge between vertices $i$ and $j$} \\ 0 & \text{otherwise} \end{cases} $$

Then one constructs the graph Laplacian: $$ L = D-A $$ $D$ is the degree matrix (number of edges entering the given vertex). There are also different normalizations in the literature; for example, $I - D^{-1/2} A D^{-1/2}$ is called the normalized Laplacian.

This matrix has several properties:

  • It is symmetric
  • Non-negative definite

From the first statement it follows that the matrix can be diagonalized, due to the Spectral theorem.

Therefore, it makes sense to perform the eigendecomposition of this operator. And the eigenvectors form the graph Fourier basis.

Note that, in the special case when the graph is a regular square grid, the graph Laplacian is just the discrete Laplace operator: $$ \begin{pmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \\ \end{pmatrix} \qquad (\text{for 2d case}) $$ And the Fourier basis consists of plane waves: $$ \sim e^{i (k_i i + k_j j)} $$

The next important fact is the Convolution theorem, which states that the convolution of two signals can be computed as the inverse Fourier transform of the pointwise product of their Fourier transforms: $$ f * g = \mathcal{F}^{-1} [\mathcal{F}[f] \cdot \mathcal{F}[g]] $$

These operations correspond to steps 3 and 4 in the pipeline in the OP.

I am not aware of the simple geometrical intuition in this setting. But in the following research, full eigendecomposition was truncated to Chebyshev polynomials, and then up to the first term in the decomposition, which gave rise to Graph Convolutional Networks by T.Kipf.

They allow for a more intuitive and visual interpretation. Given adjacency matrix $A$ and input feature map on the graph $H^{(l)}$, one defines the output of the layer to be: $$ f(H^{(l)}, A) = \sigma(\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} H^{(l)} W^{(l)}) $$ where $\tilde{A}$ is the adjacency matrix with self-loops (edges $i \rightarrow i$), $\tilde{D}$ is its degree matrix, and $W^{(l)}$ is the matrix of learnable parameters.

In essence, one transforms the feature vectors $H^{(l)}$ by a pointwise linear transformation and aggregates the information from the neighborhood with some coefficients. Equivalently, the function above can be rewritten as:

$$H_{i}^{l+1}=\eta\left(\frac{1}{\hat{d}_{i}} \sum_{j \in N_{i}} \hat{\boldsymbol{A}}_{i j}{W}^{l} H_{j}^{l}\right)$$

From the formula above one can see that graph convolution is a summation of the features of the neighbouring vertices, transformed by the weight matrix ${W}^{l}$, followed by division by a normalizing constant.
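
A compact sketch of this propagation rule (a NumPy toy version roughly following the formula above, not a reference implementation):

import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: normalise A with self-loops, aggregate neighbours, apply weights."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # \tilde{D}^{-1/2} \hat{A} \tilde{D}^{-1/2}
    return np.maximum(0, A_norm @ H @ W)           # ReLU(A_norm H W)

# Toy graph with 3 vertices, 2 input features, 4 output features
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.randn(3, 2)
W = np.random.randn(2, 4)
print(gcn_layer(A, H, W).shape)   # (3, 4)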

",38846,,38846,,11/29/2021 5:06,11/29/2021 5:06,,,,4,,,,CC BY-SA 4.0 32549,2,,32428,11/29/2021 10:23,,3,,"

The fundamental issue is that one doesn't really want to find an optimum of one's optimization problem. We are really interested in generalization - not optimality. And we still poorly understand how and why neural models generalize so well.

Now, it looks like the generalization properties of neural models have something to do with the structure of their optimization landscape and some particular properties of their well-generalizing minima. Empirically, the SGD class of optimizers is better at finding such generalizing minima.

This paper illustrates these ideas by talking about "wide flat" minima and showing how one can use SGD with stochastic weight averaging to improve generalization and convergence.

",20538,,,,,11/29/2021 10:23,,,,0,,,,CC BY-SA 4.0 32550,2,,32446,11/29/2021 11:53,,1,,"

I don't know whether you're confused about this code because you're not very familiar with Python or because you're not very familiar with reinforcement learning (specifically, DQN and experience replay); the code should be quite clear if you know Python, so maybe the issue is unfamiliarity with DQN.

Let's take a look at the observation method.

def observation(self, observation):
    self.buffer[:-1] = self.buffer[1:]
    self.buffer[-1] = observation
    return self.buffer
  • self.buffer[:-1] = self.buffer[1:] essentially drops/forgets the first (which is also the oldest) observation in the buffer.

  • self.buffer[-1] = observation adds the new observation (passed as parameter) as the last element of the buffer

You can execute the following code to see what that method does.

buffer = [10, 7, 3]
print(buffer)

buffer[:-1] = buffer[1:]

observation = 5
buffer[-1] = observation

print(buffer) # [7, 3, 5]

If you are not familiar with the experience replay technique, you can take a look at this or this answers.

",2444,,,,,11/29/2021 11:53,,,,0,,,,CC BY-SA 4.0 32551,2,,22627,11/29/2021 12:36,,1,,"

There is also the proto-book Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges (2021), written by some of the experts on the topic. The book does not focus only on graphs and graph neural networks (GNNs), but also covers manifolds, geodesics, and other mathematical concepts related to geometric deep learning and other GDL models.

",2444,,,,,11/29/2021 12:36,,,,0,,,,CC BY-SA 4.0 32552,1,,,11/29/2021 14:48,,0,24,"

As stated in the title, is there a way to adapt PSO to an online scenario where new data samples arrive continuously?

In more detail: suppose that I have a classifier with several parameters for which the optimal values are to be chosen automatically, instead of being predefined. I want to use PSO to select the parameters. I know this is doable in a static scenario, where the data set is fixed. However, if new data samples arrive over time (and in large amounts), is there a way to make PSO work on such dynamic data streams?

Also, I am open to other ways to implement self-adaptive parameters. PSO is a possible choice but if it's not possible I'd love to hear your suggestions about other approaches.

",31022,,2444,,11/29/2021 15:58,11/29/2021 15:58,Is there a way to adapt Particle Swarm Optimization to an incremental/online learning setting?,,0,3,,,,CC BY-SA 4.0 32553,1,32556,,11/29/2021 14:51,,2,148,"

Let's say that I want to create an optimization algorithm, which is supposed to find an optimum value for a given objective function. Creating an optimization algorithm to explore the search space can be quite challenging.

My question is: can machine learning be used to automatically create optimization algorithms? Is there any source to look at for this?

",47317,,2444,,11/29/2021 16:03,11/29/2021 17:11,How do I use machine learning to create an optimization algorithm?,,1,1,,,,CC BY-SA 4.0 32554,1,,,11/29/2021 14:56,,0,101,"

The Hopfield network is a simple and traditional network. We feed some patterns into the network (learning/training). There is no training as such in a Hopfield network, as the weight calculation just adds up the connection strengths between neurons. The network goes into recall mode when we feed it a new, unseen (partially corrupted) pattern and then deactivate the input. The network iterates until it reaches a global or local minimum.

My question is that, in the end, it always remembers something, i.e. it settles on one of the possible patterns (combinations).

For example, if we have five neurons, it settles on one of the $2^5=32$ possible patterns. So one could say "OK, this is what I am looking for", but it might not be. What mechanism is available in the Hopfield network to check whether the found pattern is identical or similar to the input pattern?

",51236,,46469,,10/5/2022 15:48,11/4/2022 16:03,What is remembering in Hopfield network?,,1,0,,,,CC BY-SA 4.0 32555,1,,,11/29/2021 17:07,,2,101,"

I would like to implement the reinforcement learning algorithms SARSA and Q-Learning for the board game Connect Four.

I am familiar with the algorithms and know about their limitations regarding large state spaces due to storing all of this information in the Q-Table. Connect Four has a large state space estimated at around 7.1*10^13, e.g. MIT Slide 7, which is why simply applying these algorithms won't work (though this thesis claims it did)

However, I have found this answer to a similar post that proposes a possible solution to simplify the state space.

For one thing, you could simplify the problem by separating the action space. If you consider the value of each column separately, based on the two columns next to it, you reduce N to 3 and the state space size to 106. Now, this is very manageable. You can create an array to represent this value function and update it using a simple RL algorithm, such as SARSA

Unfortunately, I don't understand the proposed simplification and would like to ask the following questions:

  1. The action space is separated from the state space by considering each column separately. However, if my understanding of SARSA and QL is correct, they use Q(S,A) to estimate the value function; therefore, the state-action pair is assigned a value.
  2. How does one calculate the value of a column based on the two columns next to it?
  3. Also, what does "next to it" mean in this context? If the two adjacent columns of each column are used, then we create five pairs (N=5), or are pairs created from the inside out (e.g. middle three columns, middle five, all seven)?
  4. Is a state (of the entire board?) then mapped to the array containing the value function for each action/column?

Any references to other literature or simplifications would be much appreciated!

",51242,,,,,11/29/2021 17:07,Reduction of state space of the game Connect Four to apply RL algorithms SARSA and Q-Learning,,0,0,,,,CC BY-SA 4.0 32556,2,,32553,11/29/2021 17:11,,3,,"

Machine learning has been used to automatically learn new optimization/learning algorithms. This task is often known as meta-learning, i.e. you learn to learn, in this case, an optimization algorithm, but note that meta-learning does not just refer to learning optimization algorithms (see this blog post).

The blog post Learning to Optimize with Reinforcement Learning (2017) is a good introduction to the topic and focuses on the approach proposed in this paper Learning to Optimize (2016), which uses reinforcement learning to solve this problem: more specifically, they learn a policy (in practice, represented as a neural network) that represents the learned optimization algorithm.

There are other related approaches: for example, you may be interested in the paper Learning to learn by gradient descent by gradient descent (2016, NeurIPS).

",2444,,,,,11/29/2021 17:11,,,,0,,,,CC BY-SA 4.0 32557,2,,32526,11/29/2021 17:47,,1,,"

To some extent, this is true; the position-wise feedforward layers can be added or subtracted to fit the sequence length. The matrix operations can similarly be scaled to fit the sequence length.

However, the computational complexity comes from the matrix operations in the attention layer. Those are not trained; there are no trained parameters in the attention mechanism (see figure 2 and equation (1) in Vaswani et al). So, those have to be computed during inference as well.

Another challenge would be the output layer. That layer is a regular feedforward layer and thus has a fixed input size; that is, you cannot add new parameters during inference.

Of course, there is a caveat to this; there are now transformers that allow recurrence, such as Transformer-XL and Memformer. These do, in a way, allow longer input sequences than the "max sequence length".

",31879,,,,,11/29/2021 17:47,,,,3,,,,CC BY-SA 4.0 32560,2,,7525,11/30/2021 0:42,,2,,"

To better understand my point of view: I use deep learning for geomatics and remote sensing purposes.

So, after reading the two great previous answers with interest, I would like to add my small contribution to this thread.

First, I would like to emphasise insight: I agree with the second point of Dennis's answer; "simple" and well-understood benchmarks help pinpoint the strengths of AI methods, which is good. But this is known by AI researchers. And if they work towards general improvement, and not just on one benchmark to "slingshot" their career as John's second point mentions, general improvement will be made. These thoughts link directly to my first point:

  1. Multiplicity of benchmarks = variety of problems = overfitting avoidance: If a paper presents a performance increment on only one benchmark, it ultimately says that the method is a specific one. So even if a single benchmark can lead to "overfitting", a group of benchmarks offers a better variety of problems and thus better insight into the generalization properties of an AI technique. A good application case of this is the PointNet paper (the first end-to-end 3D point cloud neural network), where the authors tested their approach against ModelNet40 (a well-known 3D classification benchmark), but also against MNIST, to verify the generalization capabilities of the network. Could "meta-benchmarks" be a thing? Benchmarks that are a concatenation of existing well-known ones (or of ones known to give insight into one specific strength).

Then, in my field, where AI is seen more as a tool than in the theoretical deep learning field, it is known that there is a gap between benchmark results and "real-life" (or applied) cases. This, in my opinion, is induced by the very high quality of benchmark ground truth, which isn't available in all cases. This leads to my second point:

  2. Un/semi-supervised benchmarks = as-noisy-as-reality benchmarks: As generalization is the capacity to deal with the unknown, and because creating benchmarks on complex data is often time- and money-consuming, we should move away from the purely supervised way of thinking. This will increase the number of benchmarks available, which is good with regard to the first point, but will also force theoretical AI research to focus more on these techniques. It is well known that something is off with current supervised learning methods: why do we need to show every possible case to a network? Why can it not infer new knowledge from unlabeled data? The biggest drag on this will be the assessment of results. While semi-supervised benchmarks can still give classical metrics, we will need a way of analyzing results on unlabeled data. In applied cases where ground truth is often not available, this is often done by visualization. But progress can be made to create a better, unbiased way of ranking methods. One idea could be a double-blind, random quality assessment of the results, in a captcha-like manner. I know this idea works only for human-understandable data, but other ways could be, and need to be, found.
",51249,,,,,11/30/2021 0:42,,,,0,,,,CC BY-SA 4.0 32562,1,,,11/30/2021 9:21,,2,65,"

Typically, a Reinforcement Learning learning problem is formalized as finding an optimal policy for a Markov Decision Process (MDP). In many real-life situations, however, an agent can only get partial information from the environment. For example, Partially Observable MDPs are used to model the case where the agent does not fully observe the state.

I was wondering whether there is any well-established formalism for the case where the agent does not fully observe the reward signal.

In particular, I am thinking about the case where, for every state-action pair $(s, a)$, the agent receives the reward $R(s, a)$ with probability $1 - \varepsilon$ and does not receive anything with probability $\varepsilon$. Of course, in principle, this setting can be thought of as a regular MDP with a stochastic reward, but here I would like the agent to behave optimally w.r.t. $R$.

I would really appreciate it if you could point me to some relevant literature!

",32901,,2444,,12/1/2021 8:06,12/1/2021 8:06,Is there a mathematical formalism to deal with a missing reward signal?,,1,5,,,,CC BY-SA 4.0 32564,2,,32562,11/30/2021 10:10,,2,,"

Your setting (of randomly dropping out reward signals) impacts expected future reward by multiplying everything by a common factor $(1-\epsilon)$.

As reinforcement learning (RL) control is based on maximising expected future reward, and multiplying by a positive constant does not affect ranking of action values, all existing RL methods will cope just fine without modification. They will behave optimally in the limit of training - with all the usual caveats of course - although learning will be slower due to added variance, and value estimates lower.

If the agent is allowed to observe that the reward signal is missing (as opposed to being an observed zero reward), then it could additionally estimate $\epsilon$, and correct its learned value function. I would recommend this is handled at the end of training as a separate function instead of modifying $Q(s,a)$ during training. That is because initial estimates for $\epsilon$ are likely to be inaccurate and make learning even slower (at least for TD learning due to bootstrapping on less accurate values).
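
A rough sketch of what that end-of-training correction might look like (assuming a constant, observable drop-out rate; this is only an illustration of the idea above):

# During training, keep a running estimate of the drop-out probability:
#   epsilon_hat = n_missing_rewards / n_steps
# After training, rescale the learned action values to recover values w.r.t. R:
def corrected_q(Q, epsilon_hat):
    """Q: learned table of action values under the lossy reward signal."""
    return Q / (1.0 - epsilon_hat)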

Whether any corrected value function makes sense will depend on what it means to "not observe" the reward, and what the reward represents in terms of the environment and agent's goals. You may not need to know if all you care about is whether the agent is behaving optimally.

The setting becomes more complex if $\epsilon$ is made a function of $a$, $s$ or both, and solutions will be different depending on whether this function is known or unknown.

",1847,,1847,,11/30/2021 10:16,11/30/2021 10:16,,,,1,,,,CC BY-SA 4.0 32570,2,,24831,11/30/2021 22:19,,0,,"

As far as I understood, the difference is the following: original Transformers use a fixed type of encoding, based on sine/cosine functions.

On the other hand, GPT produces two embedding vectors: one of the input tokens, as usual in language models, and another for token positions themselves.

",26580,,2444,,12/31/2021 9:12,12/31/2021 9:12,,,,0,,,,CC BY-SA 4.0 32571,1,32589,,11/30/2021 22:31,,2,160,"

Usually, I see the conventions:

  • discrete random variable is denoted as $X$,
  • the pmf is written as $P(X=x)$ or $p(X=x)$ or $p_{X}(x)$ or $p(x)$, where $x$ is an instance of $X$
  • a continuous random variable is denoted as $X$,
  • the pdf is denoted as $f_{X}(x)$ or $f(x)$, where $x$ is an instance of $X$; sometimes $p$ is used here too instead of $f$.

However, the VAE paper uses slightly different notation that I'm trying to understand

Let us consider some dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$ consisting of $N$ i.i.d. samples of some continuous or discrete variable $\mathrm{x}$. We assume that the data are generated by some random process, involving an unobserved continuous random variable $\mathbf{z}$. The process consists of two steps: (1) a value $\mathbf{z}^{(i)}$ is generated from some prior distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{z}) ;(2)$ a value $\mathbf{x}^{(i)}$ is generated from some conditional distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$. We assume that the prior $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ and likelihood $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$ come from parametric families of distributions $p_{\boldsymbol{\theta}}(\mathbf{z})$ and $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$, and that their PDFs are differentiable almost everywhere w.r.t. both $\boldsymbol{\theta}$ and $\mathbf{z}$. Unfortunately, a lot of this process is hidden from our view: the true parameters $\theta^{*}$ as well as the values of the latent variables $\mathrm{z}^{(i)}$ are unknown to us.

So I am looking at these:

  • $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$
  • $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$
  • dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$

So I know the subscript $\theta$ denotes that those are the parameters of the pdf. The paper says "discrete variable $\mathrm{x}$", "unobserved continuous random variable $\mathbf{z}$", and "latent variables $\mathrm{z}^{(i)}$". At the top, where I wrote "discrete random variable $X$", it seems like that's the equivalent of "discrete variable $\mathrm{x}$" in this paper.

So, it looks like they're writing the PDFs as a function of the random variables. Is my assumption correct? Because it is different than the typical conventions I see.

Edit: it looks like his other paper has a notation guide in the appendix, though it seems like he's conflating random vectors and instances of vectors in the notation? https://arxiv.org/pdf/1906.02691.pdf

",46842,,46842,,12/2/2021 18:00,12/3/2021 11:32,Are the authors of the VAE paper writing the PDFs as a function of the random variables?,,3,4,,,,CC BY-SA 4.0 32572,1,,,11/30/2021 22:37,,0,12,"

I have been given a task with a real transaction dataset. The task is to predict something using either logistic regression or simple binary classification.

The columns are as follow:

  • Transaction ID
  • Quantity purchased
  • Product name
  • Coupon code
  • Transaction Date
  • City (where transaction was made)
  • Delivery fee (if any)
  • Total amount spent

I am having a rough time figuring out what to predict using regression or classification given only these columns.

e.g.: given a full row, how much is the total spent... etc.

In other words, I need help deciding what the label of my dataset should be and what the reasoning behind choosing that label would be.

",45499,,45499,,12/1/2021 6:40,12/1/2021 6:40,What to predict in a limited transaction dataset?,,0,4,,,,CC BY-SA 4.0 32573,1,,,11/30/2021 22:47,,0,84,"

Consider the following paragraph from section 5.4 Gradients fo Matrices of the chapter Vector Calculus from the textbook titled Mathematics for Machine Learning by Marc Peter Deisenroth et al.

Since matrices represent linear mappings, we can exploit the fact that there is a vector-space isomorphism (linear, invertible mapping) between the space $\mathbb{R}^{m \times n}$ of $m \times n$ matrices and the space $\mathbb{R}^{mn}$ of $mn$ vectors. Therefore, we can re-shape our matrices into vectors of lengths $mn$ and $pq$, respectively. The gradient using these $mn$ vectors results in a Jacobian of size $mn \times pq$. .... In practical applications, it is often desirable to re-shape the matrix into a vector and continue working with this Jacobian matrix: The chain rule... boils down to simple matrix multiplication, whereas in the case of a Jacobian tensor, we will need to pay more attention to what dimensions we need to sum out.

What I understood from the paragraph is: there is always a one-to-one mapping(?) between $\mathbb{R}^{m \times n}$ and $\mathbb{R}^{mn}$. So, we use this property to replace any element of $\mathbb{R}^{m \times n}$ (a matrix) with an element of $\mathbb{R}^{mn}$ (a vector).

My doubt is: how does this property allow us to replace the matrix with a vector without any discrepancies?

",18758,,2444,,12/3/2021 11:36,1/28/2023 10:01,How the vector-space isomorphism between $\mathbb{R}^{m \times n}$ and $\mathbb{R}^{mn}$ guarantees reshaping matrices to vectors?,,2,2,,,,CC BY-SA 4.0 32575,1,,,12/1/2021 4:25,,0,379,"

Good day. I have a custom dataset for object detection with an imbalance: each image has only one object annotation.

I trained an object detection model (EfficientDet-dx) with the TensorFlow Object Detection API on this dataset. But after training, the model predicts only one object per image, even when the image contains many trackable objects. It looks like the model learned, in the wrong way, that it should find only one object per image.

Here's my question: how should I train the model so that it finds objects independently of the number of objects per image in the training set?

Any help would be much appreciated.

What I tried:

  • Copy and Paste augmentation(CAP)
    • result: unfortunately, the dataset has no mask annotations, so I used a deep learning model trained on an open dataset for background subtraction, but the subtraction did not work well. As you can see, the edges of the pasted objects are not clean, and some parts of the objects are missing.

  • Mix another dataset like COCO, PASCAL VOC.
    • result: it wasn't as bad as CAP. However, the mixed dataset becomes very large, and the unrelated labels also end up in the predictable label list.
",51277,,,,,12/1/2021 4:25,Object detection: when there's only 1 object in each image,,0,4,,,,CC BY-SA 4.0 32577,1,,,12/1/2021 9:47,,3,90,"

I'm interested in the following graphs. A neural network was trained to recognise digits from the MNIST dataset and then the labels were randomly shuffled and the following behaviour was observed. How can this behaviour be explained?

What explains the apparent 'mirroring' of the graphs on the RHS, and the fact that training error on the RHS is approx. equal to validation error on the LHS?

It's evident from the graphs that the neural network is not learning noise - so what exactly is it learning?

",51284,,,,,12/1/2021 12:33,Can neural networks learn noise?,,1,2,,,,CC BY-SA 4.0 32581,2,,26928,12/1/2021 11:58,,1,,"

Note that, in knowledge representation and reasoning, common-sense knowledge is traditionally represented as sentences (in logic). For example, one possible sentence that you could store in a knowledge base is

The earth revolves around the sun.

This is common-sense knowledge (ignoring the fact that this was not the case in the past until Copernicus and Galileo came along in the 1500s). The programming language PROLOG is based on this type of knowledge, facts, and deduction.

Self-supervised learning (SSL) has been used to learn representations of data (which is often done in the context of natural language processing), but these representations may not be knowledge, in the sense that we may not understand what is encoded in these representations or whether they are related to our common-sense knowledge.

So, whether or not SSL can be used for acquiring/approximating knowledge depends on the definition of knowledge that you use.

If you use the traditional definition of common-sense knowledge, SSL is not usually used for knowledge representation, but it has been used in the context of knowledge graphs (see the SelfLinKG approach), which can be viewed as a graphical representation of a knowledge base. So, it's possible that SSL can also be useful for approximating common-sense knowledge.

To answer your question more directly, my impression is that knowledge graphs are promising and they are useful in practice. Google uses them in its search engine (and probably also in the Google Assistant). Whenever you search, for example, for a famous person (for example, Gandhi), on the right side, you should see a window describing certain details of that famous person. That's done using Google's knowledge graph.

Right now, people are trying to develop techniques to learn embeddings of the nodes or relations in a knowledge graph, with the goal of using these embeddings for discovering new knowledge. This area is called knowledge graph embedding (KGE). My other answer provides more details (but there are many surveys on KGEs that you can find, for instance, on Google Scholar).

Having said this, knowledge representation has been a big problem since the early days of AI (for example, see the frame problem) and there hasn't been much progress in this area (AFAIK). Most people are focusing on neural networks right now, but explainable AI techniques for neural networks could potentially play a role in knowledge representation.

",2444,,2444,,12/17/2021 15:53,12/17/2021 15:53,,,,0,,,,CC BY-SA 4.0 32582,2,,32577,12/1/2021 12:27,,1,,"

What explains the apparent 'mirroring' of the graphs on the RHS,

The model starts untrained and no better than random guessing (the baseline). As the training progresses, the model does better than random guessing on the training data, but does worse than initially on the validation data.

The decrease in performance is because the data it is being trained on is now deliberately labelled differently to the validation set. The task has been made impossible to generalise on. For example, if there is a handwritten "8" labelled as 4 in the training database, then a very similar-looking "8" in the validation database may be labelled 1 - if the trained model correctly matches up features, it will guess 4 consistently. This issue will occur frequently over the dataset, and similar issues will affect validation examples that are not close to training samples (in the feature space they will be "in-between" many training examples, most of which will be labelled differently from the validation example's label).

The degree to which the curves on the random labels graph diverge is roughly affected by:

  • Size of the training data. A larger training dataset will cause less divergence. A near-infinite dataset would result in a flat line close to the baseline for both training and validation results.
  • Capacity of the model. A model with greater capacity to learn complex functions can better approximate the training data. The better it does this, the closer the training error will get to zero, and the larger the validation error will be as a result.

The mirroring effect between training and validation curves is not perfect, either in practice or "ideally". With a large dataset and very high capacity model capable of overfitting that dataset, then validation scores should be similar to random guessing and thus close to the baseline, whilst the training curve would tend towards zero error.

and the fact that training error on the RHS is approx. equal to validation error on the LHS?

That is a coincidence. However, you can read it roughly as "despite the scrambled labels, the model does learn to predict labels on the training dataset, about as well as it generalised before on the unscrambled data".

It's evident from the graphs that the neural network is not learning noise - so what exactly is it learning?

It is learning a kind of noise - it is doing its best to approximate a function that returns a fixed set of incorrect, randomly assigned labels. When the random values are frozen in the dataset, they define a complex function that can be learned, even though it is not meaningful or useful for any other purpose.

The problem with validation error is that this noise function is not coherent, so there is no approximation that will ever do better than random chance when you look at ability to generalise. In fact it will usually do worse in rough proportion to how well it learns the training data.

If labels were somehow assigned using coherent noise e.g. Perlin noise in feature space (I'm not sure how you would do this, but it is feasible), then the model may perform better on the validation set.

If the labels were instead randomised each epoch, so that they were not fixed, then the training would learn nothing, and you would end up with a roughly flat line for both training and validation error.

",1847,,1847,,12/1/2021 12:33,12/1/2021 12:33,,,,0,,,,CC BY-SA 4.0 32583,1,,,12/1/2021 11:24,,0,57,"

My understanding of how CNN operates in image detection is through the use of kernels that slide through the image to detect features (edges and so on). So a single kernel could potentially be learning to detect an edge no matter where it is in the image. This is great for image recognition problems where an image of a dog shifted to the right or inverted is still an image of a dog. This article states "the features the kernel learns must be general enough to come from any part of the image". The article also states how using CNN for categorical data where the order in which data is organised is irrelevant can be "disastrous".

However, there are instances where it is desirable for the algorithm to be location-aware in order to classify better. Take the case of using a CNN to train a network that will predict card play in the game of bridge (a double-dummy version where all cards are laid out open - perfect information, deterministic). At the beginning of the game, the cards dealt to the four players could look (very unrealistically) something like this.

where Leader = the player playing the lead card in round 1, and the subsequent players are organised as Leader.LeftHandOpponent, Leader.Partner and Leader.RightHandOpponent. Each player's cards are organised into four suits, starting from the Trump_Suit and then the other suits in the original suit hierarchy. Cards go from the highest value at the top ('A') to the lowest value at the bottom ('2').

Here is a transpose of the image above.

This layout provides a lot of visual cues in terms of how the gameplay will proceed and who will end up winning how many tricks, if viewed from the perspective of the distribution of control cards within each suit and overall hand strength. So, the answer to the question of whether a CNN would actually be able to process this data to provide good predictions is a resounding yes (at least to me).

However, here is the problem - A regular CNN with a sliding kernel with a (4, 1) stride and no padding would make no distinction between the red boxes when in reality there is a massive difference between them.

Possible solution? - A filter consisting of non-sliding kernels, or kernels that only slide in one direction (perhaps horizontally or vertically), would theoretically learn only location-aware features, and that could potentially improve accuracy? Just shooting an arrow in the sky.

Has this been researched? Has anybody implemented this already? Could this work?

P.S.: CNNs were used in AlphaGo Zero with great success. Obviously, in the game of Go, patterns located at the top of the board carry the same weight as those located at the bottom: the gameplay does not change if the board is rotated 180 degrees. This, however, is not the case in the game of contract bridge. I am looking for ideas on how this can be resolved.

",51290,EveryFin,,,,7/29/2022 16:02,Non-sliding kernels for location-aware processing in Convolutional Neural Networks,,1,0,,,,CC BY-SA 4.0 32584,2,,32583,12/1/2021 12:18,,1,,"

As you observed, convolutions are "shift/translation equivariant". This is extremely useful and beneficial for image/video/audio processing where this "symmetry" exists in the underlying domain.
This is not the case in your setting. Each card from each suit carries a different meaning. You actually want a different (trainable) weight for each card for each player. A fully connected layer seems more suitable for this setting.
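As a minimal sketch of that idea (assuming the deal is encoded as a 52 x 4 binary matrix, one column per player; all shapes, layer sizes and the output target are made up):

import tensorflow as tf

# Hypothetical encoding: rows = 52 cards, columns = 4 players, a 1.0 where that
# player holds that card. All shapes and layer sizes below are illustrative only.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(52, 4)),
    tf.keras.layers.Flatten(),                         # every card/player cell keeps its own weight
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(14, activation="softmax"),   # e.g. predicted tricks 0-13 (made-up target)
])
model.summary()

Flattening and using dense layers gives every card/player position its own weights, so the model is fully location-aware.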

",51308,Shai,,,,12/1/2021 12:18,,,,0,,,,CC BY-SA 4.0 32585,1,,,12/1/2021 14:22,,0,197,"

I have a homework assignment. The task is to decide whether a PRNG-generated lottery is attackable/crackable or not.

Details:

Lottery: There is a lottery game where you have to choose 8 numbers between 1-20 for field A and 1 number between 1-4 for field B. A draw is generated every 5 minutes (7:05 - 22:00), so there are ~64k draws/year.

For example:

  • A: [3, 5, 6, 7, 10, 13, 17, 18]
  • B: 2

Possible dependent variables: Timestamp, DrawNum (It is between 85-264 every day. 7:05 is 85 because there are 425 minutes between 00:00 and 7:05. (425/5=85))

Unfortunately, we don't have many dependent variables, and there is no clue about the PRNG algorithm. I think these two dependent variables are not enough to predict the numbers. I am thinking of an LSTM to predict the next 1stNum based on the previous ones, and of using the same model for the other numbers.

What do you think? How would you predict the next set of numbers? Which ML algorithm is the best for this use case?

",51291,,,,,12/1/2021 14:22,Which ML algorithm is the best for predict the next PRNG generated numbers?,,0,6,,,,CC BY-SA 4.0 32586,2,,32388,12/1/2021 15:51,,0,,"

To be fair, I too struggle to see a reason for it to be included.

I'll try to argue it: assume that the optimal path has length $N$, and therefore you have costs $c_1,\dots,c_N$, all positive and bigger than some $\epsilon>0$. Such an $\epsilon$ can't be bigger than the minimum of $c_1,\dots,c_N$.

Now, for that $+1$ correction factor to be needed, it should be possible that

$$\cfrac{c_1+\dots+c_N}{\epsilon}<N$$

Since $\epsilon<\min\{c_1,\dots,c_N\}$, I could try to prove instead that

$$\cfrac{c_1+\dots+c_N}{\min\{c_1,\dots,c_N\}}<N$$

But you can divide both sides by $N$ and multiply by $\min\{c_1,\dots,c_N\}$, obtaining

$$\cfrac{c_1+\dots+c_N}{N}<\min\{c_1,\dots,c_N\}$$

On the left-hand side you have the average of the costs, which of course can't be smaller than the minimum of the costs, so the inequality has no solution. This means that, even without the +1 factor, you can't underestimate the length of the path to the optimal solution using $\lfloor C^\star/\epsilon\rfloor$.

On the flip side, by the way, when you compute $\lfloor C^\star/\epsilon\rfloor$ you're actually trying to find an upper bound for the length of the optimal path, so if you add +1 (roughly equivalent to computing the ceiling instead of the floor) you still get an upper bound. Also, since you're computing a big-$O$ bound, keep in mind that if a function is $O(b^k)$ then it's also $O(b^{k+c})$ for every $c>0$, so although the +1 is not necessary, the book isn't technically wrong.

EDIT: take everything I said with a grain of salt; I'm a student too and still have to take the AI exam.

EDIT 2: I thought of another possible reason why you would expand (at most) $b$ more nodes after reaching the depth of the optimal goal: the goal test is performed the moment you extract a node from the frontier, not before insertion. This is necessary because it guarantees that, when we extract a node from the frontier, we reached it through the lowest-cost path from the initial state, but it will cause us to expand at most one more level before realizing that a node in the frontier is indeed a goal.

",51293,,51293,,12/1/2021 17:34,12/1/2021 17:34,,,,0,,,,CC BY-SA 4.0 32587,2,,25986,12/1/2021 16:00,,2,,"

In Machine Learning "embedding" means taking some set of raw inputs (like natural language tokens in NLP or image patches in your example) and converting them to vectors somehow. The embeddings usually have some interesting dot-product structure between vectors (like in word2vec for example). The Transformer machinery then uses this embedding in the dot-product attention pipeline. The dimension of the embedding $D$ should stay constant throughout the transformer blocks due to ResNet skip connections.

The simplest idea in the case of the image patches would be to just take all the channels of all the pixels and treat them as a single vector. For example, if you've got (16, 16, 3) patches then you'll have 768-dimensional "embeddings". The problem with such a naive "embedding" is that dot-products between them won't make much sense. So we also multiply these vectors by a trainable matrix $W$ and add a bias vector.

For example, if you've got (16, 16, 3) patches and the transformer downstream uses $D=128$ dimensional embeddings, then you first flatten the patches into 16 * 16 * 3 = 768-dimensional vectors and then multiply by a 768$\times$128 matrix and add a 128-dimensional bias vector.
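As a minimal sketch of that projection (using the same example dimensions; nothing here is the authors' actual code):

import torch
import torch.nn as nn

batch, n_patches = 2, 196            # e.g. a 224x224 image cut into 196 patches of 16x16
patches = torch.randn(batch, n_patches, 16, 16, 3)      # raw (16, 16, 3) patches

flat = patches.reshape(batch, n_patches, 16 * 16 * 3)   # the naive 768-dimensional vectors

# Trainable projection to the transformer's embedding dimension D = 128:
proj = nn.Linear(16 * 16 * 3, 128)   # holds the 768 x 128 matrix W and the bias vector
embeddings = proj(flat)

print(embeddings.shape)              # torch.Size([2, 196, 128])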

Looking at the code, it seems that the authors kept improving on that idea by adding one or several early convolutions with nonlinearities (the conv_stem branches in the code). The simplest execution branch before these embellishments seems to be here. And the matrix multiplication I've been talking about is here

",20538,,,,,12/1/2021 16:00,,,,1,,,,CC BY-SA 4.0 32588,2,,32181,12/1/2021 16:03,,2,,"

Residual networks are usually deeper and hence take more time to train; EfficientNets try to tackle this. However, recent work suggests that the architecture tends to play a crucial role in the performance of an RL algorithm, which might motivate you to do this.

There is recent work on Neural Architecture Search applied to RL tasks (cf https://arxiv.org/pdf/2106.02229.pdf)

However, I can advise you to look at rational activation functions (cf. https://arxiv.org/pdf/2102.09407.pdf) employed in RL, which provide a huge boost to the agents while adding a negligible number of parameters and negligible extra training time.

",51294,,,,,12/1/2021 16:03,,,,0,,,,CC BY-SA 4.0 32589,2,,32571,12/1/2021 16:49,,4,,"

When it comes to notation/terminology, often, people in machine learning are (a bit?) sloppy, which causes a lot of confusion, especially for newcomers to the field or people not very math-savvy. I was also confused about this notation at some point (see my last questions here, which are all about this confusing topic). See also this answer.

In the VAE paper, $\mathbf{X}$ is a dataset, as the authors write.

Your confusion also arises because the authors vaguely use the term "probability distribution", rather than pdf or pmf, to refer, for example, to $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$, which thus does not refer to a pdf or pmf. In fact, the authors also write

their PDFs are differentiable almost everywhere w.r.t. both $\boldsymbol{\theta}$ and $\mathbf{z}$

The $\mathbf{z}$ can refer to

  1. a random variable, or
  2. an input to the function $p_{\boldsymbol{\theta}^{*}}$

If it's the first case, then $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ is the composition of 2 functions (because a rv is also a function).

If it's the second case, then $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ is the evaluation of $p_{\boldsymbol{\theta}^{*}}$ at $\mathbf{z}$.

I think the 2nd case is the most likely. In addition, people are being sloppy here and use the notation $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ (rather than just $p_{\boldsymbol{\theta}^{*}}$) to emphasize $p_{\boldsymbol{\theta}^{*}}$ is a function of some input variable (not random variable!), which we denote with the letter $\mathbf{z}$ to remind ourselves that $\mathbf{z}$ is associated with a random variable denoted with the same letter (and maybe also in bold and lowercase).

So, in this case, let's say we denote the random variable associated with $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ with $\mathbf{z}$, then we could refer to this associated prior more explicitly as follows $p_\mathbf{z}(\mathbf{z})$ (but that would even be more confusing). It would have been a better idea to use $\mathbf{Z}$, but then we may use the upper case letters to denote matrices or sets (like the VAE paper), so we end up with this mess (which is one of the 2 mythical difficult problems well-known in Computer Science, i.e. naming things), which we need to learn to deal with or just ignore.

Conclusion: when I look at $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$, which has been referred to as a probability distribution, I think there's also some associated random variable, which people, in that same context, will probably denote as $\mathbf{z}$ or $\mathbf{Z}$. There may also be some input variable (not a random variable), which we denote by $\mathbf{z}$ or $z$. If they are not mentioned, then I just ignore that. I never think that $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$ is the composition of 2 functions (even if that's the case), because that case was never useful in my readings.

",2444,,2444,,12/2/2021 8:14,12/2/2021 8:14,,,,2,,,,CC BY-SA 4.0 32590,1,,,12/1/2021 18:57,,2,140,"

Usually, when I read about Monte Carlo Tree Search, values between 0 and 1 (or values between -1 and 1) are backpropagated, depending on whether the simulation was a win or loss.

Now, suppose you have an AI which needs to play a game in which it is also important to score as high as possible. For example, it needs to score as many points as possible in the game of Carcassonne against one other player.

What kind of options are there for the values being backpropagated in such cases? Can you just backpropagate the number of points, and then, depending on the node, use the points of only the players in UCT? Or would that lead to the search converging to a worse move than the optimal move?

",51295,,2444,,12/2/2021 8:27,12/2/2021 8:27,Which value to propagate in Monte Carlo Tree Search in a non-zero-sum game?,,1,0,,,,CC BY-SA 4.0 32592,2,,32590,12/1/2021 20:39,,3,,"

In theory: yes, you can backpropagate any sort of scores you want to maximise. They don't have to be restricted to just a small, discrete set of values such as $\{-1, 0, 1\}$, and also do not have to be in any particular range like $[-1, 1]$ or $[0, 1]$.

However, in practice it may still be useful if you can normalise whatever "raw" scores you have down to some smaller range (like the ranges mentioned above). This can primarily be useful for hyperparameter tuning. For example, consider the UCB1 equation that we normally use in UCT during tree traversal:

$$a^* = {\arg\max}_a \left( Q(s, a) + C \sqrt{\frac{\ln(\sum_{a'}N(s,a'))}{N(s,a)}} \right).$$

We typically have a $C$ hyperparameter / constant in there, which controls our tradeoff between exploration and exploitation. Higher $C$ means more exploration, lower $C$ means more exploitation. The $Q(s,a)$ term is the average of the scores backpropagated so far for action $a$ in state $s$. In this equation, the $Q(s, a)$ and the $C$ terms are sort of "competing" against each other, so their optimal values are closely related to each other. If you take an existing problem and multiply all the score values by, say, $100$, you'd probably also want to multiply whatever your $C$ constant used to be by $100$ to get the same behaviour again from your tree search.
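As a small sketch of that selection rule (assuming we track, per action, the sum of backpropagated scores and the visit count; the example numbers are made up):

import math

def ucb1_select(stats, C=0.7):
    """stats: dict mapping action -> (sum_of_backpropagated_scores, visit_count)."""
    total_visits = sum(n for _, n in stats.values())
    best_action, best_value = None, -float("inf")
    for action, (score_sum, n) in stats.items():
        if n == 0:
            return action                      # always try unvisited actions first
        q = score_sum / n                      # Q(s, a): average backed-up score
        u = q + C * math.sqrt(math.log(total_visits) / n)
        if u > best_value:
            best_action, best_value = action, u
    return best_action

# Example: three actions with different average scores and visit counts.
print(ucb1_select({"a": (3.0, 10), "b": (2.0, 4), "c": (0.5, 1)}))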

Actually, what really matters for this story is not even the raw magnitudes of your $Q$ values, but rather the magnitudes of the typical differences you see between $Q$ values for different actions in the same state. So, if your problem only has $Q$ values that lie in the range $[99, 101]$, you'll actually want a similar $C$ constant as when you have $Q$ values in the range $[-1, 1]$, but if your problem has $Q$ values in the range $[-100, 100]$, you'll likely want about $100$ times as big a $C$ value too.

If you can roughly normalise your values down to approximately a $[-1, 1]$ or $[0, 1]$ range, you can feel relatively confident that similar kinds of $C$ values as typically used by other people in zero-sum settings will also work fine (like, $C$ values somewhere in the range $[0, 2]$, more often than not below $1$.... but the ideal constant can also very often depend on lots of other factors).


Aside from all that, I usually also tend to prefer values that do not have a crazy high magnitude because it makes me feel like I won't have to worry about annoying things like numerical overflow when summing up lots of values from lots of different iterations through the same node.

",1641,,,,,12/1/2021 20:39,,,,0,,,,CC BY-SA 4.0 32593,1,,,12/1/2021 23:17,,0,687,"

A tensor is a multi-dimensional ordered collection of elements, which is used in deep learning to store and process data as well as intermediate results.

We are aware of the trace of a two-dimensional tensor, i.e. a matrix. It is defined as the sum of the diagonal elements of the matrix.

Is there any definition for the trace of a tensor?

",18758,,2444,,12/5/2021 9:56,12/5/2021 9:56,What is the definition of a trace of a tensor?,,1,0,,,,CC BY-SA 4.0 32594,1,,,12/1/2021 23:40,,1,21,"

In general, the order of instances in the datasets that are used in machine learning is immaterial, but there are exceptions. Time-series data is one such exception that I know of. Consider the following two excerpts.

#1: From 4.3 Representing tabular data of the textbook titled Deep Learning with PyTorch by Eli Stevens et.al.

At first we are going to assume there’s no meaning to the order in which samples appear in the table: such a table is a collection of independent samples, unlike a time series, for instance, in which samples are related by a time dimension.

#2: From an answer

Moreover, there are datasets that contain elements whose order in the dataset can be relevant for the predictions, such as datasets of time-series data, while, in mathematical sets and multi-sets, the order of the elements does not matter.

I want to know the types of such data in which the order of instances does matter. Are there any other kinds of data, besides time series, in which the order of instances matters?

",18758,,,,,12/1/2021 23:40,What are the types of data in which the order of instances does matter?,,0,0,,,,CC BY-SA 4.0 32595,1,,,12/1/2021 23:45,,2,79,"

I am learning NN algorithms because I'd like to create my own project. What I found on the internet is that, for the type of project I have in mind, a CNN-LSTM neural network would be ideal.

But now I have a question - I don't know if it's against the rules of this forum or not. So pardon me if I violated them.

So, now I am learning NN algorithms from a couple of books that "classify" them like: Classification NN, LSTM, Convolutional - each neural network is a separate topic in each book.

But I am looking for a book that teaches the reader about Convolutional Long Short-Term Memory (CNN-LSTM) neural networks. Does someone know of such a book, where this hybrid NN is the main topic?

",51216,,18758,,12/1/2021 23:50,12/27/2022 5:03,Textbook for CNN-LSTM networks of predictions of numerical data,,1,2,,,,CC BY-SA 4.0 32596,2,,32595,12/2/2021 2:18,,0,,"

It's difficult to find a textbook that contains all hybrid neural networks. Since the field is changing so frequently, even if someone writes a book about it, it will likely become obsolete in a few years.

There is actually a nice paper (with lots of citations) that developed the network you mentioned: Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting

You are more likely to find different kinds of neural network architectures in journal or conference papers. Hope this helps!

",12932,,,,,12/2/2021 2:18,,,,0,,,,CC BY-SA 4.0 32597,2,,32571,12/2/2021 6:21,,2,,"

Machine learning papers are often somewhat confused about the distinction between a distribution and its probability density. I would rewrite this

The process consists of two steps: (1) a value $\mathbf{z}^{(i)}$ is generated from some prior distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{z})$; (2) a value $\mathbf{x}^{(i)}$ is generated from some conditional distribution $p_{\boldsymbol{\theta}^{*}}(\mathbf{x} \mid \mathbf{z})$.

as follows

The process consists of two steps: (1) a value $\mathbf{z}^{(i)}$ is generated from some prior distribution. The probability density of selecting $\mathbf{z}^{(i)}$ is known and denoted as $p_{\boldsymbol{\theta}^{*}}(\mathbf{z}^{(i)})$. (2) a value $\mathbf{x}^{(i)}$ is generated from some conditional distribution. The probability density of selecting $\mathbf{x}^{(i)}$ given $\mathbf{z}^{(i)}$ is known and denoted as $p_{\boldsymbol{\theta}^{*}}(\mathbf{x}^{(i)} \mid \mathbf{z}^{(i)})$.

As for the uppercase/lower case notation, this notation is not used in machine learning. $z, \, x$ are both random variables. In this paper, the authors use $z^{(i)}$, $x^{(i)}$ to indicate specific realizations of the random variables $z, \, x$. Probably a notation like $x_i, z_i$ is more common in general.

The notation/explanation is quite bad, because they should not say $p_{\theta}(z)$ is the distribution. $p_{\theta}(z)$, or if you wanted to be more precise, $p_{\theta}(z^{(i)})$, refers to the probability density.

",47080,,,,,12/2/2021 6:21,,,,1,,,,CC BY-SA 4.0 32600,2,,26770,12/2/2021 10:33,,2,,"

This is to get the gradient to "skip" the quantization part.

The trick implements the red arrow in the original paper's diagram:

Simplified example: Rounding

Let's simplify this a bit and imagine we want to use rounding in our architecture:

import torch
x = torch.tensor([1.1, 2.1], requires_grad=True)
y = 2*x
z = torch.round(y)
r = z.sum()

The graph (torchviz.make_dot) looks like this:

We can look at the output:

r 
# tensor(6., grad_fn=<SumBackward0>)

All is looking good. However, when we try to compute the gradient, we get a tensor of zeroes:

r.backward()
x.grad
# tensor([0., 0.])

This makes sense: the rounding function has derivative zero almost everywhere:

However, it also means we cannot train our network.

To circumvent that, we could simply tell the gradient to skip the rounding network. To do that, we use detach, a function that tells PyTorch to detach a vector from the computational graph.

x = torch.tensor([1.1, 2.1], requires_grad=True)
y = 2*x
z = torch.round(y)
z = y + (z - y).detach() # Detach everything between z and y, including z
r = z.sum()

We still get the same answer for $r$

r 
# tensor(6., grad_fn=<SumBackward0>)

But we now also get reasonable gradients

r.backward()
x.grad
# tensor([2., 2.])

Thanks to the fact that the gradient now "skips" the rounding part:

Back to VQ-VAE

In VQ-VAE, we are replacing the output of the encoder (the input to the vector quantization layer) with the code vector. In some sense, this is similar to "rounding", only this time we're "rounding" the encoder's output to the nearest code vector.

We therefore run into the same problem. Here's what the authors say about it

Note that there is no real gradient defined for [the quantization], however we approximate the gradient similar to the straight-through estimator and just copy gradients from decoder input $z_q(x)$ to encoder output $z_e(x)$

which is precisely what

quantized = inputs + (quantized - inputs).detach()

does in the notebook.

",51306,,51306,,12/2/2021 15:23,12/2/2021 15:23,,,,0,,,,CC BY-SA 4.0 32601,2,,32593,12/2/2021 11:26,,1,,"

The concepts of trace and tensor also appear in other contexts outside of machine learning (ML), like quantum computing, so an answer to your question may be given independently of ML, but that may not be useful, as these concepts may be defined and implemented differently in the context of ML, which seems to be the case.

The concept of trace, in mathematics, is apparently known as tensor contraction. I don't know if that definition is consistent with the definition(s)/implementation(s) of trace used in machine learning.

I found at least 2 (different) definitions of tensor trace in machine learning. The first definition is provided by the TensorFlow implementation of the trace of a tensor. For completeness, let me write here their definition of the trace.

trace(x) returns the sum along the main diagonal of each inner-most matrix in x. If x is of rank k with shape [I, J, K, ..., L, M, N], then output is a tensor of rank k-2 with dimensions [I, J, K, ..., L] where output[i, j, k, ..., l] = trace(x[i, j, k, ..., l, :, :]).

So, essentially, for each innermost 2d matrix in this tensor, you compute the trace (of that matrix), then return the result as another tensor, which has 2 fewer dimensions than the original tensor (because a matrix has 2 dimensions, and, by computing the trace of a matrix, you reduce a matrix to a number, which is a 0-dimensional tensor).

They give these examples

x = tf.constant([[1, 2], [3, 4]])
tf.linalg.trace(x)  # 5

x = tf.constant([[1, 2, 3],
                 [4, 5, 6],
                 [7, 8, 9]])
tf.linalg.trace(x)  # 15

x = tf.constant([[[1, 2, 3],
                  [4, 5, 6],
                  [7, 8, 9]],
                 [[-1, -2, -3],
                  [-4, -5, -6],
                  [-7, -8, -9]]])
tf.linalg.trace(x)  # [15, -15]

I think this definition is easy to understand, but I don't remember having ever used it, but I could be wrong.

In fact, PyTorch does not seem to implement the trace for tensors, but only for matrices. If you execute the following code, you should get an error telling you that it's not possible.

import torch  # install also numpy

x = torch.tensor([[[1, 2, 3],
                   [4, 5, 6],
                   [7, 8, 9]],
                  [[-1, -2, -3],
                   [-4, -5, -6],
                   [-7, -8, -9]]])

print(torch.trace(x))  # [15, -15]? No, you get an error.

So, as I was suspecting, the tensor trace may not be terribly useful in ML, at least, for tensors with more than 2 dimensions.

By the way, in the paper A Survey on Tensor Techniques and Applications in Machine Learning (2019) by Yuwang Ji et al. (p. 6 of the pdf), you find another definition of tensor trace, which I don't think is equivalent to the definition used by the TF implementation.

",2444,,2444,,12/2/2021 13:01,12/2/2021 13:01,,,,0,,,,CC BY-SA 4.0 32602,1,,,12/2/2021 15:38,,1,33,"

I was reading about genetic algorithms, and, to my understanding, a genetic algorithm (GA) is an algorithm that starts with an initial population of chromosomes, where each chromosome has a fitness score associated with it, and it evolves the population such that the chromosomes in the population have, on average, a better fitness score than the chromosomes in the initial population. A GA accomplishes this by selecting the chromosomes in the population with the best fitness scores and then combining those chromosomes using a crossover function to produce offspring chromosomes. Those offspring may or may not be mutated randomly, according to a probability that the offspring will be mutated. This process of selection, crossover, and mutation is iterated (with each successive iteration of the population being called a 'generation') until the fitness score of the population as a whole is deemed satisfactory. Please correct me if any of this is wrong.

My idea concerns mutating the offspring produced during the crossover phase. In the relatively simple implementations of GAs that I've looked at on the internet, most seem to randomly mutate the offspring according to a mutation rate. For example, the offspring may have a ten percent chance of mutating. When the GA is run, the fitness function generally improves a lot over the first several generations, but it often stagnates for a while when one chromosome takes over essentially the entire population and little mutation occurs. This problem can be partially solved by increasing the mutation rate, but that also increases the probability of producing offspring with a lower fitness score than the parents, if the mutation affects the fitness score badly. My idea is to make it so that half of the offspring produced with each generation have a one hundred percent mutation rate, and the other half has a zero or relatively low mutation rate (see the sketch below). This could potentially lower the number of generations necessary to reach a satisfactory fitness score.
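To make the idea concrete, here is a toy sketch of the scheme I have in mind (chromosomes are just bit strings, and I'm interpreting "100% mutation rate" as "the offspring is always mutated once"):

import random

def point_mutate(chromosome):
    """Flip one randomly chosen gene (a typical single-point mutation)."""
    i = random.randrange(len(chromosome))
    child = list(chromosome)
    child[i] = 1 - child[i]
    return child

def split_mutation(offspring, low_rate=0.0):
    """Proposed scheme: the first half of the offspring is always mutated,
    the second half is mutated only with probability low_rate."""
    half = len(offspring) // 2
    first = [point_mutate(c) for c in offspring[:half]]
    second = [point_mutate(c) if random.random() < low_rate else list(c)
              for c in offspring[half:]]
    return first + second

offspring = [[random.randint(0, 1) for _ in range(8)] for _ in range(6)]
print(split_mutation(offspring))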

Is this a good idea? Would it work? Would it be better than other methods of mutating the offspring of each crossover?

",51313,,51313,,12/2/2021 16:59,12/2/2021 16:59,Would it be a good idea to mutate half of the offspring of each GA generation 100% of the time and the other half 0% of the time?,,0,0,,,,CC BY-SA 4.0 32605,1,,,12/2/2021 17:49,,2,374,"

Theory 1 shows three axioms and two definitions, written in First Order Logic (FOL), that represent a fragment of a mereology theory. For this posting, it is important that the set of axioms is considered as a theory (i.e. a set of axioms together with the theorems that are a logical consequence of those axioms). In the context of this question, the particular axioms are not significant. Any other set of axioms forming a consistent theory would be equally acceptable.

Theory 1

Axioms

Reflexivity $\forall x : part(x,x)$

Antisymmetry $\forall x \forall y : ((part(x,y) \land part(y,x)) \implies (x = y))$

Transitivity $\forall x \forall y \forall z :((part(x,y) \land part(y,z)) \implies part(x,z))$

Definitions

Overlap : $\forall x,y \colon (overlap(x,y) \iff(\exists z \colon (part(z,x) \land part(z,y)))$

Proper Part : $\forall x,y \colon (properPart(x,y) \iff (part(x,y) \land \neg part(y,x)))$

I am using CafeOBJ to represent the above logical axioms and definitions, shown in Listing 1:

Listing 1

   mod M{
    [E]
    preds overlap part properPart : E E
 -- axioms
 ax [M1] : \A[x:E] part(x,x) .  
 ax [M2] : \A[x:E]\A[y:E] ((part(x,y) & part(y,x)) -> (x = y))  .
 ax [M3] : \A[x:E]\A[y:E]\A[z:E]((part(x,y) & part(y,z))  -> part(x,z)) .
 -- definitions
 ax [DM1] : \A[x:E]\A[y:E] (properPart(x,y) <-> (part(x,y) & ~(part(y,x)))) .  
 ax [DM2] : \A[x:E]\A[y:E] (overlap(x,y) <->  (\E[z:E] (part(z,x) & part(z,y)))) .  
    }

Note that the logical theory is contained in a named module called M. The variables are over a domain of generic entities E; universal and existential quantification are denoted by \A[x:E] and \E[x:E] respectively. In CafeOBJ, named modules allow one to structure signatures, theories, sub/super theories, and models using the Theory of Institutions (TOI).

Below is my naïve attempt to present the axioms as a set of conceptual graphs (CG). My motivation for using CGs is that they provide an intuitive visualization of logic and have a direct relation to Common Logic (ISO zipped PDF).

The above CG was produced using CharGer software as Java zip file (manual).

My understanding of the above CGs is as follows:

  1. The variables are universally quantified, not default for CGs, but allowed in extended CG (ECG).
  2. The three graphs are all related by conjunction, which is the default for CGs.
  3. The arrow on graph representing reflexivity is bi-directional.
  4. Both antisymmetry and transitivity are represented by an IF-THEN contexts.
  5. Dotted lines are co-references.
  6. Equality (=) is actually commutative, but is represented as a directed relation.
  7. Each CG asserts a single proposition, labelled Proposition.

Question:

How do I present Theory 1 using CGs? Do I need some labeling that indicates that a set of concepts represents a theory? Or are theories represented by some enclosing special type of concept?

",48716,,48716,,1/11/2022 12:39,1/11/2022 12:39,How are theories represented using Conceptual Graphs?,,0,4,,,,CC BY-SA 4.0 32607,2,,32571,12/2/2021 19:23,,1,,"

You can read $X=\{x^{(i)}\}_{i=1}^N$ as: $X$ represents the collection of all values $x^{(i)}$, where $i$ ranges over all values from 1 to $N$.

To me, the notation is confusing since my experience tells me that curly braces are used for sets, but this seems to be the best interpretation.

",30426,,30426,,12/3/2021 11:32,12/3/2021 11:32,,,,2,,,,CC BY-SA 4.0 32608,2,,17031,12/2/2021 21:10,,2,,"

A recurrent neural network (RNN) depends on the hidden state from the previous time step. That is, an RNN is a function of both the data for the sequence at time $t$ and the hidden state from time $t-1$. This means that we cannot compute the $t$th hidden state without calculating the $t-1$th state, the $t-1$th state without the $t-2$th state, and so on.

In contrast to this, a transformer is able to fully parallelise the processing of the sequence because it does not have this recursive relationship, i.e. a transformer is not a recursive function -- the recursive nature of the sequence is processed in other ways, such as through positional encoding. We can see this by the way self attention works.

If we first consider the general attention mechanism framework, then we have a query $q$ and a set of paired key-value tuples $\textbf{k}_1, ..., \textbf{k}_n$ and $\textbf{v}_1, ..., \textbf{v}_n$. In general, for each key, we will apply some attention function $\beta$ (such as a neural network) to obtain attention scores, $a_i = \beta(\textbf{q}, \textbf{k}_i)$. We then define an attention vector $\textbf{a}$ where the $i$th element is the $i$th attention score, and we take a softmax of this vector to obtain attention weights $\alpha_i$ where $\alpha_i$ is the $i$th element of $\mbox{softmax}(\textbf{a})$. The output of the attention mechanism for query $\textbf{q}$ is then the weighted sum $\sum_{i=1}^n \alpha_i \textbf{v}_i$.

Now that we have the necessary background for an attention mechanism, we can look at self-attention, which is the backbone of the Transformer. If we have a sequence denoted by $\{\textbf{x}_1, ..., \textbf{x}_n\}$, then we can define the queries, keys and values to be these $\textbf{x}_i$ values. Note that previously we only had a single query, but here we will have multiple queries, which is really how the Transformer is able to parallelise the processing of the sequence. Define $\textbf{Q}, \textbf{K}, \textbf{V}$ to be the matrices of the queries, keys and values (e.g. the $i$th row of $\textbf{Q}$ corresponds to the $i$th query, and similarly for the others). Self-attention is as simple as performing attention over these queries, keys and values -- the name self comes from the fact that the queries, keys and values are all the same, and the $i$th of each represents the $i$th element of the sequence. Now, we can write the above attention mechanism as $a_{i, j} = \beta(\textbf{q}_i, \textbf{k}_j)$, where we now have a matrix of attention scores (because we have $n$ queries and $n$ keys, the matrix will be square), and we can take the softmax row-wise to get the attention weights (again, this will be an $n \times n$ matrix). If we call the matrix of attention weights $\textbf{A}$, then the output of a self-attention layer will be given by $\textbf{A} \textbf{V}$. As you can see, there is no recursive nature here and this is all parallelisable, e.g. it can be broken up and put onto multiple GPUs at the same time -- this would not be possible with an RNN, as you would have to wait for the output of the previous time step.
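As a compact sketch of that computation (a single head, without the usual scaling by $\sqrt{d}$ or any masking; the sizes are made up):

import torch

n, d = 4, 8                        # sequence length, embedding dimension
X = torch.randn(n, d)              # the sequence {x_1, ..., x_n}, one row per element

Q, K, V = X, X, X                  # self-attention: queries, keys and values are all X

scores = Q @ K.T                   # the n x n matrix of attention scores a_{i,j}
A = torch.softmax(scores, dim=-1)  # row-wise softmax -> attention weights
out = A @ V                        # every position is processed in parallel

print(out.shape)                   # torch.Size([4, 8])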

",36821,,,,,12/2/2021 21:10,,,,1,,,,CC BY-SA 4.0 32609,1,,,12/3/2021 10:14,,0,22,"

I came across a Conv2D layer in a fully convolutional network, which used a kernel_initializer='zero' for regression. Why is a kernel_initializer of 'zero' used here?

In general, when are 'normal', 'uniform' and 'zero' initializers used?

",47943,,2444,,12/3/2021 10:20,12/3/2021 10:20,"In general, when are the normal, uniform and zero initializers used?",,0,2,,,,CC BY-SA 4.0 32610,1,32631,,12/3/2021 11:21,,1,96,"

I am looking at appendix C of the VAE paper:

It says:

C.1 Bernoulli MLP as decoder

In this case let $p_{\boldsymbol{\theta}}(\mathbf{x} \mid \mathbf{z})$ be a multivariate Bernoulli whose probabilities are computed from $\mathrm{z}$ with a fully-connected neural network with a single hidden layer: $$ \begin{aligned} \log p(\mathbf{x} \mid \mathbf{z}) &=\sum_{i=1}^{D} x_{i} \log y_{i}+\left(1-x_{i}\right) \cdot \log \left(1-y_{i}\right) \\ \text { where } \mathbf{y} &=f_{\sigma}\left(\mathbf{W}_{2} \tanh \left(\mathbf{W}_{1} \mathbf{z}+\mathbf{b}_{1}\right)+\mathbf{b}_{2}\right) \end{aligned} $$ where $f_{\sigma}(.)$ is the elementwise sigmoid activation function, and where $\theta=\left\{\mathbf{W}_{1}, \mathbf{W}_{2}, \mathbf{b}_{1}, \mathbf{b}_{2}\right\}$ are the weights and biases of the MLP.

C.2 Gaussian MLP as encoder or decoder

In this case let encoder or decoder be a multivariate Gaussian with a diagonal covariance structure: $$ \begin{aligned} \log p(\mathbf{x} \mid \mathbf{z}) &=\log \mathcal{N}\left(\mathbf{x} ; \boldsymbol{\mu}, \boldsymbol{\sigma}^{2} \mathbf{I}\right) \\ \text { where } \boldsymbol{\mu} &=\mathbf{W}_{4} \mathbf{h}+\mathbf{b}_{4} \\ \log \sigma^{2} &=\mathbf{W}_{5} \mathbf{h}+\mathbf{b}_{5} \\ \mathbf{h} &=\tanh \left(\mathbf{W}_{3} \mathbf{z}+\mathbf{b}_{3}\right) \end{aligned} $$ where $\left\{\mathbf{W}_{3}, \mathbf{W}_{4}, \mathbf{W}_{5}, \mathbf{b}_{3}, \mathbf{b}_{4}, \mathbf{b}_{5}\right\}$ are the weights and biases of the MLP and part of $\boldsymbol{\theta}$ when used as decoder. Note that when this network is used as an encoder $q_{\phi}(\mathbf{z} \mid \mathbf{x})$, then $\mathrm{z}$ and $\mathrm{x}$ are swapped, and the weights and biases are variational parameters $\phi$.

So, it seems like, for a Bernoulli decoder, it only outputs a vector $\mathbf{y}$, which gets plugged into the log-likelihood formula. But then, for the Gaussian decoder, it outputs both $\boldsymbol{\sigma}$ and $\boldsymbol{\mu}$. So, is it like 2 parallel layers, one calculating $\boldsymbol{\sigma}$ and one calculating $\boldsymbol{\mu}$?

Similar to how we get the $\mu$ and $\sigma$ of the encoder (which I am assuming the encoder ones are different from the decoder ones)?

And we plug it into the formula I derived in this link here, the log-likelihood to get the reconstruction loss?

This is the intuition I am getting, but I haven't seen it explicitly all in one place.

",46842,,46842,,12/8/2021 7:34,12/8/2021 7:34,Do we use two distinct layers to compute the mean and variance of a Gaussian encoder/decoder in the VAE?,,1,2,,,,CC BY-SA 4.0 32611,2,,32573,12/3/2021 12:07,,0,,"

An isomorphism $T$ between vector spaces $V$ and $W$ over the same field $K$ (for example, $K = \mathbb{R}$) is defined as a bijective (i.e. 1-to-1 and onto, which makes a bijection an invertible function) transformation (a function) that preserves the 2 main properties

  1. scalar multiplication: $T(c\mathbf {u} )=cT(\mathbf {u} )$, where $c \in K$ and $\mathbf {u} \in V$
  2. vector addition: $T(\mathbf {u} +\mathbf {v} )=T(\mathbf {u} )+T(\mathbf {v})$, where $\mathbf {u}, \mathbf {v} \in V$

Linear maps preserve these 2 properties, but not all linear maps are bijective, so not all linear maps are isomorphisms. In fact, linear maps that are bijective are called linear isomorphisms. You can denote an isomorphism between $V$ and $W$ as follows (to emphasize that it's not just a linear map): $T: V {\overset {\sim }{\to }} W$.

The fact that an isomorphism is invertible means that there exists a function $T^{-1}$ that transforms any element of $W$ to any element of $V$.

So, if there's an isomorphism $T$ between $V = \mathbb{R}^{n \times m}$ and $W = \mathbb{R}^{nm}$ (note that I am not proving it here, but it's probably not complicated to prove), then it implies that, for each $\mathbf {v}_W \in \mathbb{R}^{nm}$, you can use $T^{-1}: \mathbb{R}^{nm} {\overset {\sim }{\to }} \mathbb{R}^{n \times m}$ to retrieve the corresponding vector $\mathbf {v}_V \in \mathbb{R}^{n \times m}$ (note that, in the context of vector spaces, matrices are also vectors, which is just the name used to denote objects in vector spaces).

This shows why, if reshaping is an isomorphism, you can perform operations first on $\mathbf {v}_W$ and then go back to $\mathbf {v}_V$.
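As a small sketch of this (using NumPy's reshape as the isomorphism $T$; the shapes are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(3, 4))

def T(M):        # "vectorise": maps R^{3x4} to R^{12}
    return M.reshape(-1)

def T_inv(v):    # the inverse map: R^{12} back to R^{3x4}
    return v.reshape(3, 4)

# The two defining properties of a linear map hold exactly, and T is invertible:
assert np.allclose(T(2.5 * A), 2.5 * T(A))   # scalar multiplication is preserved
assert np.allclose(T(A + B), T(A) + T(B))    # vector addition is preserved
assert np.allclose(T_inv(T(A)), A)           # we can always go back without loss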

",2444,,,,,12/3/2021 12:07,,,,1,,,,CC BY-SA 4.0 32612,2,,32491,12/3/2021 12:40,,0,,"

Most old-school expert system tools are text-based and built on If/Then rules going forwards or backwards (chaining).

Some have graphical front ends (such as VisiRule (https://www.visirule.co.uk/), XpertRule, etc)

Some have now repositioned themselves as Business Rules tools and most of these have GUIs (InRule, Actico etc)

",51331,,,,,12/3/2021 12:40,,,,0,,,,CC BY-SA 4.0 32613,1,,,12/3/2021 14:54,,2,93,"

I am implementing a CV detection pipeline using SIFT and a KNN matcher.

Image keypoints matched to the query keypoints produce the following image:

The matched objects have a lot of key points on them and there are some false matches. I would like to consider spots with a lot of matches as detections of the query object and ignore isolated points.

What would be an appropriate clustering method, where one can require a minimum number of points in a neighborhood of some radius in order to declare this set of points a cluster?

KMeans is not a good idea, since it requires a fixed number of clusters and doesn't discard outliers.

From the algorithms available in sklearn, it seems like DBSCAN and agglomerative clustering are good choices, since they allow for a variable number of clusters (unknown a priori) and outlier removal.
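For reference, this is roughly what I have in mind with DBSCAN (the keypoint coordinates, eps and min_samples below are made up and would need tuning):

import numpy as np
from sklearn.cluster import DBSCAN

# Made-up matched-keypoint coordinates (x, y) in pixels.
points = np.array([[10, 12], [11, 13], [12, 11],
                   [300, 305], [301, 306], [302, 304],
                   [600, 50]])

# eps: neighbourhood radius in pixels; min_samples: matches needed to form a cluster.
labels = DBSCAN(eps=20, min_samples=3).fit_predict(points)

print(labels)   # isolated matches get the label -1, so they can be dropped as outliers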

Or is there a better alternative?

",38846,,2444,,12/4/2021 0:31,12/4/2021 0:31,What would be a reasonable option for clustering for unknown number of clusters and a lot of outliers?,,0,0,,,,CC BY-SA 4.0 32620,2,,31729,12/4/2021 15:11,,0,,"

I think you can solve this problem with models trained for Named Entity Recognition (NER). In that case, your entities are the labels. To do this, you can use spaCy to train a NER model or, more easily, you can fine-tune a DistilBERT for your task.

",32763,,,,,12/4/2021 15:11,,,,0,,,,CC BY-SA 4.0 32621,2,,20991,12/4/2021 15:30,,1,,"

In this problem, you can also use NER models in order to tag those numbers as Width, Height, etc. You can also fine-tune a DistilBERT model for your task.

If you want to train a NER model with tok2vec, you can use the Spacy Library.

For fine-tuning DistilBERT, you can use this: Hugging Face

",32763,,32763,,12/4/2021 18:17,12/4/2021 18:17,,,,1,,,,CC BY-SA 4.0 32624,1,,,12/4/2021 19:14,,0,79,"

Traditionally, Siamese Neural Networks have two inputs. With some tweaking, you can get them to accept any number of inputs. What I don't understand is how to get them to accept variable numbers of inputs. I've seen a couple of research papers (most notably this one) where they talk about doing this, but none explain exactly how.

Could someone please explain how to create a Siamese Neural Network with a variable number of inputs?

",22440,,22440,,12/7/2021 21:59,12/8/2021 23:21,How can Siamese Neural Networks accept a variable number of inputs?,,0,3,,,,CC BY-SA 4.0 32625,1,,,12/4/2021 19:31,,1,164,"

As far as I understand, gradients are supposed to tell us 1) the magnitude and 2) the direction in which to update a parameter so as to minimize the loss function.

Regarding saliency maps, which use gradients with respect to the input, do the gradients give us the same information?

Consider vanilla saliency maps [1] (i.e. gradients-only) and integrated-gradients [2] (using a baseline image), with grayscale images.

Do the (vanilla) gradients give us the amount and direction a pixel value needs to change? Or does the magnitude tell us the amount of change in the loss caused by a minimal change in the pixel value?

In simpler terms: does magnitude signify:

  1. amount of change required in a pixel-value to have some change (in loss?) or

  2. amount of change in (loss?) based on a minimal/local change in pixel-value?

",51011,,2444,,12/7/2021 17:22,12/7/2021 17:22,What exactly do gradient-based saliency map tell us?,,0,2,,,,CC BY-SA 4.0 32626,1,32629,,12/5/2021 0:31,,0,48,"

I have a (5, 128, 768) matrix, that is, 5 embeddings of shape (128, 768). Since they are all related, and for the sake of my model, I need to combine them into a single output of shape (1, 128, 768*5). If I just concatenate them all along axis=-1, will I be losing some information?

Making that concatenation is the only way I can think of to solve this. Is there any better option?
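For reference, this is a tiny sketch of the concatenation I mean (random data, shapes only):

import numpy as np

x = np.random.randn(5, 128, 768)                         # 5 embeddings of shape (128, 768)
combined = np.concatenate(list(x), axis=-1)[None, ...]   # shape (1, 128, 3840)

# The concatenation itself is invertible, so by itself it drops nothing:
recovered = np.stack(np.split(combined[0], 5, axis=-1))
assert recovered.shape == x.shape and np.allclose(recovered, x)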

",50413,,2444,,12/5/2021 12:36,12/5/2021 12:36,Best way to resize 3d to 2d matrix,,1,1,,,,CC BY-SA 4.0 32629,2,,32626,12/5/2021 6:03,,0,,"

I think if you want to resize and reduce your matrix size, you can use one of the dimension reduction techniques.

Here there is a link that may be helpful to you.

",32763,,,,,12/5/2021 6:03,,,,2,,,,CC BY-SA 4.0 32631,2,,32610,12/5/2021 9:05,,1,,"

Yes, in the case of the Gaussian, you have two distinct layers (so weights and biases), one for the mean and the other for the variance, as the equations are telling us.

The mean is calculated with the weights $\mathbf{W}_{4}$ and bias $\mathbf{b}_{4}$ from $\mathbf{h}$ as follows

$$\boldsymbol{\mu} =\mathbf{W}_{4} \mathbf{h}+\mathbf{b}_{4},$$

while the variance (actually, equivalently, the log of the standard deviation) is calculated from $\mathbf{W}_{5}$ and $\mathbf{b}_{5}$ from $\mathbf{h}$ as follows

$$\log \sigma^{2} =\mathbf{W}_{5} \mathbf{h}+\mathbf{b}_{5}$$

Here you have a PyTorch implementation that uses 2 distinct linear/dense layers for doing this, but note that it is doing this only for the encoder to produce the latent vector $\mathbf{z}$.
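A minimal sketch of such a pair of distinct layers (the class name and dimensions here are made up, and the linked implementation may organise this differently):

import torch.nn as nn

class GaussianHead(nn.Module):
    """Two distinct linear layers on top of a shared hidden representation h."""
    def __init__(self, hidden_dim, latent_dim):
        super().__init__()
        self.mean = nn.Linear(hidden_dim, latent_dim)     # mu = W4 h + b4
        self.log_var = nn.Linear(hidden_dim, latent_dim)  # log sigma^2 = W5 h + b5

    def forward(self, h):
        return self.mean(h), self.log_var(h)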

Yes, generally, these layers (so the mean and variance) are not the same for the encoder and decoder. However, it would not be surprising to me if someone already tried to share some layers between the encoder and decoder for some specific task.

",2444,,,,,12/5/2021 9:05,,,,0,,,,CC BY-SA 4.0 32634,1,,,12/5/2021 12:29,,0,38,"

I am building a neural network for recognizing ship types based on a 1000-long series of location data (latitude-longitude, normalized to account for different km/longitude° metrics, so that vector difference yields a consistent distance). The dataset I use consists of around 100 000 distinct day-ship pairs, classified by a 10-valued labeling. The cardinalities of the classes are: 30115, 26327, 12798, 10940, 5859, 4211, 4176, 3639, 3521, 2834

I tried two different approaches:

  1. A recurrent network (using LSTM): Dense[relu] -> LSTM -> Dense[relu] -> Dropout -> Dense[softmax]
  2. A 1D convolutional network: Dense[relu] -> Conv1D -> Dense[relu] -> Dropout -> Dense[softmax]

I experimented with the hyperparameters of the above networks, but they all converge to a 40% accuracy, where the model classifies all inputs as class 0 or 1 (choosing the most likely class of the output layer).

I could accept that the data is not well-defined and this kind of prediction is impossible, but the strange thing is that even if I give the same data as training and validation, the model stops getting better at the 40% accuracy mark. Shouldn't it go further, and "memorize" the classes in this case, resulting in ~100% accuracy on the training data?

",51367,,,,,12/30/2022 22:03,Neural network for recognizing ship types based on location series,,1,0,,,,CC BY-SA 4.0 32636,2,,32634,12/5/2021 14:53,,0,,"

There are a lot of questions to be asked about your test setup, data preprocessing, and model architecture. RNNs, or in your case LSTMs, can be tricky when it comes to implementation. I suspect your problem lies somewhere with the LSTM or preprocessing.

Are all lat/lon data points in one class the same? If not, one thing you might want to do is to round or perform some kind of grouping of the latitude and longitude within specific classes. This could help simplify the problem for the network.

It's also possible that your model doesn't have enough capacity, i.e. it is too shallow / doesn't have enough layers.

Have you also considered this problem without an RNN? Is it really needed in this case?

",43651,,43651,,12/5/2021 18:19,12/5/2021 18:19,,,,0,,,,CC BY-SA 4.0 32637,1,32642,,12/5/2021 16:43,,0,73,"

I often see Thompson Sampling in RL literature, however, I am not able to relate it to any of the current RL techniques. How exactly does it fit with RL?

",31755,,,,,12/5/2021 18:08,Why is Thompson Sampling considered a part of Reinforcement Learning?,,1,0,,,,CC BY-SA 4.0 32640,2,,7025,12/5/2021 17:35,,1,,"

This book is still relevant today!

It describes many ML concepts, such as linear regression, neural networks, support vector machines, Gaussian processes, probabilistic graphical models, variational inference, and hidden Markov models, which are still relevant today. If you follow any decent course on ML, it should cover most of these topics. In fact, during one course on ML that I had at university (a few years ago), we used this book as a reference.

Clearly, this book does not contain the description of the latest state-of-the-art models (for example, transformers), but it's a decent book for introducing many concepts in ML.

So, if you want to get a good overall knowledge of ML, then you can surely start with this book (provided that you have a minimal mathematical background to understand the ML concepts).

You may also want to take a look at this post.

",2444,,,,,12/5/2021 17:35,,,,0,,,,CC BY-SA 4.0 32642,2,,32637,12/5/2021 18:08,,3,,"

Thompson Sampling (TS) is used in the context of bandits, which is a special case of the RL problem.

You can also use TS for the full RL problem, but that can lead to inefficient exploration. To know more about this issue, you could read

",2444,,,,,12/5/2021 18:08,,,,0,,,,CC BY-SA 4.0 32643,1,,,12/5/2021 18:10,,0,111,"

I am trying to find a way to train a model to predict the correct placement of entities like a tree, dog and cat in a natural 3D environment. Any help regarding how I could use textual data to learn correct placement of objects in 3D space would be a great help. I am a little lost on how to approach this problem.

",51374,,,,,12/5/2021 18:10,Predict placement of an object in 3D space,,0,3,,,,CC BY-SA 4.0 32644,1,,,12/5/2021 18:27,,0,21,"

Suppose I have data I want to use for supervised learning, but there is a pretty bad target/class/labels imbalance. Should I:

  1. Limit the size of the training set to make sure there is a flat target/class balance distribution (the training set is designed such that there is an equal number of training samples for each class, based on splitting the lowest-occurring class as high as possible). For example, if my lowest-occurring class appears only 50 times in my data, and I want an 80-20 train-test split, then I take 40 of those samples for training and, for an even target balance, take 40 samples for every other class in training, even if the highest-occurring class appears 100,000 times, for instance.

  2. Ignore target balance and just focus on the ratio for the train and test split. So, if it's 80-20, take 40 of the samples out of 50 for my lowest-occurring class, and 80% of 100,000 for my highest-occurring class, and so on.

  3. Something else?

Let's suppose I can't just get more data. I know there's some stuff to be said regarding undersampling and oversampling, but what can I do to tell if either one is working better, given that model accuracy might be misleading?

",26250,,26250,,12/5/2021 18:39,12/5/2021 18:39,What's the best way to train data with unbalanced targets?,,0,2,,,,CC BY-SA 4.0 32646,1,,,12/5/2021 18:44,,1,39,"

I have a bunch of unique full names of users. I made a pseudo-physical model to emulate misprints of desktop and mobile users (hence, fat-fingering, jumpy fingers, accidental touches of the touch bar, etc.).

So, I have pairs like John Snow - joh Snown

I first tried recurrent networks (LSTM), like some kind of vocabulary to "translate" from bad words to good ones, but it returns only known predicted results, and when I try to put in unknown last names, it returns wrong results.

I wish to find some patterns in misspelled words, and to predict correct spelling.

Can you please advise some kind of NN to cope with the task, or maybe point to some contributions in that domain?

P.S. Yes, I know that there exist other AI methods to get things done

P.P.S. This vocabulary is not in English, just in case

UPDATE

An LSTM NN works nicely with known names and with last-name endings for new last names. Right now, I use 2 different NNs: the first to correct mistypes, the second to determine first, last and middle names.

UPDATE2

A sequence-to-sequence solution can also normalize names (put them in order), find the sex of the person, find the probability of error, etc.

",51377,,51377,,3/28/2022 8:56,3/28/2022 8:56,What kind of NN to use to find misprints in test,,1,2,,,,CC BY-SA 4.0 32649,1,,,12/6/2021 8:48,,0,82,"

I'm trying to write an automatic assignment algorithm for the following problem:

I have $N$ tasks and $M$ users. For each task, I have a ranking for each user for "how related it is to that user". The ranking is a floating-point number in the [0, 1.0] range. The sum of all ranks is 1. I need to write an algorithm that will create the overall best assignment for all tasks. It has 2 constraints:

  1. Overall best correctness of assignment.
  2. Properly balanced - I can't have 1 user assigned 20 tasks while all others are assigned 1-2.

So far, from studying multiple input sets, I found that a task that has a user with a rank over 0.7 should be assigned to that user no matter what, because it's a really strong indicator of a correct assignment. After that, I tried to balance all the remaining tasks.

So, for the input of:

  1. tasks = [t1, t2, t3,... , tn]
  2. task_user_scores = [[s11, s12,...s1m], [s21, s22, ..., s2m],..., [sm1, sm2, ..., smm]]
  3. first_iteration_limit = float, [0, 1.0], I use 0.7, as I described earlier.
  4. per_user_limit = int, how many tasks to strive to assign per user.

I do:

Step 1

for ti in tasks 
    if some sij > 0.7 
        assign ti to user j

Step 2

for ti in unassigned tasks
    compute the user deviation udj = abs(assigned_to_j - per_user_limit) for every user j
        (the maximum possible deviation is abs(n - per_user_limit))
    compute adjusted scores as sij * (1 - udj)
    select a user with weighted random sampling over the adjusted scores;
        if the selected user already has more assigned tasks than per_user_limit, select again
        (the random selection happens a finite number of times; I found that 10 is fine)
    otherwise, if user j didn't reach per_user_limit, assign ti to user j

Step 3

for ti in unassigned tasks
    assign ti to the user in the top 5 with the least number of tasks assigned to it.
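For clarity, here is a simplified Python sketch of Steps 1 and 2 (the normalisation of the deviation and the fallback when all adjusted scores are zero are simplifications of what I described above):

import random

def assign(tasks, task_user_scores, first_iteration_limit=0.7, per_user_limit=5):
    num_users = len(task_user_scores[0])
    assignment = [None] * len(tasks)          # assignment[i] = user index for task i
    assigned_count = [0] * num_users

    # Step 1: a very strong score assigns the task to that user no matter what
    for i, scores in enumerate(task_user_scores):
        best = max(range(num_users), key=lambda j: scores[j])
        if scores[best] > first_iteration_limit:
            assignment[i] = best
            assigned_count[best] += 1

    # Step 2: weighted-random assignment, down-weighting users far from per_user_limit
    max_dev = abs(len(tasks) - per_user_limit) or 1
    for i, scores in enumerate(task_user_scores):
        if assignment[i] is not None:
            continue
        deviations = [abs(c - per_user_limit) / max_dev for c in assigned_count]
        adjusted = [s * (1 - d) for s, d in zip(scores, deviations)]
        if sum(adjusted) <= 0:
            adjusted = scores                 # fallback so the weighted draw stays well defined
        for _ in range(10):                   # a finite number of weighted-random draws
            j = random.choices(range(num_users), weights=adjusted, k=1)[0]
            if assigned_count[j] < per_user_limit:
                assignment[i] = j
                assigned_count[j] += 1
                break
    return assignment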
",51390,,2444,,12/10/2021 9:21,12/10/2021 9:21,How to assign tasks to users with ranking?,,0,2,,,,CC BY-SA 4.0 32653,2,,32646,12/6/2021 12:40,,1,,"

I think identifying spelling errors in first and last names would be a hard task, but you can fine-tune a BERT model and use the Damerau–Levenshtein distance in your final prediction. For further information, you can read the paper below.

Misspelling Correction with Pre-trained Contextual Language Model
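For the distance part, here is a small sketch of the restricted Damerau–Levenshtein (optimal string alignment) variant, which is often enough for ranking candidate corrections:

def osa_distance(a: str, b: str) -> int:
    """Restricted Damerau-Levenshtein (optimal string alignment) distance."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[len(a)][len(b)]

print(osa_distance("John Snow", "joh Snown"))  # a small distance for a misprint pair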

",32763,,,,,12/6/2021 12:40,,,,0,,,,CC BY-SA 4.0 32662,1,,,12/7/2021 2:08,,0,147,"

I want to try self-supervised and semi-supervised learning for my task, which relates to token-wise classification for the 2 sequences of sentences (source and translated text). The labels would be just 0 and 1, determining if the word level translation is good or bad on both the source and target sides.

To begin, I used XLMRoberta, as I thought it would be best suited for my problem. First, I just trained normally using nothing fancy, but the model overfits after just 1-2 epochs, as I have very little data to fine-tune on (approx 7k).

I decided to freeze the BERT layers and just train the classifier weights, but it performed worse.

I thought of adding a more dense network on top of BERT, but I am not sure if it would work well or not.

One more thought that occurred to me was data augmentation, where I increased the size of my data by multiple factors, but that performed badly as well. (Also, I am not sure what should be the proper number to increase the data size with augmented data.)

Can you please suggest which approach could be suitable here and whether I am doing something wrong? Shall I just fine-tune all the layers on my data, or is freezing actually a good option? Or do you suspect I am making a mistake somewhere in the code and this is not the expected behaviour?

",51410,,2444,,12/10/2021 9:07,12/10/2021 9:07,Fine tuning BERT for token level classification,,0,3,,,,CC BY-SA 4.0 32665,1,,,12/7/2021 10:57,,1,15,"

I want to initialize a parameter, which is a single real number in my model. If you want to know the role of the parameter in the model, you can assume it is multiplied with the output of the neural network, and the resulting product will be the final output.

Does initialization value matter? If yes, are there any guidelines on initializing the single parameter?

",18758,,18758,,12/7/2021 11:04,12/7/2021 11:04,Are there any recommendations on initialising a single parameter in deep learning?,,0,3,,,,CC BY-SA 4.0 32666,2,,21797,12/7/2021 11:27,,2,,"

What is a knowledge graph?

Appendix A.3 "Knowledge Graphs": 2012 Onwards of the survey Knowledge Graphs (which is probably the most extensive survey on KGs) states that knowledge graphs have been defined in different ways in recent years. Each of these definitions raises questions about the relationship between KGs and other related concepts, like graph databases, knowledge bases, and ontologies.

One definition of a KG is

a graph where nodes represent entities, and edges represent relationships between those entities. Often a directed edge labelled graph is assumed (or analogously, a set of binary relations, or a set of triples)

The question here is: what's the difference between KGs and graph databases (like Neo4j)? Graph databases have been used to build KGs, but is there any actual difference between these 2 terms?

Another definition of a KG is

a knowledge graph is a graph-structured knowledge base

So, according to this definition, a KG would be a type of knowledge base (KB).

What is a knowledge base?

In the same appendix, the authors write

The phrase "knowledge base" was popularised in the 70's (possibly earlier) in the context of rule-based expert systems [72], and later were used in the context of ontologies and other logical formalisms [68]

They conclude that a KB has also been defined in ambiguous ways in the past.

Norvig and Russell, in chapter 7 (p. 235) of their AIMA book (3rd edition), define a KB as a set of sentences/facts, for example, expressed in propositional logic. You then use inference techniques to derive new knowledge from this knowledge base. The programming language PROLOG is based on this definition of a KB.

What is the difference between a KG and KB?

So, there is not a single answer to your question because knowledge graphs (KGs) and knowledge bases (KBs) have been defined in multiple (often ambiguous) ways in the past. Some people say that KGs are different from KBs, while other people use the term KG as a synonym for KB or define it as a type of KB.

One possible answer

However, if we define a KG as a graph with nodes that represent entities (like city and country) and edges that represent the relations between those entities (which is a common definition of a KG), we can view a KG as a visual representation of a KB, defined as a set of sentences/facts (for example, expressed in propositional logic). To see why this is the case, consider the following simple KG.

Let's denote this KG by $K = \{E, R\}$, where $E = \{ \text{Santiago}, \text{Chile}, \text{Perú}\} = \{ S, C, P\}$ is the set of entities (with a property) and $R = \{ \text{capital}, \text{borders}\} = \{ c, b \}$ is the set of relations. In $K$, we have the following relations

  • Santiago is the capital of Chile: $(S, c, C)$
  • Chile borders Perú: $(C, b, P)$
  • Perú borders Chile: $(P, b, C)$

which can be put into a set of facts $F = \{(S, c, C), (C, b, P), (P, b, C) \}$, which is the KB associated with this specific KG.
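To make this concrete, here is a tiny sketch of this KB as a set of triples (plain Python, just to illustrate the idea):

# Entities and relations as plain strings; the KB is just a set of triples.
S, C, P = "Santiago", "Chile", "Perú"
capital, borders = "capital", "borders"

F = {(S, capital, C), (C, borders, P), (P, borders, C)}

# A trivial query over the KB: which entities border Chile?
print({s for (s, r, o) in F if r == borders and o == C})  # {'Perú'}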

In the context of KGs, we also have a task/problem similar to the inference (or knowledge reasoning) one in the context of KBs, which is known as graph completion. It can be divided into subtasks, like entity prediction, relation prediction and triple classification, which use knowledge graph embeddings.

Further reading

You should really read Appendix A and successive appendices of the mentioned survey to get more information about the issues concerning the history and definition of knowledge graphs and knowledge bases, and other related concepts, like schemas. You can also read chapter 7 (p. 235) of the AIMA book, 3rd edition to know more about knowledge bases.

",2444,,2444,,12/9/2021 12:21,12/9/2021 12:21,,,,0,,,,CC BY-SA 4.0 32667,1,,,12/7/2021 11:30,,0,72,"

As shown in the figure:

Why does token prediction work when "Socrates" is replaced with "Plato"?

From the point of view of symbolic logic, the above example effectively performs the logic rule:

∀x. human(x) ⇒ mortal(x)

How might we explain this ability? Moreover, how is this learned in just a few shots of examples?

I think this question is key to understanding the Transformer's logical reasoning ability.

Below are excerpts from 2 papers:

",17302,,17302,,12/24/2021 12:34,12/24/2021 12:34,"Why is BERT/GPT capable of ""for-all"" generalization?",,0,8,,,,CC BY-SA 4.0 32671,1,32673,,12/7/2021 17:55,,2,295,"

I am getting tripped up slightly by how specifically the gradient is calculated in policy gradient methods (just the intuitive understanding of it).

This Math Stack Exchange post is close, but I'm still a little confused.

In standard supervised learning, the partial derivatives can be acquired specifically because we want to learn about the derivative of the cost with respect to input parameters and then adjust in the direction of minimising this error.

Policy gradients are the opposite, as we want to maximise the likelihood of taking good actions. However, I don't understand what we are getting partial derivatives with respect to - in other words, what is the 'equivalent' of the cost function, specifically for $\nabla_\theta \log\pi_\theta$?

",34530,,2444,,12/8/2021 8:31,12/8/2021 8:45,What specifically is the gradient of the log of the probability in policy gradient methods?,,2,0,,,,CC BY-SA 4.0 32673,2,,32671,12/7/2021 20:56,,1,,"

I would recommend not trying to think of this in relation to supervised learning.

The policy $\pi(\cdot; \theta)$ is simply a function that is parameterised by theta. If we take a $\log$ of this function, it is still just a function. We want to take the (partial) derivative(s) of this function with respect to the parameters so that we can perform a gradient ascent step on the parameters.

A simple example can be shown by letting $\pi(a; \alpha, \beta) = \exp(\alpha + \beta a)$. In the policy gradient theorem we must first take a log of the policy, which would give us $\log(\pi(a; \alpha, \beta)) = \alpha + \beta a$, and the partial derivatives with respect to the parameters are $\nabla_\alpha \log(\pi(a; \alpha, \beta)) = 1$ and $\nabla_\beta \log(\pi(a; \alpha, \beta)) = a$. We can then use these partial derivatives to perform a gradient ascent update in the direction of the gradient of our objective (the value function, which is of course what we want to maximise) for $\alpha$ and $\beta$ for a given return $G_t$ and action $a_t$ by \begin{equation} \alpha' = \alpha + G_t \times \nabla_\alpha \log(\pi(a_t; \alpha, \beta)) = \alpha + G_t \times 1 \; \\ \beta' = \beta + G_t \times \nabla_\beta \log(\pi(a_t; \alpha, \beta)) = \beta + G_t a_t\;. \end{equation}
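Here is a tiny Python sketch of this update for the toy policy above (I've added a learning rate lr, which the equations leave out, and the numbers are arbitrary):

def grad_log_pi(a):
    # gradients of log pi(a; alpha, beta) = alpha + beta * a
    return 1.0, a          # w.r.t. alpha, w.r.t. beta

def update(alpha, beta, a_t, G_t, lr=0.01):
    g_alpha, g_beta = grad_log_pi(a_t)
    # gradient ascent: move parameters to increase the return-weighted log-probability
    return alpha + lr * G_t * g_alpha, beta + lr * G_t * g_beta

alpha, beta = 0.0, 0.0
alpha, beta = update(alpha, beta, a_t=0.5, G_t=2.0)
print(alpha, beta)         # 0.02 0.01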

In practice, however, you're likely to need a much more complex function for the policy, typically a neural network of some description. However, everything translates to these more complex functions, you're just going to have many more partial derivatives to calculate.

",36821,,,,,12/7/2021 20:56,,,,1,,,,CC BY-SA 4.0 32675,1,,,12/8/2021 7:55,,1,13,"

I have my own data to train a logistic regression model (for a multi-class classification task), and I want to know how the distribution of weight parameters changes after each update with gradient descent.

For example, let's say that there are $f$ many features for each input, and the weight $W$, which is a $c \times f$ matrix, where $c$ is a number of classes, is initialized with uniform distribution $U(-1/\sqrt{f}, 1/\sqrt{f})$, which is LeCun uniform initialization.

For each step of gradient descent with Cross-Entropy loss, it will be updated as $$ W_{t+1} = W_{t} - \alpha \frac{\partial \mathcal{L}}{\partial W_{t}} $$ where $\alpha$ is a learning rate and the gradient is given by $$ \frac{\partial \mathcal{L}}{\partial W_{t}} = \frac{1}{n} (\mathbf{p} - \mathbf{y})^{T}\mathbf{X} $$ where $\mathbf{X} \in \mathbb{R}^{n\times f}$ is an input matrix, $\mathbf{y} \in \mathbb{R}^{n \times c}$ is one-hot encoded labels, and $$\mathbf{p} = \mathrm{softmax}(\mathbf{X}W_{t}^{T}) \in \mathbb{R}^{n \times c}$$ is the model's output (predicted probability for each example & class).
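For reference, here is a minimal NumPy sketch of the setup described above (the sizes and the learning rate are arbitrary placeholders):

import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

n, f, c, alpha = 1000, 20, 5, 0.1                           # arbitrary sizes and learning rate
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(n, f))                     # bounded inputs
y = np.eye(c)[rng.integers(0, c, size=n)]                   # one-hot labels
W = rng.uniform(-1/np.sqrt(f), 1/np.sqrt(f), size=(c, f))   # LeCun uniform initialization

norms = []
for t in range(500):
    p = softmax(X @ W.T)                                    # (n, c) predicted probabilities
    grad = (p - y).T @ X / n                                # gradient of the cross-entropy loss
    W = W - alpha * grad
    norms.append(np.abs(W).max())                           # ||W_t||_inf per iteration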

What I want to know is how the distribution of $W_{t}$ changes as $t$ increases, if some information about $\mathbf{X}$ is known. More precisely,

  1. Is it possible to get a bound of $\mathbb{E}[||W_{t}||_{\infty}]$ in terms of $t, \alpha, ...$? We may assume that the input $\mathbf{X}$ is also bounded (in the sense that $||\mathbf{X}||_{\infty} \leq M$ for known $M$.) I have a really rough bound for $||W_{t}||_{\infty}$, but it is not good enough.
  2. Are there any works in this direction for other models, such as MLPs?

When I plotted the value $||W_{t}||_{\infty}$ for each $t$, then it seems that it increases sub-linearly in $t$, but

",30886,,2444,,12/9/2021 8:54,12/9/2021 8:54,How does the distribution of the parameters change in logistic regression?,,0,0,,,,CC BY-SA 4.0 32676,2,,32671,12/8/2021 8:45,,2,,"

Consider a function $f(x)$ where $x$ is a random variable, whose distribution depends on $\theta$. The objective is to minimize \begin{align*} \mathbb{E}_x[f(x)] = \int_x f(x) \pi(x, \theta) dx \end{align*} where $\pi(x, \theta)$ is the probability density of $x$ given the parameters $\theta$ (to be formal, you should use dummy variables in place of x). The gradient is then \begin{align*} \nabla_{\theta} \mathbb{E}_x[f(x)] = \int_x f(x) \nabla_{\theta}\pi(x, \theta) dx \label{1}\tag{1} \end{align*} Eq. (1) is essentially the policy gradient. I think when written like Eq. (1), it's clear to see what the gradient actually means.

To see why this is the policy gradient, first we have \begin{align*} \int_x f(x) \nabla_{\theta}\pi(x, \theta) dx = \int_x f(x) \nabla_{\theta}\pi(x, \theta) \dfrac{\pi(x, \theta)}{\pi(x, \theta)}dx = \mathbb{E}_x[ f(x) \nabla_{\theta}\log \pi(x, \theta)]. \end{align*} Now interpret $x$ as the actions, and interpret $f(x)$ to be the expected return. I omit the full details of this, but hopefully you get the idea.

The policy gradient is simply the gradient of the expected returns, with respect to the parameters of the action distribution.

",47080,,,,,12/8/2021 8:45,,,,1,,,,CC BY-SA 4.0 32677,1,,,12/8/2021 10:36,,3,175,"

On-policy distribution is defined as follows in Sutton and Barto:

On the other hand, state visitation frequency is defined as follows in Trust Region Policy Optimization:

$$\rho_{\pi}(s) = \sum_{t=0}^{T} \gamma^t P(s_t=s|\pi)$$

Question:: What is the difference between an on-policy distribution and state visitation frequency?

",46214,,46214,,12/9/2021 16:04,12/9/2021 16:04,What is the difference between an on-policy distribution and state visitation frequency?,,0,2,,,,CC BY-SA 4.0 32678,2,,28440,12/8/2021 10:45,,1,,"

You should not use the training data for hyper-parameter tuning. In other words, when doing the hyper-parameter tuning, you should not optimize the training objective. You should optimize an objective computed on a dataset that is different than the training dataset. This dataset is sometimes called validation dataset, which can also be used for early stopping, which is not a way of hyper-parameter tuning but to avoid over-fitting.

In the Keras Tuner, you can specify the validation data (which is passed to the fit method under the hood) and the objective of the hyper-parameter optimization. You should specify that the objective is computed on the validation data (e.g. val_loss or val_accuracy), which should be different than the training data. Here you have a complete example.
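Here is a minimal sketch of this (the model, the search space, and the data variables x_train, y_train, x_val, y_val are placeholders):

import keras_tuner as kt
from tensorflow import keras

def build_model(hp):
    model = keras.Sequential([
        keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

tuner = kt.RandomSearch(build_model,
                        objective="val_accuracy",   # computed on the validation data
                        max_trials=10)

# validation_data is forwarded to model.fit under the hood
tuner.search(x_train, y_train, validation_data=(x_val, y_val), epochs=5)
best_model = tuner.get_best_models(num_models=1)[0]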

Once you have selected your hyper-parameters, you can retrain the best model on a training dataset, which may or may not be the same as the training dataset you used during hyper-parameter optimization, but you should use a test dataset to assess the generalization ability of your model. The main requirement is that the training, validation, and test datasets are disjoint in order to avoid bias.

If you use k-fold cross-validation, you will be training and testing your model with different parts of your whole dataset each time. So, if you have $k$ folds, you will use $k - 1$ folds for training and one for testing. You will do this $k$ times, each time with a different fold for testing and the rest for training. This means that you will be using all your data for training and testing, but, each time, the training and test datasets are separate. You use k-fold CV in order to compute an average (and/or variance) estimate of generalisation of your model (you do this especially when your dataset is small).

In principle, I don't see any big problem with doing k-fold cross-validation after you have selected the best hyper-parameters (e.g. architecture), as long as you used a separate validation dataset, which is disjoint from the data you used for k-fold CV. However, make sure to shuffle your data before hyper-parameter tuning (i.e. before selecting the validation dataset), so that the distribution of the training, validation, and test datasets is roughly the same.

If I understand your descriptions correctly, it seems that you used the validation data also during k-fold CV. I don't think this was a good idea, as you're training and testing a model using data that you used to select that model itself (so this can introduce some kind of bias). You also don't specify which metric you optimize during hyper-parameter optimization.

",2444,,,,,12/8/2021 10:45,,,,3,,,,CC BY-SA 4.0 32679,1,,,12/8/2021 11:57,,1,184,"

I have created a data set with 30.000 text documents (each text file is rather small with respect to its length), which are labelled with 0 and 1. Using this data set, I want to train machine learning and deep learning models in order to be able to classify new text files.

On the one hand, I want to use classical machine learning models (such as logistic regression, random forest, SVM, etc.) with the Bag of Words/TF-IDF approach. This requires extensive text pre-processing, such as tokenization, stemming, conversion to lower case, removal of stopwords and punctuation, lemmatization, etc.

On the other hand, I want to use new deep learning models (such as RNN, LSTM, BERT, XLNET, etc.).

Which pre-processing steps are necessary/advantageous for these deep learning models? Should I also use tokenization, stemming, conversion to lower case, removal of stopwords and punctuation, lemmatization, etc., or can I omit most of these steps?

",33511,,2444,,12/9/2021 9:02,12/17/2021 9:27,Which pre-processing steps are necessary for Deep Learning models to solve a document classification problem?,,1,0,,,,CC BY-SA 4.0 32680,1,,,12/8/2021 14:13,,0,107,"

Currently, I'm trying to optimize a training process of a neural net to improve final results. The problem I'm dealing with is multiclass segmentation on microscopic data.

The paradox is that the best (and still not sufficient) result is given by the simplest U-Net architecture on the original dataset. If I try a deeper or more complex model (e.g. R2U-Net), the final segmentation is significantly worse. If I try on-the-fly augmentation, it is worse as well. Changing a complex model into a shallower one didn't help either (I just tried the opposite direction to making it more complex).

Now, I'm trying to make a custom loss function work to improve the segmentation.

Any ideas what might be a root cause? Or any other ideas that could improve the result?

To get more specific, here's an example of the data. The initial 4000x4000 images are cut into 512x512 tiles, which results in a little over 3700 images. Most of them don't include the classes and are just background; that's why I'm trying to make other loss functions work, as well as weighted classes.

So far, I'm using categorical cross-entropy as the loss function; however, the Dice, Jaccard and focal losses seem like they could be more suitable, and once I finish my computations I'll try to make these work again. So far, my attempts didn't seem really compatible with Keras.

The size of U-Net

  • depth = 5
  • first conv layer has 64 filters, goes up to 1024.

R2U-Net

  • depth 5
  • first layer 64 filters (also tried 32)
",47266,,2444,,12/14/2021 23:15,12/14/2021 23:15,Why is the simplest U-Net architecture giving the best (but not good enough) results on a multi-class segmentation on microscopic data?,,0,3,,,,CC BY-SA 4.0 32685,1,32722,,12/9/2021 7:21,,1,24,"

I'm trying to train a function for a industrial-process-control-like system. This is my first attempt at a custom training, so feel free to point out any invalid assumptions.

I've got one input and one controlled output, which I'm trying to optimise. I've reduced the problem with some values normalisation to:

  • the first half of the input looks like a sin() rise from 0 to the max value, then a dropoff - with lots of noise on top (let's say up to +/-10% at each measurement)

  • I don't know what the max is, but it's roughly predictable (input goes from 0 to between 0.5 and 2)

  • the output cannot go down (well, it can by a tiny bit, but I'd ignore that here), and cannot go higher than the input value

  • the goal is to get the output value as high as possible

Currently the best non-NN approach I've got is to start a few % below input and at each step run output = A*input + (1-A)*previous_output, so the result looks like this (input in blue, output in orange)

I wanted to check if some RNN can improve on this, so I'm planning to check an LSTM doing this instead. I'm struggling to come up with a loss calculation which is viable for training here.

I considered making the input and output of the network the absolute change from the previous value (or using the input-last_output difference as the input), and then using as the loss some kind of inverted value of sum(output changes) from step 0 to the crossover point, to reward higher maximums while ignoring the distance (so discarding anything that happens past the crossover).

But... that doesn't relate to the input really, so tensorflow wouldn't be able to train based on this, if I understand it correctly. Am I going in the wrong direction? Are there some known ways to solve this problem?

",51438,,,,,12/12/2021 11:17,"Writing a loss function for ""how far can this output be pushed""",,1,0,,,,CC BY-SA 4.0 32686,1,,,12/9/2021 9:37,,7,176,"

Rotated MNIST is a popular dataset for benchmarking models equivariant to rotations on $\mathbb{R}^2$, described by $SO(2)$ group or its discrete subgroups like $\mathbb{Z}^{n}$:

It consists of all digits from 0 to 9 rotated by an arbitrary angle from $[0, 2 \pi)$. However, what puzzles me a bit is that the digits $6$ and $9$ seem bound to be confused by any learning algorithm, since, from the point of view of human perception, $6$ rotated by 180 degrees is equivalent to $9$ and vice versa.

The original paper in the description of Rotated MNIST doesn't comment on this point at all, which is strange, since it is a very natural question to ask.

In the paper Oriented Response Networks, the authors plot embeddings of rotated digits projected via t-SNE onto a 2D plane. There is a clear separation between all rotated versions of 6 and all rotated versions of 9 for ORN.

I do not understand how this can be achieved. Probably, the networks pick up much more about how the digit is written; are there some subtle features, inaccessible to humans, but recognizable by a powerful classifier?

",38846,,2444,,12/10/2021 10:45,10/26/2022 14:40,How can a neural network distinguish a rotated 6 and 9 digits?,,2,0,,,,CC BY-SA 4.0 32687,2,,4279,12/9/2021 9:53,,1,,"

I think the CNN and RNN proposed in the other answer are a bad choice for this particular problem.

The input is an unordered sequence of the features corresponding to each runner, so the input is essentially a set, without the notions of order and locality. If the runners are assigned numbers in random order, there is no sense in which runner 1 is close to runner 2, or runner 3 precedes runner 4.

Therefore, a CNN or RNN seems to be a bad choice, since they add an inductive bias irrelevant to the data.

The question is pretty old, and at that time the Transformer architecture had just been invented, but this is a case where it can be applied particularly well. Some points to note:

  • Since there is no order, there is no need for positional embeddings
  • Sequences are short, hence $O(N^2)$ complexity with respect to the length of sequence is small
  • There is no need for decoder

Overall, I would suggest stacking several encoder blocks and predicting ranks from the resulting embeddings, as in the sketch below.
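Here is a rough PyTorch sketch of that suggestion (the dimensions, number of runners, and number of raw features are placeholders; note that no positional encoding is added):

import torch
import torch.nn as nn

d_model, n_runners, batch = 64, 12, 8                   # placeholder sizes

proj = nn.Linear(10, d_model)                           # project 10 raw features per runner
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=3)    # stacked encoder blocks, no positional encoding
score_head = nn.Linear(d_model, 1)                      # one ranking score per runner

features = torch.randn(batch, n_runners, 10)            # (batch, runners, raw features)
scores = score_head(encoder(proj(features))).squeeze(-1)  # (batch, runners)
ranking = scores.argsort(dim=-1, descending=True)       # predicted finishing order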

As a loss, choose a reasonable option from those mentioned in the paper Ranking Measures and Loss Functions in Learning to Rank.

",38846,,,,,12/9/2021 9:53,,,,0,,,,CC BY-SA 4.0 32689,2,,3981,12/9/2021 14:49,,0,,"

What you are after is called "Class-incremental learning" (IL).

In this study they consider three classes of solutions:

  • regularization-based solutions that aim to minimize the impact of learning new tasks on the weights that are important for previous tasks;
  • exemplar-based solutions that store a limited set of exemplars to prevent forgetting of previous tasks;
  • solutions that directly address the problem of the bias towards recently-learned tasks.

Their main findings are:

  • For exemplar-free class-IL, data regularization methods outperform weight regularization methods.
  • Finetuning with exemplars (FT-E) yields a good baseline that outperforms more complex methods on several experimental settings.
  • Weight regularization combines better with exemplars than data regularization for some scenarios.
  • Methods that explicitly address task-recency bias outperform those that do not.
  • Network architecture greatly influences the performance of class-IL methods, in particular the presence or absence of skip connections has a significant impact.
",16363,,,,,12/9/2021 14:49,,,,0,,,,CC BY-SA 4.0 32696,1,32697,,12/9/2021 20:31,,1,231,"

In AlphaZero's attached pseudocode, they create a training target for the policy network in this way.

def store_search_statistics(self, root):
    sum_visits = sum(child.visit_count for child in root.children.itervalues())
    self.child_visits.append([
        root.children[a].visit_count / sum_visits if a in root.children else 0
        for a in range(self.num_actions)
    ])

In other words, the training target probability for a certain move is proportional to its visit count.

However, in the paper, they describe the usage of softmax sampling of visit counts with a temperature. This temperature is equal to 1 for the first 30 moves (in this case, the policy training target is the same as in the pseudocode above), and for subsequent moves they set an infinitesimal temperature -> 0, which essentially means they are picking the move with the highest visit count.

Since these are 2 different things (if the game has more than 30 moves), my question is: which approach should be used for creating the training target for the policy?

",50898,,2444,,12/10/2021 8:52,12/10/2021 8:52,What is a policy training target in AlphaZero?,,1,2,,,,CC BY-SA 4.0 32697,2,,32696,12/9/2021 20:55,,3,,"

The training target for the policy is always exactly the one described by the pseudocode; distribution proportional to visit counts, without any other kind of scaling.

The softmax sampling with the temperature (different for first 30 moves than after that) is not used for the policy target, but is used for the "real" move selection by the agent in the self-play game (i.e. after it has run its search of 800 or 1600 or however many iterations it was for the root game state encountered in the self-play game).


Generally, in my experience, you really wouldn't want the policy target to become too close to deterministic, except if really your entire MCTS already is extremely convinced about one move being clearly the best one all by itself (without getting pushed towards something more deterministic by a low-temperature softmax). If your policy becomes too deterministic too quickly, it destroys the exploration that we normally want MCTS to do inside its selection phase. Using a low-temperature softmax for the policy training target would introduce quite a big risk of this happening.

",1641,,,,,12/9/2021 20:55,,,,0,,,,CC BY-SA 4.0 32699,1,32713,,12/10/2021 0:13,,0,189,"

I've watched Tesla AI Day 2021 and there was a question the Tesla staff tried to answer, but I did not quite understand the question (note: the quote is taken from autogenerated subtitles; I do not hear it differently, but maybe you will):

or you'd be training a lot more complex models which would be potentially significantly more expensive to run at inference time on the cars

I've found a definition of "inference time" in How to Optimize a Deep Learning Model for faster Inference?

The inference time is how long is takes for a forward propagation

But what does "AT inference time on the cars" mean? Is it just badly worded, or does this "at" actually add meaning? Also, does it make sense to run training of models on the cars themselves, and what could that phrase mean? Overall, I cannot make sense of the question. Can you?

Note: I'm not a native English speaker.

",51461,,2444,,12/11/2021 12:43,12/11/2021 12:45,"What does ""at inference time"" on Tesla's cars mean?",,1,0,,,,CC BY-SA 4.0 32700,2,,8570,12/10/2021 7:05,,1,,"

AI is not only about neural networks.

Formal proof assistants (like Coq, or Frama-C) are in some circles considered as AI. Projects like DECODER have an AI flavor.

Symbolic AI systems (like RefPerSys) and, more generally, expert systems are advocated in AI books like Artificial Beings: The Conscience of a Conscious Machine. That book (by Pitrat) doesn't mention neural networks much.

Neural networks are good for some problems (e.g. computer vision), but less adequate for other problems.

Autonomous robots (like Mars Pathfinder) don't use only neural networks.

Natural language processing is a subpart of AI, and is not only using a neural network approach.

",3335,,,,,12/10/2021 7:05,,,,0,,,,CC BY-SA 4.0 32707,2,,24069,12/10/2021 16:02,,0,,"

Meta-learning can mean many things, but at its core it is about having a second layer of optimisation, besides the usual one needed to solve your task.

For instance, in RL for robotics you may have a SAC (Soft Actor Critic) agent that learns how to pick and place, by first initialising a random neural network and then learning which weights minimise a loss function related to successful picks. Given this architecture you can fix a meta-goal, for instance being precise (base-goal) and fast (meta-goal) at picking. Or maximising human safety, minimising robot wear, and so on.

Now you can meta-learn the best meta-parameters to achieve this meta-goal. Example of meta-parameters could be the network initialisation, the shape of the loss function, the network architecture, etc.

Check out Meta-Learning in Neural Networks: A Survey https://arxiv.org/abs/2004.05439

",16363,,,,,12/10/2021 16:02,,,,0,,,,CC BY-SA 4.0 32708,2,,16133,12/11/2021 5:14,,2,,"

I like to think of hidden states as intermediate representations of input within a neural system. The overall goal of the system is to re-represent an input in some specific way so that the system can produce some target output. Each layer within a neural network can only really "see" an input according to the specifics of its nodes, so each layer produces unique "snapshots" of whatever it is processing. Hidden states are sort of intermediate snapshots of the original input data, transformed in whatever way the given layer's nodes and neural weighting require.

The snapshots are just vectors so they can theoretically be processed by any other layer - by either an encoding layer or a decoding layer in your example.

",51014,,,,,12/11/2021 5:14,,,,0,,,,CC BY-SA 4.0 32712,1,33980,,12/11/2021 11:39,,0,99,"

While reading source code related to RNN encoders, I've come across the term mask as input to the encoder. What exactly is it?

",18758,,2444,,12/31/2021 13:07,1/2/2022 23:45,"What is a ""mask"" in the context o RNN-based encoders?",,1,0,,,,CC BY-SA 4.0 32713,2,,32699,12/11/2021 12:45,,2,,"

"At inference time" means "when you perform inference". If "inference" is a synonym for "forward pass" (aka "forward propagation") (which is not always the case in ML), then "at inference time", again, means "when you perform the forward pass". "At" is just a preposition in English and it's often associated with location or time.

So, the sentence

you'd be training a lot more complex models which would be potentially significantly more expensive to run at inference time on the cars

can be rewritten as follows

you'd be training a lot more complex models which would be potentially significantly more expensive to run when performing the forward pass (i.e computing the predictions, for example, whether the traffic light is green or red) on the cars
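For example, here is a rough way to measure the forward-pass (inference) time of a PyTorch model (model and batch are placeholders for an actual trained network and one batch of input):

import time
import torch

model.eval()                                    # assumed: a trained PyTorch model
with torch.no_grad():
    start = time.perf_counter()
    predictions = model(batch)                  # assumed: one batch of input data
    latency_ms = (time.perf_counter() - start) * 1000
print(f"forward pass took {latency_ms:.1f} ms")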

",2444,,,,,12/11/2021 12:45,,,,8,,,,CC BY-SA 4.0 32715,1,,,12/11/2021 14:29,,1,731,"

I am reading about RNN encoders. I came across the following line from this code, and I am having difficulty understanding the theoretical details regarding it.

emb = self.drop(self.encoder(input))

The input is a tensor of shape $[32, 100]$. Here, 32 is the batch size and 100 is the length of the sentence. The hundred elements are indices of the words (from the dictionary) that are used in the sentence. We can observe that the output emb is later passed to the rnn (LSTM/GRU) layer.

output, hidden = self.rnn(emb, hidden)

So, to me, it looks like self.encoder is a necessary step when using the RNN encoder, so I am interested in what it actually does.

When we look at self.encoder, we see it is an Embedding layer. The description of this layer is as follows

A simple lookup table that stores embeddings of a fixed dictionary and size.

This module is often used to store word embeddings and retrieve them using indices. The input to the module is a list of indices, and the output is the corresponding word embeddings.
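To make the quoted description concrete, here is a minimal sketch of such a lookup (the dictionary size and the embedding dimension of 3000 are guesses chosen to match the shapes I report below):

import torch
import torch.nn as nn

embedding = nn.Embedding(num_embeddings=3000, embedding_dim=3000)  # lookup table of 3000 vectors

indices = torch.randint(0, 3000, (32, 100))   # 32 sentences, 100 word indices each
vectors = embedding(indices)                  # shape (32, 100, 3000): one vector per index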

When we look at self.drop, we see it randomly zeroes out elements of the embeddings.

During training, randomly zeroes some of the elements of the input tensor with probability p using samples from a Bernoulli distribution. Each channel will be zeroed out independently on every forward call.

The outputs for both self.encoder(input) and self.drop(self.encoder(input)) are $[32, 100, 3000]$.

I have doubt(s) on the bolded parts of the description of the Embedding layer. The description is saying the Embedding layer uses/contains(?) a lookup table. The description says Embedding layer stores and retrieves word embeddings.

The doubts are

  1. Generally, does an embedding layer calculate word embeddings or just store and retrieve them from the table? If it does not calculate them, then who will calculate the embeddings? If you can also comment on the specifics of PyTorch, I would appreciate it.

  2. What exactly is an embedding layer? Is it a collection of neurons or something else?

",18758,,2444,,12/13/2021 22:17,12/13/2021 22:36,What exactly is embedding layer used in RNN encoders?,,1,0,,,,CC BY-SA 4.0 32716,1,,,12/11/2021 15:40,,1,12,"

I have a question about the association between labels. Say my neural network performs multi-label classification in its output layer. Now, suppose one of the labels is for whether a person lives in city $X$, another for whether they have a wiki article, and finally a third for whether that person is a millionaire. Let us assume that these are highly correlated.

Is there some way for us to know if the neural network internally finds an association between these three labels?

",51486,,2444,,12/12/2021 21:32,12/12/2021 21:32,Is there some way for us to know if the neural network internally finds an association between labels?,,0,1,,,,CC BY-SA 4.0 32718,1,,,12/11/2021 21:53,,1,43,"

Suppose I have images of hand-written Japanese text. If I want to translate those images, would my ML algorithm be a 2-step model (for example, a CNN to convert the image into Japanese characters/tokens and then feed those tokens in an RNN)? Is this normally how it would be done, or is there an end-to-end solution?

",51490,,2444,,12/12/2021 11:24,1/11/2023 12:00,Is image machine translation done in two steps?,,1,0,,,,CC BY-SA 4.0 32719,1,32723,,12/11/2021 23:54,,2,201,"

GauGAN is a neural network architecture from NVIDIA that can create realistic images from semantic maps (and nowadays also textual descriptions).

",35727,,2444,,12/12/2021 9:19,12/13/2021 18:31,"What does ""Gau"" in GauGAN stand for?",,1,0,,,,CC BY-SA 4.0 32721,1,,,12/12/2021 8:33,,0,27,"

Suppose there are $m$ sentences in a text file and the number of distinct words is equal to $n$. The goal is to get word embeddings using RNN.

We know that it is impossible to pass any word, which is in text format, as an input to RNN. We need to convert each word into some number and then pass it to the RNN to get word embeddings.

I know only the following method, if correct:

  1. Assign an index to each word. So, the index ranges from $0$ to $n-1$.
  2. Use the indices as input to RNN.

Is it the only technique used in the literature? If not, what are the names of other techniques that are used in the context of RNN encoders?

",18758,,18758,,12/13/2021 22:24,12/13/2021 22:24,What are the types of inputs used for RNN in literature given sentences?,,0,3,,,,CC BY-SA 4.0 32722,2,,32685,12/12/2021 11:17,,0,,"

This turns out to be less about the loss function and more about the approach. There's a number of them implemented in the tf_agents package.

Choosing the DDPG agent for outputting the increments and implementing a PyEnvironment to run the simulation and return the highest reached output as a reward seems to work just fine. Very similar to the blackjack example.

",51438,,,,,12/12/2021 11:17,,,,0,,,,CC BY-SA 4.0 32723,2,,32719,12/12/2021 11:18,,2,,"

As you know, GauGAN is the following (from this post):

GauGAN was a Microsoft Paint-style platform that let users create landscape images, with the model then able to turn them into photorealistic images.

So, it is a generative adversarial network (GAN) for creating images. As it works like an artist, its authors named it GauGAN after Paul Gauguin, who was a French Post-Impressionist artist (refer to this post).

",4446,,4446,,12/13/2021 18:31,12/13/2021 18:31,,,,0,,,,CC BY-SA 4.0 32727,1,33948,,12/12/2021 19:21,,0,185,"

I was reading the latest version of the YOLO paper available on arXiv, and I don't fully understand the output dimensions (I understand width and height, but not depth) of the first and second convolutional layers.

Shouldn't the output of the first layer be 112x112x64? Shouldn't the output of the second layer be 56x56x192? I thought (and this is the case after the 3rd layer) that the depth of the output of a conv layer is equal to the number of filters of this layer. This is why, after the first conv layer (that contains 64 filters), I would expect an output depth of 64. The same for the second conv layer that contains 192 filters: I would expect the output to have a depth of 192.

",51503,,51503,,12/13/2021 8:41,12/28/2021 20:01,Are the output dimensions of the first and second convolutional layer in YOLO paper correct?,,1,3,,,,CC BY-SA 4.0 32729,2,,27233,12/12/2021 20:06,,0,,"

After having read a few parts of the book that mention these terms, I don't think there's any practical difference between a performance standard and a performance measure. They both measure the performance of the agents.

They initially use the term performance measure to refer to rationality in section 1.1

The definitions on the left measure success in terms of fidelity to human performance, whereas the ones on the right measure against an ideal performance measure, called rationality.

Later, they define this performance measure in the context of rational agents in section 2.2.

If the sequence is desirable, then the agent has performed well. This notion of desirability is captured by a performance measure that evaluates any given sequence of environment states.

So, here, a performance measure evaluates a sequence of states.

In that same section (3rd edition), they explain that the performance measure should be designed in such a way that the agent achieves our desired goals.

Obviously, there is not one fixed performance measure for all tasks and agents; typically, a designer will devise one appropriate to the circumstances. This is not as easy as it sounds. Consider, for example, the vacuum-cleaner agent from the preceding section. We might propose to measure performance by the amount of dirt cleaned up in a single eight-hour shift. With a rational agent, of course, what you ask for is what you get. A rational agent can maximize this performance measure by cleaning up the dirt, then dumping it all on the floor, then cleaning it up again, and so on. A more suitable performance measure would reward the agent for having a clean floor. For example, one point could be awarded for each clean square at each time step (perhaps with a penalty for electricity consumed and noise generated). As a general rule, it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.

So, the usage of the term performance measure in this example is very analogous to the usage of the term reward function in RL. To be more precise, in RL, you could define a reward function as a function $r : \mathcal{S} \rightarrow \mathbb{R}$, so $r(s)$, where $s \in \mathcal{S}$, is the reward given to the RL agent when it enters or is in state $s$. This is consistent with their definition above (although they state that the performance measure evaluates a sequence of states, but I think this is due to the fact that, in a rational agent, you don't necessarily assume that the Markov property holds, like in an MDP and RL).

Later, they also use the term performance measure in the context of utility-based agents,

An agent's utility function is essentially an internalization of the performance measure. If the internal utility function and the external performance measure are in agreement, then an agent that chooses actions to maximize its utility will be rational according to the external performance measure

So, here, if you are familiar with RL, the distinction between the utility function and the performance measure should be clear. The utility function would be analogous to the value function, while the performance measure is again analogous to the reward function.

Later, in section 17.1.1, they write

In the MDP example in Figure 17.1, the performance of the agent was measured by a sum of rewards for the states visited. This choice of performance measure is not arbitrary

So, again, the performance measure is a synonym for the reward function in the context of RL.

In section 2.4, they write about the performance standard

The critic tells the learning element how well the agent is doing with respect to a fixed performance standard. The critic is necessary because the percepts themselves provide no indication of the agent’s success. For example, a chess program could receive a percept indicating that it has checkmated its opponent, but it needs a performance standard to know that this is a good thing; the percept itself does not say so. It is important that the performance standard be fixed. Conceptually, one should think of it as being outside the agent altogether because the agent must not modify it to fit its own behavior

That's why it appears outside the agent in the diagram.

On the next page, they write

In a sense, the performance standard distinguishes part of the incoming percept as a reward (or penalty) that provides direct feedback on the quality of the agent’s behavior.

In RL, this would be represented as $p = (o, r)$, i.e. the percept $p$ is composed of the observation $o$ and reward/performance $r$.

I don't really know why they used different terms for rational, utility-based, and learning agents, but, to me, I don't see any difference between the two terms.

",2444,,,,,12/12/2021 20:06,,,,0,,,,CC BY-SA 4.0 32732,1,32740,,12/12/2021 21:39,,2,87,"

I'm working on my thesis concerning a reinforcement learning problem and am trying to prioritise my time on different components of it:

  • Formalising the agent environment (like the design of state-, action-space and reward-structure)
  • Selection of learning algorithm
  • Selection of network architecture and size
  • Design of the training setup

It is an agent in a 3D environment with simulated physics (in Unity), the domain being a real-time strategy game. It is an environment with constrained training data, so sample efficiency is very important.

Now my question: I do anticipate that the design of the state- and action space will have a big impact on the training result, especially in this environment with little training data.

However, is there a way one can clearly prioritise what components will be the most important ones for an RL setting?

Time is limited, and, for me, as a beginner, it seems to be quite difficult to determine what component will be the most important one and needs the most focus. Testing only the hyper-parameters of a learning algorithm thoroughly will take in itself a long time. And obviously disregarding any component will result in bad results.

Is there a way to know on which component one should focus more?

",51505,,2444,,12/13/2021 9:45,12/13/2021 9:45,What components of reinforcement learning influence the result the most?,,1,0,,,,CC BY-SA 4.0 32734,1,,,12/13/2021 2:02,,0,52,"

I train my neural network on random points generated for a data set that theoretically consists of approximately $1.8 * 10^{39}$ elements. I sample (generate) tens of thousands of random points on each epoch with uniform distribution. For every model I try, it appears that it cannot get past $10-12\%$ accuracy on the data, even if I increase the size of the model to over ten million parameters.

Each feature of the data set is of two Rubik's cube positions, and the corresponding label is the first move of a possible solution to solve from the first provided position to the second provided position within twenty moves.

So, it's a classification model for $18$ distinct classes, one for each of the possible moves on a Rubik's cube. Seeing that it has $12\%$ accuracy (being greater than $1/18 \approx 5.6\%$) is nice because it does mean that it is learning something, just not enough.

I also notice that loss tends to go down to a hard minimum over many epochs, but accuracy stops increasing after only around ten epochs. On epoch 36, it reached a loss of $2.713$, and it repeatedly comes back down to $2.713$, but never any lower even after 2000 epochs.

I concatenated a convolutional layer with a fully connected layer to use it as the first layer of the model. Convolutional layers might not work for this as well as I'd hope, so I throw in the fully connected layer as a safeguard. Some Keras code below:

from tensorflow.keras.layers import (Input, Conv2D, Dense, Dropout, Flatten,
                                     Concatenate, TimeDistributed)

inp_cubes = Input((2,6,3,3,1))

x = TimeDistributed(TimeDistributed(Conv2D(1024,(2,2),(1,1),activation='relu')))(inp_cubes)

output_face_conv = TimeDistributed(TimeDistributed(Flatten()))(x)
flatten_inp_cubes = TimeDistributed(TimeDistributed(Flatten()))(inp_cubes)
x = TimeDistributed(TimeDistributed(Concatenate()))((output_face_conv, flatten_inp_cubes))

x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x)
x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x)
x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x)
x = TimeDistributed(TimeDistributed(Dense(1024,'relu')))(x)
x = TimeDistributed(TimeDistributed(Dropout(0.3)))(x) # face logits
x = TimeDistributed(Flatten())(x)
x = TimeDistributed(Dense(1024,'relu'))(x)
x = TimeDistributed(Dropout(0.3))(x)
x = TimeDistributed(Dense(1024,'relu'))(x)
x = TimeDistributed(Dropout(0.3))(x)
x = TimeDistributed(Dense(1024,'relu'))(x)
x = TimeDistributed(Dropout(0.3))(x) # cube logits
x = Flatten()(x)
x = Dense(1024,'relu')(x)
x = Dropout(0.3)(x)
x = Dense(1024,'relu')(x)
x = Dropout(0.3)(x)
x = Dense(1024,'relu')(x)

outp_move = Dense(18,'softmax')(x) # solution logits

I tried using only one of the two types of input layers separately, and nothing quite worked.

Loss is measured as categorical cross-entropy. I make use of time-distributed layers so that each of the two Rubik's cube positions from the input is processed equivalently, except when determining how they relate. I'm making sure to scale my data, and all that stuff. It really seems like this should just work, but it doesn't.

Is there any way to increase the model's performance without using hundreds of millions of parameters, or is that actually necessary?

I would have thought that there would be some relatively simple correlation between positions and solutions, although it's hard for us to see as humans, so maybe this comes down to the Cayley diagram of the Rubik's cube group being innately random, as though they're prime numbers or something.

EDIT: I guess I really did just need a bigger neural network. This new one has 75 million parameters. The second image shows how that model is able to learn the data set quite easily. It takes a long time to process, though.

",27169,,27169,,12/14/2021 7:40,12/14/2021 7:40,"If the model always underfits, do I really need a larger model?",,0,2,,,,CC BY-SA 4.0 32740,2,,32732,12/13/2021 9:38,,1,,"

I don't think there's a strategy that applies to all cases. In some cases, the reward function may need to be carefully designed (e.g. a self-driving car), but, in other cases, the reward function might be quickly designed (e.g. chess) and other parts of the RL system may require more care.

Here you can find tips to approach an RL problem. For example, one tip that I find useful, although probably obvious once you hear of it, is to compare your policy with a random policy. See also these tips.

",2444,,,,,12/13/2021 9:38,,,,1,,,,CC BY-SA 4.0 32741,2,,16725,12/13/2021 10:21,,1,,"

The main reason should be that Bayesian algorithms naturally incorporate a form of regularisation (the prior), so they should be less prone to over-fitting the small dataset. Of course, the choice of the prior can affect your estimates.

You can view certain training regimes in machine learning and deep learning as an application of Bayesian statistics: for example, if you train a neural network with weight decay, this is equivalent to MAP estimation with a zero-mean Gaussian prior on the weights.
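As a rough sketch of why this equivalence holds (assuming the standard zero-mean Gaussian prior on the weights $\mathbf{w}$ and dropping constants of the log-prior):

$$\hat{\mathbf{w}}_{\text{MAP}} = \arg\max_{\mathbf{w}} \left[ \log p(\mathcal{D} \mid \mathbf{w}) + \log p(\mathbf{w}) \right] = \arg\min_{\mathbf{w}} \left[ -\log p(\mathcal{D} \mid \mathbf{w}) + \lambda \lVert \mathbf{w} \rVert_2^2 \right],$$

so maximising the posterior is the same as minimising the usual training loss plus an L2 (weight decay) penalty.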

I guess it might also be computationally easier to perform Bayesian inference with a smaller dataset (although I have never tried to show that this is true: my practical experience in this context is limited to Bayesian neural networks). Of course, the more data you have, in theory, the closer your estimates should be to the actual true values.

Here you have a similar question.

",2444,,2444,,12/13/2021 10:28,12/13/2021 10:28,,,,0,,,,CC BY-SA 4.0 32742,2,,8467,12/13/2021 12:07,,1,,"

I don't know if time/space estimation will be explicitly programmed into an AGI, but the estimation of the computational resources to perform a certain action is definitely useful. Humans (especially, rational humans) regularly do this.

In reinforcement learning, for instance, the agent may be able to discover what are the most efficient actions to achieve the same goal, without having a dedicated component for estimating the space and time requirements. However, if it had a dedicated component, it might be able to perform more efficiently.

If you think of heuristic-based state-space search algorithms, like A*, you can see that they explicitly incorporate a way to estimate the resources to achieve the goal (in this case, the cost of the path to the goal), i.e. the heuristic function $h$. This turns out to be very useful, provided the heuristic is also suitable for the problem.
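To make this concrete, here is a minimal Python sketch of A* (not taken from any particular source; the neighbors, cost and h callables are assumptions for illustration), showing where the heuristic $h$ enters the search:

import heapq
from itertools import count

def a_star(start, goal, neighbors, cost, h):
    # neighbors(n) yields successor nodes, cost(a, b) is the step cost,
    # and h(n) is the heuristic estimate of the remaining cost to the goal.
    tie = count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(h(start), 0, next(tie), start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g                     # path found and its total cost
        for nxt in neighbors(node):
            g_new = g + cost(node, nxt)
            if g_new < best_g.get(nxt, float("inf")):
                best_g[nxt] = g_new
                # the priority g + h(n) is the estimated total cost through nxt
                heapq.heappush(frontier, (g_new + h(nxt), g_new, next(tie), nxt, path + [nxt]))
    return None, float("inf")                  # no path to the goal

The priority $g + h(n)$ is exactly the estimated total resource (path cost) needed to reach the goal through node $n$.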

In the Godel machine, which can be viewed as a theoretical model of AGI (or general problem solver), we also have this notion of proving (so not just estimating) that rewriting the code is useful before rewriting it.

",2444,,,,,12/13/2021 12:07,,,,0,,,,CC BY-SA 4.0 32743,2,,16224,12/13/2021 13:16,,2,,"

Another example where machine learning has been combined with symbolic AI is in the context of knowledge graphs (which can be viewed as a graphical/visual representation of a knowledge base), where people have been proposing ways to learn embeddings of the entities and relations of the graphs (known as knowledge graph embeddings), in order to be able to perform tasks like triple classification (i.e. given a triple $\langle s, r, o\rangle$ with a subject $s$, relation $r$ and object $o$, is this a real fact?).
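For instance, one of the simplest knowledge graph embedding models, TransE, scores a triple $\langle s, r, o\rangle$ with embeddings $\mathbf{e}_s, \mathbf{e}_r, \mathbf{e}_o$ as

$$f(\langle s, r, o\rangle) = -\lVert \mathbf{e}_s + \mathbf{e}_r - \mathbf{e}_o \rVert,$$

so a triple is considered plausible (e.g. for triple classification) when the relation embedding approximately translates the subject embedding onto the object embedding.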

",2444,,,,,12/13/2021 13:16,,,,0,,,,CC BY-SA 4.0 32744,1,,,12/13/2021 15:25,,1,195,"

I have some difficulties understanding the difference between Q-learning and SARSA. Here (What are the differences between SARSA and Q-learning?) the following updating formulas are given:

Q-Learning

$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma \max_aQ(s',a) - Q(s,a))$$

SARSA

$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma Q(s',a') - Q(s,a))$$

I know that SARSA is for on-policy learning while Q-learning is off-policy learning. So, in Q-learning, the epsilon-greedy policy (or epsilon-soft or softmax policy) is chosen for selecting the actions and the greedy policy is chosen for updating the Q-values. In SARSA the epsilon-greedy policy (or epsilon-soft or softmax policy) is chosen for selecting the actions and for updating the Q function.

So, actually, I have a question on that:

On this website (https://www.cse.unsw.edu.au/~cs9417ml/RL1/algorithms.html) there is written for SARSA

As you can see, there are two action selection steps needed, for determining the next state-action pair along with the first.

What is meant by two action selections? Normally you can only select one action per iteration. The other "selection" should be for the update.

",48758,,18758,,12/13/2021 21:58,12/13/2021 21:58,"What is meant by ""two action selections"" in SARSA?",,1,0,,,,CC BY-SA 4.0 32745,2,,32715,12/13/2021 15:36,,1,,"

If you look at the source code of PyTorch's Embedding layer, you can see that it defines a variable called self.weight as a Parameter, which is a subclass of the Tensor, i.e. something that can be changed by gradient descent (you can do that by setting the parameter requires_grad of the Parameter to True). In other words, the Embedding layer is not just a look-up table, but it's a layer where you have parameters (i.e. the embeddings, which are stored in self.weight) that can also be learnable. You can also initialize these embeddings (i.e. the self.weight parameter) from pre-trained ones using Embedding's method from_pretrained. In this case, you should set requires_grad to False.
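A minimal sketch of these two uses of the layer (the sizes and the random "pre-trained" matrix below are just placeholders for illustration):

import torch
import torch.nn as nn

# A learnable embedding table: 10000 tokens, 300-dimensional vectors.
emb = nn.Embedding(num_embeddings=10000, embedding_dim=300)
print(emb.weight.requires_grad)          # True: updated by gradient descent

# Looking up embeddings for a batch of token indices.
ids = torch.tensor([[1, 5, 42], [7, 0, 9]])   # shape (2, 3)
vectors = emb(ids)                            # shape (2, 3, 300)

# Initializing from pre-trained vectors and freezing them.
pretrained = torch.randn(10000, 300)          # stand-in for real pre-trained embeddings
frozen = nn.Embedding.from_pretrained(pretrained, freeze=True)
print(frozen.weight.requires_grad)            # False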

Generally, one can define an embedding layer $\mathcal{f}$ as a function that receives the raw inputs $\mathbf{i}$ (e.g. in the case of word embeddings, the raw inputs might be integers: one for each word) and transforms them to embeddings $\mathbf{e}$, which can be statically defined (e.g. from pre-trained embeddings or hardcoded), randomly initialized and/or learnable (during the training of the neural network). In other words, $f(\mathbf{i}) = \mathbf{e}$. So, that's why we pass $f(\mathbf{i})$, i.e. the embeddings, rather than $\mathbf{i}$.

",2444,,2444,,12/13/2021 22:36,12/13/2021 22:36,,,,1,,,,CC BY-SA 4.0 32746,2,,32744,12/13/2021 17:16,,3,,"

In my view, the best way to understand these algorithms is to read the pseudocode (multiple times, if necessary!).

Here's the pseudocode of Q-learning.

Here's the pseudocode of SARSA.

So, as you can see, in SARSA, we choose one action before the episode starts, and, during the episode, we choose (and take) again more actions. In both cases, we choose these actions with the same policy (e.g. $\epsilon$-greedy), which is derived from $Q$. In Q-learning, we do not choose an action before the episode starts. We only choose and take an action at each step of the episode (like in SARSA). Hence, in SARSA, we choose actions in two places (but only take an action at each step of the episode). Note the difference between choosing/selecting an action and taking an action in the environment (you may just choose an action to update the Q-function, i.e. without taking it into the environment!).
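Here is a minimal Python sketch of that SARSA loop (the env interface with reset() and step(), and the dict-like Q, are assumptions for illustration, not part of the pseudocode), which makes the two Choose steps explicit:

import numpy as np

def sarsa_episode(env, Q, alpha, gamma, epsilon, rng):
    # env is assumed to expose reset() -> s and step(a) -> (s', r, done);
    # Q is assumed to map every state to a NumPy array of action values (e.g. a defaultdict).
    def choose(s):  # epsilon-greedy policy derived from the current Q
        if rng.random() < epsilon:
            return int(rng.integers(len(Q[s])))
        return int(np.argmax(Q[s]))

    s = env.reset()
    a = choose(s)                         # first "Choose": before the loop starts
    done = False
    while not done:
        s_next, r, done = env.step(a)     # take the action in the environment
        a_next = choose(s_next)           # second "Choose": inside the loop
        target = r + gamma * Q[s_next][a_next] * (not done)
        Q[s][a] += alpha * (target - Q[s][a])
        s, a = s_next, a_next             # s <- s', a <- a'
    return Q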

",2444,,2444,,12/13/2021 17:32,12/13/2021 17:32,,,,7,,,,CC BY-SA 4.0 32748,2,,17317,12/13/2021 18:59,,1,,"

You've asked a question which is basically one of the most important open questions about neural networks. The answer is a huge mystery - any response to this question which immediately opens with a purported explanation is basically ridiculous. We don't know.

As you pointed out, the issue is that the training set simply does not contain enough information to uniquely specify the target function. On an infinite input domain like $\mathbb R^n$, no finite number of samples is enough to uniquely determine a single function, even approximately. Even accounting for bounds and discretization of the input, and even for the symmetries our architectures impose on the output function, our training sets are microscopic compared to the sizes of our input domains. The problem that neural networks successfully solve every day should be impossible.

You can think about this in a low dimensional input space to get some intuition. Doing supervised binary classification on the unit square (that is, your input is a pair of numbers) is equivalent to trying to determine a monochrome image by seeing a random sample of some of its pixels. In terms of the size of the training set relative to the size of the input domain, what neural networks do on say an image classification task like MNIST is comparable to, say, guessing a 1000x1000 monochrome image almost perfectly by observing 20 random pixels, and even 20 is probably generous. The task is impossible - unless you know something about what the target image is. If you know that the image (the target function) is restricted to some set $H$ of functions, then you might be able to determine it approximately from a finite sample. Neural networks must in some sense be doing this implicitly, with some set of "nice" functions $H$ which, it seems, happens to contain (approximations to) a lot of the functions we actually want them to learn, like the "is a cat" function on the space of all images.

The study of such sets of "nice" functions, and in particular how small they need to be before learning is possible, is the subject of statistical learning theory. But I'm not aware of any plausible answers for what $H$ could be for neural networks.

",1931,,,,,12/13/2021 18:59,,,,0,,,,CC BY-SA 4.0 32750,1,,,12/14/2021 12:15,,1,86,"

Problem Statement: We are given an optimisation problem; with production centres, source airport, destination airports, transfer points and finally delivered to the customers. This is better explained in the following picture.

Objective function 1: Minimise costs = inventory costs + transportation costs + penalty costs + loading/unloading costs

  1. Inventory costs = inventory cost at source airport + inventory costs at distribution centres

  2. Transportation costs = cost of transporting cargo from production centre to source airport (via trucks) + cost of transporting cargo through itineraries (via flight) + cost of transporting cargo from distribution centre to transfer points (via trucks) + cost of transporting cargo from transfer point to customers (via drones)

  3. Penalty costs = cost of operating flight routes and delay penalty costs

  4. Loading/unloading costs = cost of loading cargo on trucks at production centres + cost of unloading cargo from trucks at the transfer point

Mathematical Solution (Using IBM CPLEX solver / Docplex): The complete python code (.ipynb file) with the formulation is present in this Google Drive Link. This gives an optimal solution.

Query: Is there any non-mathematical, non-formulation based method to solve this problem statement? Something on the lines of Reinforcement Learning? If any implementation is also provided, it will be icing on the cake.

",51528,,,,,1/28/2022 10:57,Reinforcement Learning applied to Optimisation Problem,,0,3,,,,CC BY-SA 4.0 32752,1,,,12/14/2021 13:26,,1,88,"

The universal approximation theorem says that an MLP with a single hidden layer and enough neurons can approximate any bounded continuous function. You can validate it from the following statement

Multilayer Perceptron (MLP) can theoretically approximate any bounded, continuous function. There's no guarantee for a discontinuous function.

We can express any MLP in terms of algebraic expressions. And the expressions can be considered as symbolic-AI.

So, can I infer that symbolic AI algorithms can theoretically approximate any bounded continuous function?

If not, then why can't there be a one-one mapping between MLP and symbolic-AI algorithm?

",18758,,2444,,12/14/2021 16:43,12/14/2021 17:40,Are the capabilities of connectionist AI and symbolic AI the same?,,1,0,,,,CC BY-SA 4.0 32755,2,,18378,12/14/2021 14:51,,-1,,"

Neil's answer is wrong; Dylan gives the correct one!

Optimal policies are invariant under positive affine transformations of the reward function, and the reason why this is not the case in your example is explained in Dylan's answer.

Reference: the book Artificial Intelligence: A Modern Approach, 4th edition, section 16.1.3:

"the scale of utilities is arbitrary: an affine transformation leaves the optimal decision unchanged. We can replace U(s) by U'(s) = m U(s)+ b where m and b are any constants such that m > 0. It is easy to see, from the definition of utilities as discounted sums of rewards, that a similar transformation of rewards will leave the optimal policy unchanged in an MDP"

** Edit:

  • Since the answer has been updated, the answer is fully correct now :)
",51532,,51532,,12/26/2021 13:18,12/26/2021 13:18,,,,7,,,,CC BY-SA 4.0 32758,2,,32752,12/14/2021 17:32,,1,,"

Are the capabilities of connectionist AI and symbolic AI the same?

No, not usually. Why not usually? Neural networks (connectionist AI) are usually used for inductive reasoning (i.e. the process of generalizing given a finite set of observations), while symbolic AI is usually used for deduction (i.e. to logically derive conclusions from premises).

What is inductive reasoning? Let's say that all the birds that you have observed so far in your life fly, so your inductive thought is that all birds must fly, although you haven't seen all birds, so there could be exceptions (like penguins).

What is deductive reasoning? Let's say that you know that all humans are mortal (there's no exception). You know that Socrates is a human. So, you logically deduce that Socrates is mortal. If that was not the case, then either the premise was wrong or maybe Socrates is not a human.

We can express any MLP in terms of algebraic expressions. And the expressions can be considered as symbolic-AI.

So, can I infer that symbolic AI algorithms can theoretically approximate any bounded continuous function?

Now, if MLPs were a subset of symbolic AI, then we could conclude that symbolic AI can approximate bounded continuous functions. However, the definition of symbolic AI is usually restricted to knowledge-based and logic-based systems, so systems that write (e.g. using propositional logic) premises or facts (in knowledge bases) to deduce conclusions (other facts) from them. So, although $f(x) = \sigma(ax + b)$ (which can represent e.g. a perceptron) is a function with symbols $a$, $x$, $b$ and $\sigma(\cdot)$, these symbols are not used in the same way as the symbols in symbolic AI. They are variables that do not represent premises or conclusions.

If not, then why can't there be a one-one mapping between MLP and symbolic-AI algorithm?

I don't know if there can be a one-to-one mapping between some symbolic AI and neural networks.

However, it is possible to combine the two approaches. For example, in the context of knowledge graphs, which can be viewed as a way to represent facts and relations between those facts as a graph, we can learn embeddings (using machine learning), which can later be used to perform inductive reasoning on the knowledge graph.

There are other examples of attempts to combine symbolic AI with machine learning and neural networks. A famous example is the Markov Logic Network (MLN) (by Richardson and Domingos), which combines first-order logic with probabilistic graphical models. A related approach is used, for example, in OpenCog, which is a software platform for AI and AGI. In fact, I think it is widely believed that inductive and deductive reasoning are both necessary for an AGI. The AGI needs inductive reasoning for situations that involve uncertainty (most cases) and deductive reasoning for the remaining cases (e.g. doing math, i.e. we expect an AGI to be able to prove theorems, as we do). Another example is a combination of knowledge graphs with MLNs.

",2444,,2444,,12/14/2021 17:40,12/14/2021 17:40,,,,0,,,,CC BY-SA 4.0 32759,1,,,12/14/2021 17:49,,0,79,"

I came across this problem, and am not sure where to start. What model would work best for this problem and why?

Imagine the digits in the test set of the MNIST dataset (http://yann.lecun.com/exdb/mnist/) got cut in half vertically and shuffled around. Implement a way to restore the original test set from the two halves, whilst maximizing the overall matching accuracy.

",37948,,18758,,12/14/2021 22:11,12/14/2021 22:11,What model to train to restore MNIST test dataset,,0,2,,,,CC BY-SA 4.0 32761,2,,16133,12/14/2021 19:26,,2,,"

The hidden state in an RNN is basically just like a hidden layer in a regular feed-forward network - it just happens to also be used as an additional input to the RNN at the next time step.

A simple RNN then might have an input $x_t$, a hidden layer $h_t$, and an output $y_t$ at each time step $t$. The values of the hidden layer $h_t$ are often computed as:

$h_t = f(W_{xh}x_t + W_{hh}h_{t-1})$

Where $f$ is some non-linear function, $W_{xh}$ is a weight matrix of size $h\times x$, and $W_{hh}$ is a weight matrix of size $h\times h$. I've left out the bias terms for simplicity.

Thus, the values of the hidden layer $h_t$ depend on the input $x_t$ as well as on the previous hidden state $h_{t-1}$ (literally, the previous values of the hidden layer).
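A minimal NumPy sketch of this update (biases omitted, $f = \tanh$, random weights purely for illustration):

import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh):
    # One forward step of a simple RNN: h_t = tanh(W_xh x_t + W_hh h_{t-1}).
    return np.tanh(W_xh @ x_t + W_hh @ h_prev)

x_dim, h_dim = 4, 3
rng = np.random.default_rng(0)
W_xh = rng.normal(size=(h_dim, x_dim))
W_hh = rng.normal(size=(h_dim, h_dim))

h = np.zeros(h_dim)                       # initial hidden state
for x_t in rng.normal(size=(5, x_dim)):   # a sequence of 5 inputs
    h = rnn_step(x_t, h, W_xh, W_hh)      # the same weights are reused at every step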

",51537,,,,,12/14/2021 19:26,,,,0,,,,CC BY-SA 4.0 32763,1,,,12/14/2021 23:48,,0,232,"

According to the U-Net architecture image from the second page of the research paper (URL link) https://arxiv.org/pdf/1505.04597.pdf

How does the skip connection match its dimension to the same layer in the expansive path?

",51343,,2444,,12/15/2021 9:11,1/9/2023 14:08,How does the skip connection match its dimension to the same layer in the expansive path?,,1,2,,,,CC BY-SA 4.0 32764,2,,32763,12/15/2021 0:38,,0,,"

The output of each layer in the upscaling block has the same size as the input of the corresponding convolution layer in the downscaling block, after the input's feature maps have been cropped.

This is how the network is defined. Each conv layer in the downscaling block has a corresponding layer in the upscaling block to which the skip connection is made, except for the layer in the middle (sometimes called the latent layer). This is the layer that separates the downscaling block from the upscaling block, as seen in the original paper.

So, in short, it's just the way the network is designed. It doesn't use the whole feature map from the corresponding layer in the downsampling block.
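As a rough PyTorch sketch of what such a skip connection does (the tensor sizes below are just illustrative, not taken from the paper):

import torch

def center_crop(enc_feat, target_h, target_w):
    # Crop an encoder feature map (N, C, H, W) to the spatial size of the
    # corresponding decoder feature map, as done for U-Net skip connections.
    _, _, h, w = enc_feat.shape
    top = (h - target_h) // 2
    left = (w - target_w) // 2
    return enc_feat[:, :, top:top + target_h, left:left + target_w]

enc = torch.randn(1, 64, 64, 64)    # feature map from the contracting path
dec = torch.randn(1, 64, 56, 56)    # upsampled feature map in the expansive path
skip = center_crop(enc, dec.shape[2], dec.shape[3])
merged = torch.cat([skip, dec], dim=1)   # concatenate along the channel axis
print(merged.shape)                      # torch.Size([1, 128, 56, 56])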

For a reference, you can see TernausNet. There, they had to crop randomly to support the VGG encoder in the U-Net structure.

",51543,,51543,,12/15/2021 0:45,12/15/2021 0:45,,,,0,,,,CC BY-SA 4.0 32765,1,,,12/15/2021 1:07,,1,43,"

It is well-known that Godel's incompleteness theorems restricted the reachability of symbolic-AI, which is dependent on mathematical logic.

But, I am wondering whether it has any impact on the connectionist AI.

I don't think it has any impact on the capability of connectionist AI because of the following reasons I am aware of

  1. Connectionist-AI is more focused on generalization and is not about mathematical logic.
  2. The universal approximation theorem, contrary to Godel's incompleteness theorems, says that connectionist-AI is capable of approximating any bounded, continuous function. I am not sure about the implications of Godel's incompleteness theorems on either unbounded or discrete functions.

So, the incompleteness theorems seem to have no impact on the connectionist AI.

Do the theorems also restrict the reachability of connectionist-AI?

",18758,,,,,12/15/2021 1:07,Does Godel's incompleteness theorems restricts the scope of connectionist-AI?,,0,0,,,,CC BY-SA 4.0 32766,2,,11235,12/15/2021 3:00,,1,,"

The paper Undivided Attention: Are Intermediate Layers Necessary for BERT? should answer it.

In the abstract, they write

All BERT-based architectures have a self-attention block followed by a block of intermediate layers as the basic building component. However, a strong justification for the inclusion of these intermediate layers remains missing in the literature.

In the conclusion, they write

In this work we proposed a modification to the BERT architecture focusing on reducing the number of intermediate layers in the network. With the modified BERTBASE network we show that the network complexity can be significantly decreased while preserving accuracy on fine-tuning tasks.

",51547,,2444,,12/15/2021 9:25,12/15/2021 9:25,,,,2,,,,CC BY-SA 4.0 32767,1,32769,,12/15/2021 7:32,,0,150,"

For a variational autoencoder, we have an input $x$ (assume 1 data point for now, like an image), a latent code $z$ sampled from the encoder's output distribution, and an output $\hat{x}$.

If I were to draw a diagram for the VAE with the input, output, and latent code sample, is it appropriate to write those three as random variables/vectors? Or as instances of random variables/vectors?

I thought it was random variables/vectors, but I saw this discussion, where they talk about the dataset being instances.

",46842,,2444,,12/15/2021 10:04,12/15/2021 10:04,"For the VAE, should the input, output and latent variable code be random variables?",,1,0,,,,CC BY-SA 4.0 32768,2,,31693,12/15/2021 9:16,,0,,"

First, we need to transform the distribution of the first term:

$\mathrm{argmin} \space \alpha \mathrm{E}_{s\sim D, a \sim D}[\frac{\mu}{\hat\pi_{\beta}}Q(s,a)] + \frac{1}{2}\mathrm{E}_{s,a \sim D}(Q(s,a)-\hat\beta^{\pi}\hat Q^k(s,a))^2 $

Then, its derivative with respect to $Q$ is:

$\alpha\frac{\mu}{\hat \pi ^{\beta}} + Q(s,a)-\hat\beta^{\pi}\hat Q^k(s,a) = 0$

Then we have

$ Q(s,a) =\hat\beta^{\pi}\hat Q^k(s,a) -\alpha\frac{\mu}{\hat \pi ^{\beta}}$

",51551,,,,,12/15/2021 9:16,,,,1,,,,CC BY-SA 4.0 32769,2,,32767,12/15/2021 10:00,,1,,"

The VAE attempts to model a specific probabilistic (directed) graphical model (Bayesian network)

So, in this PGM, $\mathbf{z}$ and $\mathbf{x}$ are random variables. In principle, I think you could also model $\phi$ and $\theta$ as random variables (in Bayesian statistics, you can also model parameters as random variables and put priors on them).

In practice, the VAE attempts to learn a generative model given a dataset. In fact, the technical part of the VAE paper (section 2.1) starts with

Let us consider some dataset $\mathbf{X}=\left\{\mathbf{x}^{(i)}\right\}_{i=1}^{N}$ consisting of $N$ i.i.d. samples of some continuous or discrete variable $\mathbf{x}$. We assume that the data are generated by some random process, involving an unobserved continuous random variable $\mathbf{z}$.

Later, they use this dataset to define the likelihood

$$\log p_{\theta}\left(\mathbf{x}^{(i)}\right)=D_{K L}\left(q_{\phi}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) \| p_{\theta}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right)\right)+\mathcal{L}\left(\boldsymbol{\theta}, \boldsymbol{\phi} ; \mathbf{x}^{(i)}\right)$$

and, consequently, also the objective function (the Evidence Lower BOund, aka ELBO).

$$ \mathcal{L}\left(\boldsymbol{\theta}, \phi ; \mathbf{x}^{(i)}\right)=-D_{K L}\left(q_{\phi}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) \| p_{\theta}(\mathbf{z})\right)+\mathbb{E}_{q_{\phi}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right)}\left[\log p_{\theta}\left(\mathbf{x}^{(i)} \mid \mathbf{z}\right)\right] $$

Note that we use $\mathbf{x}^{(i)}$ in the formulas above, so the distributions are conditioned on the given samples/dataset and the likelihood function is defined in terms of the samples (that's what a likelihood usually is: a function of the parameters given the usually fixed data).

In the other more extensive paper about VAEs (by the same authors of the VAE), you also have a diagram of the VAE.

So, I think an answer to your question depends on what you actually want to show with the diagram. If you want to show how VAEs are trained, then you definitely need to show that we have a dataset. If your diagram is supposed to show the distributions that the VAE attempts to model, then you can probably use a PGM.

",2444,,,,,12/15/2021 10:00,,,,3,,,,CC BY-SA 4.0 32770,1,32776,,12/15/2021 11:23,,0,123,"

I want to write an algorithm that returns a unique directed graph (an adjacency matrix) that represents the structure of a given feedforward neural network (FNN). My idea is to deconstruct the FNN into the input vector and some nodes (see definition below), and then draw those as vertices, but I do not know how to do so in a unique way.

Question: Is it possible to construct such an algorithm, and if so, how would you formalize it?


Example [Shallow Feedforward Neural Network (SNN)]

To illustrate the problem, consider an SNN, defined as a mapping $f=\left(f_1(\mathbf{x}), \ldots, f_m(\mathbf{x})\right): \mathbb{R}^n\rightarrow\mathbb{R}^m$ where for $k=1,\ldots,m$

\begin{align} f_k(\mathbf{x}) &= \sum_{j=1}^{\ell} w_{j,k}^{(2)} \rho \left( \sum_{i=1}^n w_{i,j}^{(1)} x_i + w_{0,j}^{(1)} \right) + w_{0,k}^{(2)}, \quad \mathbf{x}=(x_1,\ldots,x_n)\in\mathbb{R}^n \end{align} and $w_{i,j}^{(k)}\in\mathbb{R}$ is fixed for all $i,j,k \in \mathbb{N}$ and $\rho:\mathbb{R}\rightarrow\mathbb{R}$ is a continuous mapping.

I want to determine the nodes that make up the FNN, where a node $N^{\rho}: \mathbb{R}^n\rightarrow\mathbb{R}$ is defined as a mapping \begin{align} \label{eq:node} && \quad && N^{\rho}(\mathbf{x}) &= \rho\left(\sum_{i=1}^n w_i x_i + w_0 \right), & \mathbf{x}=(x_1,\ldots,x_n)\in\mathbb{R}^n \end{align} where $\mathbf{w}=(w_0, \ldots,w_n)\in\mathbb{R}^{n+1}$ is fixed.

Clearly (to me) I can write each $f_k$ as

\begin{align} f_k(\mathbf{x}) &= \sum_{j=1}^{\ell} w_{j,k}^{(2)} N^{\rho}_j(\mathbf{x}) + w_{0,k}^{(2)}, \end{align} where $N^{\rho}_{j}$ is a node for $j=1,\ldots,\ell$. Now I see that $f_k$ is a node which takes as input the output of other nodes. But how can I formalise this in an algorithm? And does it generalize to Deep Feedforward Neural Networks?

",50595,,2444,,12/21/2021 9:52,12/21/2021 9:52,How to uniquely associate a directed graph with a feedforward neural network?,,1,4,,,,CC BY-SA 4.0 32771,1,32778,,12/15/2021 14:14,,0,45,"

Here is the pseudocode for SARSA (which I took from here)

Why are we choosing more than 1 action in SARSA? One for going into the next state and the other one for updating the Q function?

",48758,,2444,,12/15/2021 15:02,12/15/2021 15:02,Why are we choosing more than 1 action in SARSA?,,1,0,,,,CC BY-SA 4.0 32772,1,32775,,12/15/2021 14:15,,0,33,"

Here is the pseudocode for SARSA (which I took from here)

Do we only select one action at the very beginning and then we always choose the same action for each step? Does it really make sense to choose the same initially chosen action $a$ regardless of the state $s$?

",48758,,2444,,12/15/2021 14:56,12/15/2021 15:02,Are we choosing the same action in every step in SARSA?,,1,0,,,,CC BY-SA 4.0 32773,1,,,12/15/2021 14:16,,1,92,"

Here is the pseudocode for SARSA (which I took from here)

Are the two policies in SARSA for choosing an action equal? I guess yes, because it is called an on-policy learning algorithm. But could I, for example, also use different policies in this SARSA framework? For example an $\epsilon$-greedy policy and a softmax policy. Maybe the resulting algorithm would not be called SARSA anymore but it would be something similar.

",48758,,2444,,12/15/2021 15:03,12/15/2021 17:00,Are the two policies in SARSA for choosing an action the same?,,1,3,,,,CC BY-SA 4.0 32774,2,,32773,12/15/2021 14:26,,1,,"

For learning, it doesn't matter much how you choose the first action before starting the main loop. That is because the agent doesn't need to learn about transitions to the first state of an episode.

The thing that does matter is that the first action choice should cover all possible actions with probability greater than zero, in order to guarantee convergence. Conceptually, this is not much different to using exploring starts.

However, the usual practice is to have just one active policy for SARSA, typically $\epsilon$-greedy based on Q values learned so far. It is worth noting here, that as Q values change during learning - which happens on each time step - then the policy may also change (when the greedy action choice changes). So even when you use the same rules to derive the policy in SARSA, the actual policy used may vary, even in the middle of the loop. In that respect, the SARSA algorithm uses many policies, but typically only one approach to determining the current policy.

If you are using function approximation and also used a very different rule for the policy for the first action, it is possible you could affect the function approximator through sample bias (your training data has different distribution to target data). This is tricky to put a number on in RL, but is usually ignored in off-policy approximation, so should not put you off if you want to try out ideas of using a different first time step policy.

",1847,,1847,,12/15/2021 17:00,12/15/2021 17:00,,,,4,,,,CC BY-SA 4.0 32775,2,,32772,12/15/2021 14:33,,2,,"

Do we only select one action at the very beginning and then we always choose the same action for each step?

No.

The pseudocode is clear on this, by using the word Choose and referencing a policy.

If you were expected to take the same action again, then the pseudocode already has the previous action in variable a, so it would not need to state anything about making a choice or using a policy.

The $a \leftarrow a'$ notation is common way to describe copying values*, so the variable a is changed at the end of each loop.

Does it really make sense to choose the same initially chosen action $a$ regardless of the state $s$?

Not in this case. Some learning algorithms do use a form of "sticky" exploration where a single exploratory action is committed to for multiple time steps. It can be useful in some environments. But not basic SARSA as described in the question.


* It avoids the ambiguity of $=$ as assignment or equality operator. You may also see $:=$ for assignment as an alternative, but for instance Sutton & Barto use $\leftarrow$ consistently, and a lot of RL literature follows this convention.

",1847,,1847,,12/15/2021 15:02,12/15/2021 15:02,,,,0,,,,CC BY-SA 4.0 32776,2,,32770,12/15/2021 14:36,,0,,"

I think you can do this in multiple ways.

The easiest algorithm that comes to my mind right now produces a sparse (which is also some kind of block matrix) $N \times N$ adjacency matrix for a typical MLP/FFN with a total of $N$ neurons (including input and output neurons), where each neuron $n_l^k$ at layer $l$ has a directed edge that goes into all neurons at layer $l+1$.

This is the algorithm.

  1. Create an $N \times N$ matrix $G \in \{0, 1\}^{N \times N}$ with zeros

    • Comment 1: $G_{ij}$ is the element of the matrix at row $i$ and column $j$.

    • Comment 2: Indices $i$ and $j$ start at $1$ and end at $N$

    • Comment 3: if we set $G_{ij} = 1$, then there's a directed edge from neuron $i$ to neuron $j$ (but not necessarily vice-versa: for that to be true, we would also need $G_{ji} = 1$)

    • Comment 4: we need to create a mapping between the indices $i$ and $j$ and the neurons in the neural network; this is done below!

  2. Let $c(l)$ be the number of neurons at layer $l$

  3. For each layer $l = 0, \dots, L - 1$

    • Comment 5: $l = 0$ is the input layer and $L$ is the output layer
    1. For $k=1, \dots, c(l)$
      • Comment 6: for example, $n_l^k = n_2^3$ is the third neuron at the first hidden layer
      1. Let $M = \sum_{h=0}^{l-1} c(h)$
        • Comment 7: $M$ is the number of neurons processed so far in the previous layers (excluding the neurons in the current layer)
      2. $i = k + M$
      3. For $t = 1, \dots, c(l+1)$
        1. $j = t + c(l) + M$
          • $j$ is basically the index of the graph $G$ that corresponds to the neuron $t$ in the next layer $l+1$
        2. Set $G_{ij} = 1$
  4. Return the matrix $G$

The time complexity of this algorithm should roughly be $\mathcal{O}(L \cdot \max_l c(l)^2)$. So, for example, for a neural network with 3 layers, 2 inputs, 5 hidden neurons, and 2 outputs, what would be the number of operations?
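Here is a small Python sketch of the algorithm above (using 0-based indices instead of the 1-based indices of the pseudocode; layer_sizes is just the list $[c(0), \dots, c(L)]$):

import numpy as np

def mlp_adjacency(layer_sizes):
    # Adjacency matrix of a fully-connected feedforward net, following the steps above.
    N = sum(layer_sizes)
    G = np.zeros((N, N), dtype=int)           # step 1
    M = 0                                     # neurons processed in previous layers
    for l in range(len(layer_sizes) - 1):     # step 3: l = 0, ..., L-1
        c_l, c_next = layer_sizes[l], layer_sizes[l + 1]
        for k in range(c_l):                  # neuron k of layer l
            i = k + M
            for t in range(c_next):           # neuron t of layer l+1
                j = t + c_l + M
                G[i, j] = 1                   # directed edge i -> j
        M += c_l
    return G

# The example above: 2 inputs, 5 hidden neurons, 2 outputs.
G = mlp_adjacency([2, 5, 2])
print(G.sum())   # 2*5 + 5*2 = 20 edges set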

",2444,,2444,,12/15/2021 15:27,12/15/2021 15:27,,,,3,,,,CC BY-SA 4.0 32777,1,33842,,12/15/2021 14:36,,0,285,"

I have a question about how to update the Q-function in Q-learning and SARSA. Here (What are the differences between SARSA and Q-learning?) the following updating formulas are given:

Q-Learning

$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma \max_aQ(s',a) - Q(s,a))$$

SARSA

$$Q(s,a) = Q(s,a) + \alpha (R_{t+1} + \gamma Q(s',a') - Q(s,a))$$

How can we calculate the $Q(s', a')$ in both SARSA and Q-learning for updating the Q-function? After having taken an action $a$ at state $s$, we get the reward $r$, which we can observe. But we cannot observe $Q(s',a')$ from the environment as far as I see it.

Can anyone maybe think about a comprehensive example where you can see how it is done (or a link to a website)?

",48758,,2444,,12/20/2021 11:06,12/20/2021 11:34,"How is $Q(s', a')$ calculated in SARSA and Q-Learning?",,1,3,,,,CC BY-SA 4.0 32778,2,,32771,12/15/2021 14:37,,2,,"

Why are we choosing more than 1 action in SARSA?

There is never a state where more than one action is chosen.

The appearance of two Choose statements is an artifact of the loop design and variable management in the pseudocode.

One for going into the next state and the other one for updating the Q function?

Sort of. There is only ever one chosen action for each state.

However, you need to have two actions in scope - the current action (just taken and observed) and planned next action, in order to process the update rule. Hence there are two variables a and a' and the code needs to generate one or other outside the main loop, or have some other way to ensure that it has access to current and next values. This is also why at the end of the loop, the "next" values $s', a'$ get copied to the "current" values $s, a$.

",1847,,1847,,12/15/2021 14:52,12/15/2021 14:52,,,,0,,,,CC BY-SA 4.0 32780,1,32783,,12/15/2021 17:27,,1,334,"

I am interested in training a machine algorithm to convert the lyrics I give into a song by a particular singer.

My language is non-English (a south Indian language). The songs are mostly monophonic (very few instruments, if at all). I have data consisting of a bunch of songs sung by this singer; I want to try new lyrics and imagine how the singer would have sung them.

",49858,,2444,,12/16/2021 23:31,12/16/2021 23:31,How to train an ML model to convert the given lyrics into a song by a particular singer?,,1,0,,,,CC BY-SA 4.0 32782,1,32785,,12/15/2021 21:20,,1,133,"

In most of the reinforcement learning literature, I see that there is a shaded area in the graphs, and I can't understand what exactly it represents.

For example, from the A3C paper:

Or another example from the PPO paper:

Is it for multiple runs, or is it for something else? How can I reproduce such graphs (which library, and what type of data from my training episodes do I need)?

",51564,,2444,,12/15/2021 22:51,12/15/2021 22:51,What is the meaning of the shaded area in the reinforcement learning literature graphs?,,1,0,,,,CC BY-SA 4.0 32783,2,,32780,12/15/2021 21:59,,2,,"

OpenAI used a modified version of VQ-VAE-2 combined with sparse transformers to do something similar to what you want to do. Their approach, called Jukebox, is able to produce music by conditioning on certain styles, lyrics, and artists, which you might explore to do what you want. You can explore here the produced songs. For example, the model was able to produce a hip-hop song that uses the lyrics of Eminem's Lose Yourself, but with a Kanye West's style. You can find the original research paper here and the associated codebase here.

",2444,,,,,12/15/2021 21:59,,,,0,,,,CC BY-SA 4.0 32784,1,32787,,12/15/2021 22:10,,1,261,"

The minimax equation for generative adversarial networks

$$\min_G \max_D V(D,G) = \mathbb{E}_{\boldsymbol{x}\sim p_{data}(\boldsymbol{x})}[\log D(\boldsymbol{x})] + \mathbb{E}_{\boldsymbol{z}\sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log(1 - D(G(\boldsymbol{z}))] $$

Why do we use logarithms instead of just

$$\min_G \max_D V(D,G) = \mathbb{E}_{\boldsymbol{x}\sim p_{data}(\boldsymbol{x})}[ D(\boldsymbol{x})] + \mathbb{E}_{\boldsymbol{z}\sim p_{\boldsymbol{z}}(\boldsymbol{z})}[(1 - D(G(\boldsymbol{z}))] $$

",51566,,2444,,12/15/2021 22:13,12/16/2021 11:02,Why are logarithms used in GANs minimax equation?,,1,0,,,,CC BY-SA 4.0 32785,2,,32782,12/15/2021 22:19,,2,,"

They train the agent multiple times, then plot the mean +- standard deviation of the agent performance (the shaded region is representing the standard deviation).
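To reproduce such a plot, you can, for example, use Matplotlib's fill_between on the per-step mean and standard deviation across runs (the returns below are random placeholders standing in for your logged episode returns):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n_runs, n_steps = 5, 200
steps = np.arange(n_steps)

# Stand-in for the episodic returns of 5 independent training runs.
returns = np.cumsum(rng.normal(0.5, 1.0, size=(n_runs, n_steps)), axis=1)

mean = returns.mean(axis=0)
std = returns.std(axis=0)

plt.plot(steps, mean, label="mean over runs")
plt.fill_between(steps, mean - std, mean + std, alpha=0.3)  # the shaded area
plt.xlabel("training step")
plt.ylabel("return")
plt.legend()
plt.show()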

",47080,,,,,12/15/2021 22:19,,,,0,,,,CC BY-SA 4.0 32786,1,,,12/15/2021 22:52,,1,50,"

For the following minimax equation for generative adversarial networks (GANs),

$$\min_G \max_D V(D,G) = \mathbb{E}_{\boldsymbol{x}\sim p_{data}(\boldsymbol{x})}[\log D(\boldsymbol{x})] + \mathbb{E}_{\boldsymbol{z}\sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log(1 - D(G(\boldsymbol{z}))] $$

Goodfellow mentions in his paper (https://arxiv.org/pdf/1406.2661.pdf, page 3, 1st paragraph under equation 1) that in practice this minimax game is implemented iteratively (algorithm 1 in the paper).

In one of his tutorials (1:31:52 - 1:33:16) he mentions that $\min_G \max_D V(D,G)$ is desirable, whereas $\max_D \min_G V(D,G)$ leads to mode collapse. Iterative gradient descent can "sometimes act" like the former, and sometimes like the latter.

I am confused about how the iterative method can sometimes act like min-max or max-min. What about gradient descent causes the change in behavior?

",51566,,51566,,12/20/2021 16:23,12/20/2021 16:23,"GANs: Why does iterative gradient descent sometimes optimise $\min_G \max_D V(D,G)$ and sometimes $\max_D \min_G V(D,G)$?",,0,0,,,,CC BY-SA 4.0 32787,2,,32784,12/15/2021 23:45,,2,,"

It's common in machine learning to do this log-trick, i.e. rather than optimizing $f(\mathbf{x})$, you optimize $\log f(\mathbf{x})$.

There are 3 main reasons why we (can) do this.

  1. When your objective function is the product of multiple probabilities (or, more generally, small numbers), i.e. $f(\mathbf{x}) = \prod_{i=1}^N p(x_i)$, then $\log f(\mathbf{x}) = \log \left( \prod_{i=1}^N p(x_i) \right) = \sum_{i=1}^N \log p(x_i)$ (see this), which is more numerically stable because we got rid of the multiplication of possibly very small numbers, which can lead to underflow (see the short numerical sketch after this list).

  2. For the same reason, we can also compute derivatives more easily, as we can simply compute the derivatives of each component (because derivatives are linear)

  3. The logarithm is monotonically increasing, so $\log f(\mathbf{x})$ has the same optima as $f(\mathbf{x})$ (simple proof here).
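As a small numerical sketch of reason 1 (the probabilities below are arbitrary placeholders):

import numpy as np

probs = np.full(1000, 1e-4)         # 1000 probabilities, each 1e-4

naive_product = np.prod(probs)      # 1e-4000 underflows to exactly 0.0
log_sum = np.sum(np.log(probs))     # about -9210.34, perfectly representable

print(naive_product)                # 0.0
print(log_sum)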

",2444,,2444,,12/16/2021 11:02,12/16/2021 11:02,,,,0,,,,CC BY-SA 4.0 33787,1,,,12/16/2021 2:54,,0,36,"

Bidirectional RNNs are used for generating the semantic vectors of the text at the sentence level and word level.

In order to train a CNN for classification tasks, images and labels/outputs are generally required.

If the data required depends on the algorithm used, then please consider the algorithm that uses the fewest possible types of data. Suppose we need to perform a classification task on images: then, in general, images with corresponding labels are expected. But there may be algorithms that also work well with fewer labels available. Still, the types of data in both cases are the same: images and labels.

I want to know such bare minimum types of data requirements for a bidirectional RNN. Afaik, a text file is enough for it to get semantic vectors since learning in text processing generally happens based on the distributional hypothesis.

Am I correct? If not, then what should be the other necessary data requirements for a bidirectional RNN to generate semantic embeddings?

",18758,,18758,,12/18/2021 9:24,12/18/2021 9:24,What is all necessary types of data for a bidirectional RNN to learn embeddings?,,0,16,,,,CC BY-SA 4.0 33788,1,,,12/16/2021 3:20,,0,80,"

I'm looking for a machine learning algorithm for clustering points based on their coordinates. Furthermore, I want to take the weight of each point into consideration. Suppose each point has a weight; we then take the sum of the weights of all the points in a cluster. I want the sums in the different clusters to be close and balanced. What algorithm is best for this? Any suggestion?

",51568,,2444,,12/16/2021 11:05,12/16/2021 11:05,"What is the best machine learning algorithm for clustering dots based on coordinates $(x,y)$ with consideration of weight of the points?",,0,9,,,,CC BY-SA 4.0 33790,1,33791,,12/16/2021 7:34,,1,313,"

In MLP, there are neurons that form a layer. Each hidden layer gives a vector of number that is the output of that layer.

In CNN, there are kernels that form a convolutional layer. Each layer gives feature maps that are the output of that layer.

In LSTM, there are cells that form a recurrent layer. Each layer gives a sequence that is the output of that layer.

This is my understanding of the basic terminology regarding MLP, CNN, and LSTM.

But consider the following description regarding the number of layers in LSTM in PyTorch

num_layers – Number of recurrent layers. E.g., setting num_layers=2 would mean stacking two LSTMs together to form a stacked LSTM, with the second LSTM taking in outputs of the first LSTM and computing the final results. Default: 1

The description uses the "number of recurrent layers" and the "LSTM" in a similar manner. How should I understand this? Is it customary to consider a recurrent layer as an LSTM?

",18758,,18758,,12/16/2021 8:08,12/16/2021 8:55,Is a recurrent layer same as LSTM or single-layered LSTM?,,1,0,,,,CC BY-SA 4.0 33791,2,,33790,12/16/2021 8:49,,2,,"

Depending on the context, when people use the term LSTM, they either refer to

  • an LSTM layer,
  • an LSTM unit (like a recurrent unit in an RNN or neuron in an MLP), or
  • an LSTM neural network (i.e. an RNN that uses LSTM units or layers).

In TensorFlow, an LSTM is a layer, so you can stack multiple LSTMs to create deeper architectures.

In PyTorch, the class LSTM can create an LSTM layer or multiple LSTM layers stacked together. You also have an LSTMCell, which should be just one LSTM layer.
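A small sketch of that difference (the sizes below are arbitrary):

import torch
import torch.nn as nn

# A stacked LSTM: two recurrent layers, as with num_layers=2 in the docs quoted above.
lstm = nn.LSTM(input_size=10, hidden_size=20, num_layers=2, batch_first=True)

x = torch.randn(4, 7, 10)              # batch of 4 sequences, 7 time steps, 10 features
out, (h_n, c_n) = lstm(x)
print(out.shape)                       # torch.Size([4, 7, 20])  outputs of the top layer
print(h_n.shape)                       # torch.Size([2, 4, 20])  one hidden state per layer

# A single LSTM layer/cell, processing one time step at a time.
cell = nn.LSTMCell(input_size=10, hidden_size=20)
h, c = torch.zeros(4, 20), torch.zeros(4, 20)
for t in range(x.size(1)):
    h, c = cell(x[:, t], (h, c))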

My answer here should also be useful.

",2444,,2444,,12/16/2021 8:55,12/16/2021 8:55,,,,0,,,,CC BY-SA 4.0 33799,1,,,12/17/2021 0:41,,1,38,"

Consider the following excerpt from a paragraph taken from chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al regarding the advantages of RNN over full traditional MLP.

To go from multilayer networks to recurrent networks, we need to take advantage of one of the early ideas found in machine learning and statistical models of the 1980s: sharing parameters across different parts of a model. Parameter sharing makes it possible to extend and apply the model to examples of different forms(different lengths, here) and generalize across them. If we had separate parameters for each value of the time index, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. Such sharing is particularly important when a specific piece of information can occur at multiple positions within the sequence.

The authors used the phrase "statistical strength". Do they mean the strength of RNN in learning the embeddings of a word based on its context rather than its position in input, if it occurs in several inputs? Or does it mean that RNN uses fewer parameters to generalize in a better way compared to a traditional MLP? Or do they mean something else?

",18758,,18758,,12/23/2021 10:50,12/23/2021 10:50,"What does ""statistical strength"" mean in this context?",,0,0,,,,CC BY-SA 4.0 33801,1,,,12/17/2021 1:29,,1,41,"

Consider the following paragraph taken from chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al mentioning the connections of RNN to go backward in time.

For the simplicity of exposition, we refer to RNNs as operating on a sequence that contains vectors $x^{(t)}$ with the time step index $t$ ranging from 1 to $\tau$. In practice, recurrent networks usually operate on minibatches of such sequences, with a different sequence length $\tau$ for each member of the minibatch. We have omitted the minibatch indices to simplify notation. Moreover, the time step index need not literally refer to the passage of time in the real world. Sometimes it refers only to the position in the sequence. RNNs may also be applied in two dimensions across spatial data such as images, and even when applied to data involving time, the network may have connections that go backward in time, provided that the entire sequence is observed before it is provided to the network.

The paragraph says that the RNN can go back in time if and only if the entire sequence is provided. So, I am suspecting that it happens only during the backpropagation/backward pass.

Am I true? Or is it possible for an RNN to use those connections while forward pass also?

",18758,,,,,12/17/2021 11:46,When does an RNN use the connections that help in going backward in time?,,1,0,,,,CC BY-SA 4.0 33802,1,,,12/17/2021 5:47,,1,46,"

In page 21 here, it states:

General Idea of Amortization: if same inference problem needs to be solved many times, can we parameterize a neural network to solve it?

Our case: for all $x^{(i)}$ we want to solve: $$ \min _{q(z)} \mathrm{KL}\left(q(z) \| p_{\theta}\left(z \mid x^{(i)}\right)\right. $$ Amortized formulation: $$ \min _{\phi} \sum \operatorname{KL}\left(q_{\phi}\left(z \mid x^{(i)}\right) \| p_{\theta}\left(z \mid x^{(i)}\right)\right) $$

One thing I am trying to wrap my mind around is $q(z)$ in the 1st formulation vs $q_{\phi}(z \mid x^{(i)})$ in the second.

Why is one conditioned on $x^{(i)}$ and the other is not? I know that, in the 1st formulation, we are trying to find a different $q(z)$ for each datapoint.

I also recall that in VAEs,which uses amortized inference we consider $q(z)$ to be aggregated posterior, like

$$q_{\phi}(z)=\int q_{\phi}(x, z) d x \quad \text { Marginal of } q_{\phi}(x, z) \text { on } z$$

$$q_{\phi}(x, z) \equiv p_{\mathcal{D}}(x) q_{\phi}(z \mid x)$$

(formulas taken from here)

",46842,,2444,,12/17/2021 10:37,12/17/2021 10:37,"Why do we use $q_{\phi}(z \mid x^{(i)})$ in the objective function of amortized variational inference, while sometimes we use $q(z)$?",,0,0,,,,CC BY-SA 4.0 33803,2,,32718,12/17/2021 9:18,,0,,"

As far as I know, this is always done in separate steps. The reason is the availability of training data. For the recognition task (scene text recognition or handwriting recognition), the available data are monolingual.

A solution would be generating synthetic visual data from parallel corpora. Very recently, there was a paper at EMNLP 2021 that did exactly this and showed high robustness to OCR errors. However, they do not evaluate the model on real-world images and still assume two steps: recognition and translation.

There is also a domain mismatch between what typically appears in handwriting and what the usual machine translation data look like. From the machine translation perspective, this could be best approached as a domain adaptation problem (e.g., using domain-specific back-translated data). This would be quite unwieldy with an end-to-end recognition and translation system.

",33139,,,,,12/17/2021 9:18,,,,0,,,,CC BY-SA 4.0 33804,2,,32679,12/17/2021 9:27,,1,,"

It depends on the type of model you use and the task, you are attempting to solve.

Almost all the preprocessing steps that you mention remove some information from the text. If you think the removed information is irrelevant (stopwords, suffixes, sentence boundaries), you can happily do it and use your favorite architecture. If you think the linguistic information in the text is relevant (e.g., you need to care about negations), it is better not to remove any words and only tokenize the text (perhaps into subwords).

In general, lower-casing and stemming can help with data sparsity. However, if you work with a language for which large pre-trained models exist (and you do not care much about efficiency), it is always better to use a pre-trained BERT-like model. The pre-trained models come with their own preprocessing and tokenization (e.g., WordPiece in the case of BERT), and using a different pre-processing would just break them.
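As an illustration of that last point, here is a minimal sketch using the Hugging Face transformers library (assuming it is installed); the exact subword split is vocabulary-dependent, so the printed output is only indicative:

from transformers import AutoTokenizer

# WordPiece tokenizer that ships with BERT; it also takes care of lower-casing.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

tokens = tokenizer.tokenize("Preprocessing rarely helps pre-trained models.")
print(tokens)       # subword pieces, split according to the model's own vocabulary

ids = tokenizer("Preprocessing rarely helps pre-trained models.")["input_ids"]
print(ids)          # the integer ids actually fed to the model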

",33139,,,,,12/17/2021 9:27,,,,0,,,,CC BY-SA 4.0 33805,1,,,12/17/2021 10:17,,4,116,"

Recurrent neural networks, abbreviated as RNNs, are widely used in deep learning literature, especially for text processing.

Are they related to recursive neural networks in any way?

I am asking for the general/special relationship that enables us to view the one in terms of another if possible.

",18758,,2444,,12/20/2021 14:40,1/14/2023 17:01,Is there any relation between the recursive neural network and recurrent neural network?,,1,3,,,,CC BY-SA 4.0 33807,2,,33801,12/17/2021 11:36,,2,,"

The recurrent connections are also used during the forward pass. Take a look, for example, at the following equations that compute (during the forward pass) the hidden state $h_t$ of an LSTM layer.

\begin{align} f_t &= \sigma_g(W_{f} x_t + \color{red}{U_{f}} h_{t-1} + b_f) \\ i_t &= \sigma_g(W_{i} x_t + \color{red}{U_{i}} h_{t-1} + b_i) \\ o_t &= \sigma_g(W_{o} x_t + \color{red}{U_{o}} h_{t-1} + b_o) \\ \tilde{c}_t &= \sigma_c(W_{c} x_t + \color{red}{U_{c}} h_{t-1} + b_c) \\ c_t &= f_t \circ c_{t-1} + i_t \circ \tilde{c}_t \\ h_t &= o_t \circ \sigma_h(c_t) \end{align}

The letters in $\color{red}{\text{red}}$ represent the matrices associated with the $\color{red}{\text{recurrent connections}}$, so we use them in the forward pass. Of course, before training, these connections are useless, in the sense that, if we initialize them randomly, they do not initially provide any context. However, during the backward pass, we update them based on our (labeled) data, so, over time, they should capture the temporal/recurrent relations of the sequences.
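A minimal NumPy sketch of one such forward step (random weights, purely for illustration), where the $U$ matrices are used exactly as in the equations above:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # One forward step of the equations above; W, U, b are dicts keyed by
    # 'f', 'i', 'o', 'c'. The U matrices are the recurrent connections.
    f = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])
    i = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])
    o = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])
    c = f * c_prev + i * c_tilde
    h = o * np.tanh(c)
    return h, c

x_dim, h_dim = 4, 3
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(h_dim, x_dim)) for k in 'fioc'}
U = {k: rng.normal(size=(h_dim, h_dim)) for k in 'fioc'}
b = {k: np.zeros(h_dim) for k in 'fioc'}

h, c = np.zeros(h_dim), np.zeros(h_dim)
for x_t in rng.normal(size=(5, x_dim)):
    h, c = lstm_step(x_t, h, c, W, U, b)   # the U matrices are used at every forward step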

",2444,,2444,,12/17/2021 11:46,12/17/2021 11:46,,,,0,,,,CC BY-SA 4.0 33811,1,,,12/17/2021 17:10,,1,45,"

I am attempting to train a neural network where I can say the following:

For most inputs, I know the sign of the relationship between that input and several specific outputs. That is, whatever set of values the inputs are set to, I can point to an individual input and say that, so long as the other inputs are unchanged, the output should monotonically increase/decrease as I change this input's value.

This is not to say that the inputs don't interact with each other, they do, but the interaction never changes the sign (i.e. increasing vs decreasing) nature of the monotonic relationships between specific inputs and outputs.

I am wondering how I can use this information to help my network learn fast and/or perform well.

",27076,,18758,,12/17/2021 22:34,12/17/2021 22:34,Can I help my neural network if I know the sign of the relationships between inputs and outputs,,0,4,,,,CC BY-SA 4.0 33814,1,,,12/18/2021 1:33,,1,59,"

Consider the following excerpt paragraph taken from the section titled "Recurrent Neural Networks" of the chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al on the computational graph of some computations.

The recurrent neural network of ..... is universal in the sense that any function computable by a Turing machine can be computed by such a recurrent network of a finite size. The output can be read from the RNN after a number of time steps that is asymptotically linear in the number of time steps used by the Turing machine and asymptotically linear in the length of the input. The functions computable by a Turing machine are discrete, so these results regard exact implementation of the function, not approximations. The RNN, when used as a Turing machine, takes a binary sequence as input, and its outputs must be discretized to provide a binary output. It is possible to compute all functions in this setting using a single specific RNN of finite size. The “input” of the Turing machine is a specification of the function to be computed, so the same network that simulates this Turing machine is sufficient for all problems. The theoretical RNN used for the proof can simulate an unbounded stack by representing its activations and weights with rational numbers of unbounded precision.

This paragraph clearly explains that RNN is capable of computing any computable function exactly and is the same as the Turing machine in terms of capability.

Afaik, MLP is capable of approximating any continuous, bounded function.

So, it seems to me that the RNN is more powerful, in terms of the capability of computing functions, than the MLP. An RNN can learn any function that an MLP can learn, and, in general, an RNN can learn more than can be learned by any MLP.

Am I correct? Or is there any issue in my interpretation or do more details need to be considered?

",18758,,2444,,12/20/2021 11:39,12/20/2021 11:39,Is the capability of RNN more than the capability of MLP?,,0,2,,,,CC BY-SA 4.0 33818,1,33819,,12/18/2021 13:23,,-1,82,"

Many people have heard of Hinton, Bengio, and LeCun in recent years, given the popularity of deep learning and neural networks, and their contributions to this subfield of Artificial Intelligence. For their contributions, they have conjointly received the Turing Award in 2019 (although, in my view, a few other people could also have received this award for the same reasons).

In addition to them, which computer scientists have received the Turing Award specifically for their contributions to Artificial Intelligence?

For each of them, please, describe the specific reason why they were awarded and/or provide the links to the official site that announces this or the Turing lecture.

Why am I asking this question? Alan Turing is considered one of the fathers, if not the father, of Artificial Intelligence and Computer Science. In particular, in addition to the development of Turing machines, which are widely studied in Theoretical Computer Science and Theory of Computation, he's also published the famous paper Computing Machinery and Intelligence in 1950, where he proposed what was later called the Turing test, and asked one of the most fundamental questions in AI: "Can machines think?". The Turing Award is given to people that make significant contributions to CS or AI, so I think we should remember all these people that have contributed to AI.

",2444,,,,,12/18/2021 13:30,Which computer scientists have received the Turing Award specifically for their contributions to Artificial Intelligence?,,1,2,,,,CC BY-SA 4.0 33819,2,,33818,12/18/2021 13:23,,0,,"

The descriptions below are copied from Wikipedia or the ACM site.

| Name | Year | Description | Link |
| --- | --- | --- | --- |
| Marvin Minsky | 1969 | For his central role in creating, shaping, promoting, and advancing the field of artificial intelligence | [1] |
| John McCarthy | 1971 | McCarthy's lecture "The Present State of Research on Artificial Intelligence" is a topic that covers the area in which he has achieved considerable recognition for his work | [2] |
| Allen Newell | 1975 | In joint scientific efforts extending over twenty years, initially in collaboration with J. C. Shaw at the RAND Corporation, and subsequently with numerous faculty and student colleagues at Carnegie Mellon University, they have made basic contributions to artificial intelligence, the psychology of human cognition, and list processing | [3] |
| Herbert A. Simon | Idem | Idem | [3] |
| Edward Feigenbaum | 1994 | For pioneering the design and construction of large scale artificial intelligence systems, demonstrating the practical importance and potential commercial impact of artificial intelligence technology | [4] |
| Raj Reddy | Idem | Idem | [4] |
| Judea Pearl | 2011 | For fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning. | [5] |
",2444,,2444,,12/18/2021 13:30,12/18/2021 13:30,,,,0,,,,CC BY-SA 4.0 33820,2,,3287,12/18/2021 16:19,,0,,"

Given that you were also asking for a reference that describes in detail these operations, you should take a look at the paper A guide to convolution arithmetic for deep learning (2018), which describes in detail the arithmetic of many convolution operations used in convolutional neural networks. There's also the associated repo with the animations of the different operations.

",2444,,,,,12/18/2021 16:19,,,,0,,,,CC BY-SA 4.0 33821,1,,,12/18/2021 17:07,,1,50,"

I have an unusual but very interesting problem. I have a game that is very similar to Toon Blast (a puzzle mobile game). It's based on a Match-2 mechanic in which you can destroy 2 or more connected blocks and your goal is to complete all the required objectives (collect X color blocks, destroy 30 balloons, etc).

I have tons of levels and the ML solver seems to perform very well for all kinds of obstacles - except Quicksand.

Quicksand is a special object that replicates itself to a nearby tile whenever the user makes a move. If the user destroys a quicksand tile during their turn, no quicksand is replicated. So basically the fastest way to destroy quicksand is to make sure you destroy as much quicksand as you can on each turn so it won't replicate and cover your board.

I use ML-Agents from Unity (https://github.com/Unity-Technologies/ml-agents) and I just give the agent reward=1f whenever it completes an objective (destroy 1 obstacle) and I subtract 1f from reward whenever it performs a move.

For simpler non-replicating obstacles it works perfectly. For example, you click 2 blocks next to a balloon - it will pop a balloon, add 1f as a reward and at the same time remove 1f for using a move.

This way the agent learns to make as few moves as possible.

Below you'll find how the Quicksand works and some simple obstacle - Balloon.

Quicksand preview (sorry for bad quality, 2mb max size)

Balloon preview

My issue is that, no matter what, I can't teach it to solve quicksand. With the above rewarding approach, I think this strange creature learned that by actually REPLICATING quicksand it gains more reward, because it can destroy it later (actually it doesn't, because moves give -1f, so it's more or less equal).

I've tried not giving a reward for the quicksand, so it only loses reward by using moves, but that doesn't work either, and I'm not sure why.

Do you have any idea how this kind of thing should be taught?

",3617,,2444,,12/20/2021 8:46,12/20/2021 8:46,How to teach Machine Learning Agent to destroy replicating objects in a puzzle game?,,0,2,,,,CC BY-SA 4.0 33824,1,33829,,12/18/2021 22:33,,7,648,"

Implementations of variational autoencoders that I've looked at all include a sampling layer as the last layer of the encoder block. The encoder learns to generate a mean and standard deviation for each input, and samples from it to get the input's representation in latent space. The decoder then attempts to decode this back out to match the inputs.

My question: How does backpropagation handle the random sampling step?

Random sampling is not a deterministic function and doesn't have a derivative. In order to train the encoder, gradient updates must somehow propagate back from the loss, through the sampling layer.

I did my best to hunt for the source code for tensorflow's autodifferentiation of this function, but couldn't find it. Here's an example of a keras implementation of that sampling step, from the keras docs, in which tf.keras.backend.random_normal is used for the sampling.

import tensorflow as tf
from tensorflow.keras import layers


class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
",51646,,,,,6/4/2022 15:08,How does backprop work through the random sampling layer in a variational autoencoder?,,3,1,,,,CC BY-SA 4.0 33825,2,,32436,12/18/2021 22:38,,0,,"

Far from expert, but at least I can shed some light.

Your dataset is simply too small. Fine-tuning means you "interfere" with what the GPT-J model sees as important for the continuation of a prompt, because that is what the model does: it continues a prompt in a way that logically makes sense given what it has seen. Since your dataset is very small, and especially since the sentences are not even full sentences, things go south.

For one, the create_tf_records.py script (that I presume you used) already filters OUT all sequences that are not long enough (I'm not sure, but this could be why you end up with 2 sequences?). The model now has 500 sentences (and maybe even just 2) that it sees as the MOST IMPORTANT data to use for its continuation (I think this is where the term "fine-tuning" comes from: not that it's a small influence on the data, but that it requires "fine" precision 😂). I had to find this out myself the hard way - there is very limited documentation - and I'm still not 100% certain, but I'm quite sure about this part: you trained the model into a brain-dead zombie that only slightly remembers all of the knowledge that is stored deeply embedded behind your dataset.

Now, this is far from a complete or fully correct explanation, but I hope it gives you some understanding. If you come to understand it more or better, please don't forget to let me know :)

What you probably could do to actually get somewhere: create a script that, based on the sentences you want to "inject", creates more data. So, if your happy/positive sentence set is what you want to achieve, then create context around those sentences and spin them - many times, combined in all kinds of ways. I would even suggest adding in some stuff like the binary contents of an image (look at CLIP, or the v-jax implementation of it by kingoflolz) if you know how to. You could take some chat generated by another model / chatbot - like GPT-2 or Clara - and just append your sentence at the end. Translate your sentences (deepl is recommended at the moment, I think) and mix those in. Pose a question before your sentences, e.g. "what would be a good way to cheer someone up or let them know you appreciate them? [insert item from dataset]", and spin those questions. Try to keep repeating yourself to a minimum (different combinations are fine, small deviations also, but they won't teach the model that much more). Pasting entire news articles as though they're being quoted in a chat conversation and then putting one of your sentences at the end will also work.

Those are my findings so far. I'm available to discuss all of this further; I'm also still figuring this thing out.

",51647,,,,,12/18/2021 22:38,,,,1,,,,CC BY-SA 4.0 33826,2,,5769,12/18/2021 23:13,,0,,"

Other answers claim that you have a different 2-dimensional kernel (i.e. a matrix) for each channel. This is not wrong, but it's just a conceptual interpretation of the 2d convolution, which emphasizes that different channels may provide different information during the convolution: this is the only advantage that I see of this interpretation!

Another interpretation (which I think has more advantages, as I will explain below) is to think of a kernel as a 3d multi-dimensional array that has the same depth as the input. So, for example, if you have an input (an image or feature map) of shape $H \times W \times D$, then a single kernel needs to have the shape $K \times L \times D$, where $D$ is the depth of both the input and kernel. If you have more than one kernel, they will all have the depth $D$. This interpretation is consistent with the excerpt that you are quoting.

This second interpretation is conceptually useful when you need to deal with 3d convolutions: if you follow this interpretation, a 3d convolution is just a convolution where kernels don't necessarily have the same depth as the input (and that would be the only difference between 2d and 3d convolutions). The other advantage of this interpretation is that your confusion would not have arisen in the first place: your confusion arose because you thought that kernels are 2d arrays. Another advantage is that, if you have $N > 1$ kernels (which is the usual case), then you know immediately that $N$ is the depth of the output volume and you don't need to think about multiple channels of the input volume: you can simply think in terms of multi-dimensional arrays.

So, to clarify, in case it wasn't already super clear: no, you don't apply the same 2d matrix to multiple channels of the input!
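To make the shapes concrete, here is a small sketch in TensorFlow/Keras (the sizes below are arbitrary placeholders, not something from your setup): an input of depth $D=3$ convolved with $N=8$ kernels, where each kernel is a 3d array of shape $5 \times 5 \times 3$.

import tensorflow as tf

x = tf.random.normal((1, 32, 32, 3))                      # (batch, H, W, D), with depth D = 3
conv = tf.keras.layers.Conv2D(filters=8, kernel_size=5)   # N = 8 kernels of size 5 x 5 x D
y = conv(x)

print(conv.kernel.shape)  # (5, 5, 3, 8): each of the 8 kernels has depth D = 3
print(y.shape)            # (1, 28, 28, 8): the output depth equals the number of kernels N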

",2444,,,,,12/18/2021 23:13,,,,0,,,,CC BY-SA 4.0 33828,2,,33824,12/19/2021 8:11,,2,,"

As already mentioned in the comment, the reason why backpropagation still works is the reparametrization trick.

In a variational autoencoder (VAE), the neural networks to be learned predict the parameters of a random distribution - the mean $\mu_{\theta} (x)$ and the standard deviation $\sigma_{\phi} (x)$ in the case of a normal distribution. Here $\theta$ and $\phi$ are parameters that can be learned during backpropagation.

At each sampling step, one generates a random number $\varepsilon$ from the normal distribution with zero mean and unit variance, $\mathcal{N}(0, 1)$, and then multiplies this number by $\sigma_{\phi} (x)$ and adds $\mu_{\theta} (x)$: $$ \mu_{\theta} (x) + \varepsilon \sigma_{\phi} (x) $$ The output of this operation has the normal distribution $\mathcal{N}(\mu_{\theta} (x), \sigma_{\phi} (x))$. You do not have to backpropagate through $\varepsilon$; this number is just multiplied by $\sigma_{\phi} (x)$.

The whole procedure is nicely differentiable with respect to $\theta$ and $\phi$.
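In code, the trick looks roughly like this (a minimal PyTorch sketch; the shapes and the stand-in loss are hypothetical, and in a real VAE mu and log_var would be produced by the encoder):

import torch

# mu and log_var stand in for the encoder outputs; shapes are placeholders
mu = torch.zeros(4, 2, requires_grad=True)
log_var = torch.zeros(4, 2, requires_grad=True)

eps = torch.randn_like(mu)                  # sample from N(0, 1); no gradient flows through it
z = mu + torch.exp(0.5 * log_var) * eps     # z ~ N(mu, sigma^2), differentiable w.r.t. mu and log_var

loss = z.pow(2).sum()                       # stand-in for the decoder/reconstruction loss
loss.backward()
print(mu.grad.shape, log_var.grad.shape)    # gradients reach the distribution parameters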

However, note that this trick is possible thanks to a nice property of the normal distribution: $$ \mathcal{N}(a, b) \sim a + b \mathcal{N}(0, 1) $$ An arbitrary distribution doesn't satisfy this property.

In the more general case, you would need to apply some other trick to propagate the gradients.

A simple strategy is the straight-through estimator, in which one copies the gradient from the output to the input through the non-differentiable operation. A more accurate alternative is the Gumbel-Softmax estimator.
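Here is a minimal sketch of the straight-through idea in PyTorch (the binarization below is just a hypothetical non-differentiable operation):

import torch

x = torch.randn(5, requires_grad=True)
x_hard = (x > 0).float()                    # non-differentiable binarization (forward value)
x_st = x + (x_hard - x).detach()            # forward: x_hard; backward: the gradient is copied through
x_st.sum().backward()
print(x.grad)                               # all ones: the gradient passed straight through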

",38846,,,,,12/19/2021 8:11,,,,0,,,,CC BY-SA 4.0 33829,2,,33824,12/19/2021 9:30,,3,,"

You do not backpropagate with respect to $\epsilon$, which is the random sample or random variable (depending on how you look at it). You backpropagate with respect to the mean $\mu$ and variance $\sigma$ of the latent Gaussian (the variational distribution). Note that, although $z$ is a random sample (and not just a sample), because it's computed as a function of $\epsilon$ (a random sample, once it's sampled from e.g. $\mathcal{N}(0, 1)$), $\mu$ and $\sigma$ are not: these are learnable parameters and are deterministic.

Having said this, note that we use the reparametrization trick in the VAE, so we compute the random sample as $z = g_\phi(\epsilon, x)$, where $g_\phi$ is a deterministic function (encoder) parametrized by $\phi$ (the weights of the neural network that represents the encoder). If the random variable $z \sim \mathcal{N}(\mu, \sigma^2)$, then we can express the random variable (so also the random sample) as follows $z=g_\phi(\epsilon, x) = \mu+\sigma \epsilon$. So, as you can see from the code, we sample $\epsilon$ from some prior $p(\epsilon)$ (e.g. $\mathcal{N}(0, 1)$), then we compute $z$ deterministically.

Why is this reparametrization trick useful? The authors of the VAE paper explain it.

This reparameterization is useful for our case since it can be used to rewrite an expectation w.r.t $q_{\phi}(\mathbf{z} \mid \mathbf{x})$ such that the Monte Carlo estimate of the expectation is differentiable w.r.t. $\phi$. A proof is as follows. Given the deterministic mapping $\mathbf{z}=g_{\phi}(\boldsymbol{\epsilon}, \mathbf{x})$ we know that $q_{\phi}(\mathbf{z} \mid \mathbf{x}) \prod_{i} d z_{i}=$ $p(\boldsymbol{\epsilon}) \prod_{i} d \epsilon_{i}$. Therefore $^{1}, \int q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x}) f(\mathbf{z}) d \mathbf{z}=\int p(\boldsymbol{\epsilon}) f(\mathbf{z}) d \boldsymbol{\epsilon}=\int p(\boldsymbol{\epsilon}) f\left(g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon}, \mathbf{x})\right) d \boldsymbol{\epsilon}$. It follows that a differentiable estimator can be constructed: $\int q_{\phi}(\mathbf{z} \mid \mathbf{x}) f(\mathbf{z}) d \mathbf{z} \simeq \frac{1}{L} \sum_{l=1}^{L} f\left(g_{\phi}\left(\mathbf{x}, \boldsymbol{\epsilon}^{(l)}\right)\right)$ where $\boldsymbol{\epsilon}^{(l)} \sim p(\boldsymbol{\epsilon}).$

Note that $\mathbb{E}_{q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})} \left[ f(\mathbf{z}) \right] = \int q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x}) f(\mathbf{z}) d \mathbf{z} = \int p(\boldsymbol{\epsilon}) f\left(g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon}, \mathbf{x})\right) d \boldsymbol{\epsilon} = \mathbb{E}_{p(\boldsymbol{\epsilon})} \left[ f\left(g_{\boldsymbol{\phi}}(\boldsymbol{\epsilon}, \mathbf{x})\right) \right]$, which can be estimated with $\frac{1}{L} \sum_{l=1}^{L} f\left(g_{\phi}\left(\mathbf{x}, \boldsymbol{\epsilon}^{(l)}\right)\right)$ where $\boldsymbol{\epsilon}^{(l)} \sim p(\boldsymbol{\epsilon})$. In other words, you can sample $L$ values of $z$ in the way we did in order to estimate $\mathbb{E}_{\color{blue}{q_{\boldsymbol{\phi}}(\mathbf{z} \mid \mathbf{x})}} \left[ \color{red}{f(\mathbf{z})} \right]$ (these are Monte Carlo estimates of the expectation).

In the case of the VAE, we want to optimize the ELBO, which is the following objective function

$$\mathcal{L}\left(\boldsymbol{\theta}, \boldsymbol{\phi} ; \mathbf{x}^{(i)}\right)=\underbrace{-D_{K L}\left(q_{\boldsymbol{\phi}}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) \| p_{\boldsymbol{\theta}}(\mathbf{z})\right)}_{\text{KL divergence}}+ \underbrace{\mathbb{E}_{\color{blue}{q_{\phi}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right)}}\left[\color{red}{\log p_{\boldsymbol{\theta}}\left(\mathbf{x}^{(i)} \mid \mathbf{z}\right)}\right]}_{\text{likelihood}}$$

which we can estimate with Monte Carlo estimates of the second term (the likelihood term)

$$ \widetilde{\mathcal{L}}^{B}\left(\boldsymbol{\theta}, \boldsymbol{\phi} ; \mathbf{x}^{(i)}\right)=-D_{K L}\left(q_{\boldsymbol{\phi}}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) \| p_{\boldsymbol{\theta}}(\mathbf{z})\right)+ \underbrace{\frac{1}{L} \sum_{l=1}^{L}\left(\log p_{\boldsymbol{\theta}}\left(\mathbf{x}^{(i)} \mid \mathbf{z}^{(i, l)}\right)\right)}_{\text{likelihood}} $$ where $$ \text { where } \quad \mathbf{z}^{(i, l)}=g_{\phi}\left(\boldsymbol{\epsilon}^{(i, l)}, \mathbf{x}^{(i)}\right) \text { and } \boldsymbol{\epsilon}^{(l)} \sim p(\boldsymbol{\epsilon}) $$

Here, the likelihood is computed with the neural network: in practice, for example, you use the cross-entropy computed on the output of your decoder, which takes $z$ as input.

You can ignore the KL divergence now because, in the case of Gaussians, it can be computed analytically.

Now, what if we didn't use the reparametrization trick? Could we still backpropagate with respect to $\phi$? The authors of the VAE write

The usual (naïve) Monte Carlo gradient estimator for this type of problem is: $\nabla_{\phi} \mathbb{E}_{q_{\phi}(\mathbf{z})}[f(\mathbf{z})]=\mathbb{E}_{q_{\phi}(\mathbf{z})}\left[f(\mathbf{z}) \nabla_{q_{\phi}(\mathbf{z})} \log q_{\phi}(\mathbf{z})\right] \simeq \frac{1}{L} \sum_{l=1}^{L} f(\mathbf{z}) \nabla_{q_{\phi}\left(\mathbf{z}^{(l)}\right)} \log q_{\phi}\left(\mathbf{z}^{(l)}\right)$ where $\mathbf{z}^{(l)} \sim q_{\phi}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) .$ This gradient estimator exhibits exhibits very high variance (see e.g. [BJP12])

So, the answer is yes, as opposed to what many people might say, but with another estimator (which may remind you of some equations you have seen in reinforcement learning related to REINFORCE, if you are familiar with this), i.e. $$\frac{1}{L} \sum_{l=1}^{L} f(\mathbf{z}) \nabla_{q_{\phi}\left(\mathbf{z}^{(l)}\right)} \log q_{\phi}\left(\mathbf{z}^{(l)}\right) \tag{1}\label{1},$$ which has high variance.

So, in the end, the reparametrization trick can be viewed as a variance reduction technique. There are others, like control variates or Flipout (used e.g. in the context of Bayesian neural networks).

The first thing to note about \ref{1} is that we do not need to take the derivative with respect to $f$. The second thing is that $\nabla_{q_{\phi}\left(\mathbf{z}^{(l)}\right)} \log q_{\phi}\left(\mathbf{z}^{(l)}\right)$ is the score function.

Now, don't ask me how to calculate this gradient $\nabla_{q_{\phi}\left(\mathbf{z}^{(l)}\right)} \log q_{\phi}\left(\mathbf{z}^{(l)}\right)$ (because I am bad at math). However, note that $ \mathbf{z}^{(l)} \sim q_{\phi}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right) $, so we treat $\mathbf{z}^{(l)}$ as a fixed random sample. By the way, I am not sure whether they used $q_{\phi}\left(\mathbf{z} \mid \mathbf{x}^{(i)}\right)$ differently from $q_{\phi}\left(\mathbf{z} \right)$. I don't think so. I think they are the same and I think we can still back-propagate with respect to $\phi$, even if we use it to sample $\mathbf{z}^{(i)}$, because, once this latter is sampled, it can be treated as fixed (random) sample.

I also note that I think they meant $\frac{1}{L} \sum_{l=1}^{L} f(\mathbf{z}^{(l)}) \nabla_{q_{\phi}\left(\mathbf{z}^{(l)}\right)} \log q_{\phi}\left(\mathbf{z}^{(l)}\right)$, i.e. they forgot to use $\mathbf{z}^{(l)}$ rather than just $\mathbf{z}$ as input for $f$ in the Monte Carlo estimate. You can see in section 3 of [BJP12] and section 4.2 of [1] that they do it like this, and it makes sense. So, the VAE paper has sloppy stuff in there.

My intuition for why this has high variance is that you sample something according to the variational distribution, but then you update this variational distribution, and you continuously do this. However, I am not sure this is the right intuition. In [BJP12], they say (in section 4) that this MC estimate has high variance and that you need a lot of samples $\mathbf{x}$. I don't know exactly why this is the case because I haven't fully read this paper yet.

",2444,,2444,,6/4/2022 15:08,6/4/2022 15:08,,,,1,,,,CC BY-SA 4.0 33831,1,,,12/19/2021 13:30,,0,65,"

This topic has been introduced in "Pattern Recognition and Machine Learning, Bishop, 2006", section 5.4.1. I am a bit confused about this method and I have two questions.

  1. Why has this method attracted attention or been developed? First, we want to compute the Hessian fast, so we try to approximate it in O(W) time, where W is the number of parameters. But then we see that, most of the time, this matrix is heavily non-diagonal.

  2. My second question is: how can we know whether we can approximate the Hessian in this way or not? Is there a hint/clue in the problem?

Thanks in advance!

",51660,,51660,,12/19/2021 13:49,12/19/2021 13:49,Why is there a Hessian diagonal approximation? And when can we use it?,,0,3,,,,CC BY-SA 4.0 33832,2,,32426,12/19/2021 17:09,,0,,"

Yes, the reparametrization trick can be useful in the context of variational Bayesian neural networks, although other more effective variance reduction techniques are more commonly used (in particular, the flipout estimator). See this implementation of BNNs that uses Flipout, but TensorFlow Probability, the library used to implement that example, also provides layers that implement the reparametrization trick.

Note that the reparametrization trick is used in the context of variational auto-encoders (VAEs) (so not in the context of deterministic auto-encoders). VAEs and BNNs have a lot in common: both are based on stochastic variational inference (i.e. variational inference combined with stochastic gradient descent). So, whenever you have some sampling or some stochastic operation, the reparametrization trick could turn out to be useful. However, right now, I am only familiar with these two types of models that use it.

",2444,,,,,12/19/2021 17:09,,,,0,,,,CC BY-SA 4.0 33833,2,,16740,12/19/2021 17:17,,2,,"

I also walked into that trap the first few times. The difference is the following:

  • $N$ is the number of expanded nodes
  • $b^*$ is the effective branching factor
    • $b^*$ depends on the depth $d$ of the goal and the number of generated nodes, let's call that $M$
    • $b^*$ is the solution to $M+1=1+b^*+(b^*)^2+(b^*)^3+...+(b^*)^d$

So, you could argue that instead of comparing $b_1^*$ and $b_2^*$ of two algorithms, you can also directly compare $M_1$ and $M_2$, because $b_1^*>b_2^*\Leftrightarrow M_1>M_2$.

But you can imagine an algorithm $A_2$ that expands fewer nodes than $A_1$ (so $N_1>N_2$), but also different nodes so that it generates more nodes (so $M_1<M_2$). Since the cost is defined by the number of generated nodes, comparing $N$ might give the wrong result.

The effective branching factor is more general than the number of generated nodes, because you can average $b^*$ for one algorithm over many search problems, but averaging over the number of nodes (which might differ greatly) is not possible or rather nonsensical.
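If it helps, here is a small sketch in Python that numerically solves $M+1=1+b^*+(b^*)^2+\dots+(b^*)^d$ for $b^*$ by bisection (the values of $M$ and $d$ below are just placeholders):

def effective_branching_factor(M, d, tol=1e-6):
    # total number of nodes of a uniform tree of depth d with branching factor b
    def total(b):
        return sum(b ** i for i in range(d + 1))
    lo, hi = 1.0, float(M)          # b* lies somewhere between 1 and M
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if total(mid) < M + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(effective_branching_factor(M=52, d=5))    # roughly 1.92 for 52 generated nodes at depth 5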

",51664,,,,,12/19/2021 17:17,,,,3,,,,CC BY-SA 4.0 33838,1,,,12/19/2021 22:35,,1,48,"

In Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes, Jürgen Schmidhuber describes the idea of using compression progress on perceived data as an intrinsic reward signal:

[Artists] create action sequences yielding interesting inputs, where interestingness is a measure of learning progress, for example [...] the saved number of bits needed to encode the data.

Has such a (purely generative) system been created before? Say, one that creates 2d images?

",42922,,2444,,12/20/2021 9:06,12/20/2021 9:06,Generative systems based on Schmidhuber's compression framework,,0,4,,,,CC BY-SA 4.0 33840,1,,,12/20/2021 5:48,,0,29,"

Is there a deep learning architecture where outputs of the same model with two different inputs are used for error calculation (backpropagation)?

Workflow:

Input1 -----> Model ------> Output1

Input2 -----> Model ------> Output2

Loss = criterion(Output1, Output2)

Backpropagation(Loss)

",48019,,18758,,12/20/2021 7:27,12/21/2021 9:02,Deep Learning Architecture where outputs from two different inputs are used for error calculation,,1,1,,,,CC BY-SA 4.0 33841,1,,,12/20/2021 8:22,,1,33,"

An RNN has the same capability as a universal Turing machine, but I am confused about whether an RNN retains this capability if we use teacher forcing.

Consider the following excerpts from paragraphs taken from the section titled "Teacher Forcing and Networks with Output Recurrence" of the chapter 10: Sequence Modeling: Recurrent and Recursive Nets of the textbook named Deep Learning by Ian Goodfellow et al.

The network with recurrent connections only from the output at one time step to the hidden units at the next time step is strictly less powerful because it lacks hidden-to-hidden recurrent connections. For example, it cannot simulate a universal Turing machine. Because this network lacks hidden-to-hidden recurrence, it requires that the output units capture all the information about the past that the network will use to predict the future....... Models that have recurrent connections from their outputs leading back into the model may be trained with teacher forcing.

The quoted portion says that RNNs in which recurrent connections exist only from the output at one time step to the hidden units at the next time step are less powerful and not as capable as a universal Turing machine, and that those networks can be trained with teacher forcing. Even if they are not trained using teacher forcing, they are not as capable as a universal Turing machine. But I want clarity on the relation between the capability of an RNN trained using teacher forcing and the capability of a universal Turing machine.

Is it true that if an RNN is trained with teacher forcing then it cannot simulate a universal Turing machine?

",18758,,2444,,12/20/2021 11:38,12/20/2021 11:38,Can teacher forcing in RNN ensure Turing completeness?,,0,0,,,,CC BY-SA 4.0 33842,2,,32777,12/20/2021 10:53,,1,,"

It seems that your problem is that you think that we must know the true value of $Q(s', a')$ in order to perform the SARSA update. This is not the case! SARSA is a reinforcement learning algorithm, not a supervised learning algorithm (although you can also view RL as a form of SL).

If you are familiar with supervised learning (SL), then you know that, to train a model, you need the ground-truth labels. The typical SL example is that of binary classification of dogs and cats. So, you are given an image of a dog or cat $x$, you pass it to your neural network $f$, which produces a prediction $\hat{y} = f(x)$. Now, if $x$ is a dog but $\hat{y}$ is cat, the neural network $f$ made a mistake. So, we need to change the weights of this model so that $\hat{y} = \text{dog}$ when $x$ is an image of a dog (of course, this reasoning also applies to the case when $x$ is an image of a cat). A typical way to solve this problem in SL is to use a loss function that computes some notion of distance between $\hat{y}$ (the prediction) and $y$ (the true label). The usual loss function, in this case, is the binary cross-entropy, but you don't need to know the details now.

In reinforcement learning, you don't really have ground-truth labels, but you have experience, which is just the tuples $\langle s_t, a_t, r_{t+1}, s_{t+1} \rangle $, where

  • $s_t$ is the state of the agent/environment at time step $t$
  • $a_t$ is the action that the agent takes at time step $t$ in state $s_t$
  • $r_{t+1}$ is the reward the agent receives after having taken action $a_t$ in $s_t$; this reward indicates how good that action is, but it doesn't tell whether you took the correct/optimal (or ground-truth) action or not (this is the main difference between reinforcement learning and supervised learning!)
  • $s_{t+1}$ is the state the agent ends up in after having taken $a_t$ in $s_t$.

Now, in reinforcement learning, there are many problems that you may want to solve. However, the main goal of an RL agent is to maximize expected reward in the long run (known also as expected return), so you could say that your objective function is $$\mathbb{E} \left[ \sum_{t=0}^\infty R_t \right],$$ where $G = \sum_{t=0}^\infty R_t$ is the so-called return (i.e. the cumulative reward or reward in the long run). The goal is to maximize this expectation.

In practice, what you do is ESTIMATE a so-called (state-action or just action) value function. In the case of SARSA, it's defined as $q: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, where $\mathcal{S}$ and $\mathcal{A}$ are respectively the set of states and actions of the environment (aka MDP).

Why do you want to estimate a value function? In the case of $q$ (so SARSA and Q-learning), $q(s, a)$, for some $s \in \mathcal{S}$ and $a \in \mathcal{A}$, is defined as the expected cumulative reward that you will get from taking action $a$ in the state $s$. So, if you know that you will get more reward by taking action $a_1$ rather than action $a_2$ in state $s$, then $q(s, a_1) > q(s, a_2)$, so you will take action $a_1$ when in state $s$. In fact, you can also define $q(s, a)$ as follows $q(s, a) = \mathbb{E}\left[ G \mid s, a \right]$, where $G$ is our cumulative reward, aka return (for simplicity, I ignore a few details).

HOWEVER, we do not (usually) know $q$. That's why we need Q-learning and SARSA, i.e. to estimate the state-action value function. So, in SARSA, you know $s'$ and $a'$ (read the pseudocode!), but we do not know the true value of $q(s', a')$. So, you say, but then why do we use it in the update of SARSA?

The reason is: initially, SARSA uses possibly wrong estimates of $q$ to learn $q$ itself. We denote these estimates with the capital letter $Q$. So, we don't know the true value of $a'$ in $s'$. Or, more precisely, at the beginning of SARSA, if $Q$ is implemented as a 2d array (or matrix), then $Q[s', a']$ is not a good estimate of the true value of $a'$ in state $s'$, i.e. $q(s', a')$: at that point, $Q[s', a']$ may be far from $q(s', a')$.

Now, you ask: why can we use a possibly wrong estimate, $Q(s', a')$, to compute $Q(s, a)$ (another estimate)? The idea of using possibly wrong estimates of the state-value function to update other estimates of the value function is present in all temporal-difference algorithms (including Q-learning): this is called bootstrapping. However, the specific reason why tabular SARSA converges to the true estimates is a different (although related) story (more info here).
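To make the bootstrapping idea concrete, here is a minimal sketch of the tabular SARSA update in Python (the environment sizes, the learning rate and the discount factor below are just hypothetical placeholders):

import numpy as np

n_states, n_actions = 10, 4
alpha, gamma = 0.1, 0.99
Q = np.zeros((n_states, n_actions))    # initial estimates of q; wrong at the beginning

def sarsa_update(s, a, r, s_next, a_next):
    # The possibly wrong estimate Q[s_next, a_next] is used as part of the target
    # (bootstrapping) to improve the estimate Q[s, a].
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])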

Now, if you didn't understand this answer, then you really need to pick up a book and read it carefully from the beginning. It takes time to understand RL at the beginning, but then it becomes easy. The most common textbook for RL is Reinforcement Learning: An Introduction by Sutton and Barto. You can find other books here.

",2444,,2444,,12/20/2021 11:34,12/20/2021 11:34,,,,4,,,,CC BY-SA 4.0 33843,2,,11243,12/20/2021 13:51,,0,,"

(This answer is based on info that you can find in the paper $\varepsilon $-MDPs: Learning in Varying Environments, 2002, by István Szita et al. and [Szepesvári and Littman(1996)], the paper that proposed generalised MDPs. I just adapted the notation to be more consistent with Sutton & Barto's book and provided additional info and links).

What is an MDP?

Let's first start with the usual definition of an MDP.

An MDP is defined by the tuple $\langle \mathcal{S}, \mathcal{A}, R, \color{blue}{P} , \gamma \rangle$, where

  • $\mathcal{S}$ is the set of states
  • $\mathcal{A}$ is the set of actions
  • $R: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}$ is the reward function; so $R(s, a, s')$ is the reward for taking action $a \in \mathcal{A}$ in state $s \in \mathcal{S}$ and ending up in $s' \in \mathcal{S}$.
  • $\color{blue}{P}: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0, 1]$ is the transition model; so $\color{blue}{P}(s, a, s')$ is the probability of arriving in state $s'$ after having taken action $a$ in the state $s$. You could also define the transition model to include the reward, denoted by $\color{blue}{P}(s', r \mid s, a)$ (to emphasize that it's a conditional probability distribution), but let's ignore this for now.
  • $0 \leq \gamma < 1$ is a discount factor, which can be useful for infinite-horizon problems

Given this formulation of a decision problem, the goal is to maximize the expected reward. The objective function can be defined as follows $$ \mathbb{E}\left[ \sum_{t=0}^{\infty} \gamma^{t} R_{t} \right] = \mathbb{E}\left[ G_{t} \right], $$ where $R_{t}$ is the reward that we get at time step $t$.

There are multiple ways to solve this problem. The most common is probably Q-learning.

Q-learning

In Q-learning, we try to estimate the state-action (or action) value function. The optimal value function is defined as follows

$$ Q^{*}(s, a)=\sum_{s'} \color{blue}{P}(s, a, s')\left(R(s, a, s')+\gamma \color{red}{\max} _{a^{\prime}} Q^{*}(s', a^{\prime})\right) \label{1}\tag{1}, $$ for all $s \in \mathcal{S}$.

So, it's a function of the form $Q^{*}: \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$. To emphasize the main purpose of this function, it can be denoted/defined as $Q^{*}(s, a) = \mathbb{E}\left[G_t \mid a, s \right]$, so it's the expected return ($G_t$ is the return or cumulative reward) for taking action $a$ in state $s$ under the optimal policy $\pi^*$ (hence the $*$).

The equation (\ref{1}) is a recursive equation because $Q^{*}(s, a)$ is defined in terms of itself, i.e. we use $Q^{*}(s', a^{\prime})$ to define it. In this context, this type of recursive equation is also known as a Bellman equation (due to Richard Bellman, who contributed to the theory of dynamic programming, which is related to MDPs, as dynamic programming algorithms can be used to solve MDPs, given a transition model).

One thing to keep in mind is that the optimal policy can be derived from the optimal state-action value function by acting greedily with respect to it.

What is a Generalized MDP?

Now, there's one reason why I colored the transition model and the max operation in the Bellman equation \ref{1} in $\color{blue}{\text{blue}}$ and $\color{red}{\text{red}}$, respectively, and this is related to generalized MDPs.

The Bellman equation in \ref{1} is defined with respect to

  • the transition model $\color{blue}{P}$, which describes the dynamics of the environment,

  • $\color{red}{\max}$, which is related to the assumption that the optimal agent acts greedily with respect to the optimal value function (keep in mind that the max operation is non-expansive)

A generalised MDP [Szepesvári and Littman(1996)] is a generalisation of multiple versions/definitions of MDPs that circulate in the literature, including the definition above. More specifically, two concepts are generalised: the transition model and the max operator.

Mathematically, we can define a generalised MDP as a tuple $\langle \mathcal{S}, \mathcal{A}, R, \color{blue}{\oplus}, \color{red}{\otimes}, \gamma \rangle$, where $\mathcal{S}, \mathcal{A}$ and $R$ are defined as above and

  • $\color{blue}{\oplus}: (\mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathbb{R}) \rightarrow(\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R})$ is an "expected value-type" operator that generalises $\color{blue}{P}$; the intuition behind this operator is the same as the intuition of $\color{blue}{P}$ that I described above

  • $\color{red}{\otimes}:(\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}) \rightarrow(\mathcal{S} \rightarrow \mathbb{R})$ is a "maximization-type" operator that generalises $\color{red}{\max}$; the intuition of $\color{red}{\otimes}$ is also the same as the intuition of $\color{red}{\max}$ (i.e. it represents how the optimal agent would behave).

So, these are operators (like the gradient operator or the Bellman operator) because they take as input a function and produce another function.

Now, if we set

  • $(\color{blue}{\oplus} U)(s, a)=\sum_{s'} \color{blue}{P}(s, a, s') U(s, a, s')$ (so, in this case, it takes as input the function $U$; note that, in this case, $\color{blue}{\oplus}$ is an expected value operator, with respect to the probability distribution $\color{blue}{P}(s, a, s')$)

  • $(\color{red}{\otimes} Q)(s)=\color{red}{\max}_{a} Q(s, a)$ (note that this just uses the $\color{red}{\max}$)

Then we get the usual expected-reward MDP model.
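As a small hedged sketch of this standard instantiation, here is a value-iteration loop on $Q$ in Python, where the transition model and reward function are random placeholders:

import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))   # P[s, a, s']
R = np.random.rand(n_states, n_actions, n_states)                        # R(s, a, s')

Q = np.zeros((n_states, n_actions))
for _ in range(100):
    V = Q.max(axis=1)                          # the "maximization-type" operator (max over actions)
    target = R + gamma * V[None, None, :]      # R(s, a, s') + gamma * V(s')
    Q = np.einsum('sap,sap->sa', P, target)    # the "expected value-type" operator (expectation over s')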

What are the advantages of Generalised MDPs?

In the literature, there are different versions of MDPs, such as the usual expected-reward MDPs described above, Markov games, and risk-sensitive MDPs.

Generalised MDPs generalise all these MDPs for different values of $\color{blue}{\oplus}$ and $\color{red}{\otimes}$.

The paper [Szepesvári and Littman(1996)] (that introduced GMDPs) provides more info about how we should set $\color{red}{\otimes}$ and $\color{blue}{\oplus}$ to get these different MDPs (see table 1).

Bellman equations in Generalised MDPs

The Bellman equation for the state value function can be expressed as follows in the context of GMDP

$$ V^{*}=\color{red}{\otimes} \color{blue}{\oplus} \left(R + \gamma V^{*}\right) $$

For the state-action value function, you can have the following Bellman equation

$$ K Q=\color{blue}{\oplus} (R+\gamma \color{red}{\otimes} Q) $$

or the one that relates $Q^{*}$ to $V^{*}$

$$ Q^{*}=\color{blue}{\oplus} \left(R+\gamma V^{*}\right) $$

",2444,,2444,,12/20/2021 14:07,12/20/2021 14:07,,,,0,,,,CC BY-SA 4.0 33844,2,,33805,12/20/2021 14:32,,0,,"

Yes, recursive neural networks (recursive NN) are related to recurrent neural networks (RNNs), because they generalize the latter (at least, structurally), as stated in section 10.6 of the Deep Learning book, similar to the way that a tree generalizes a list/array.

So, in a list/array, element $e_1$ is connected to element $e_2$, which is connected to element $e_3$, and so on. In a tree, element $e_1$ can be connected to more than one other element (node), and the same applies to the other elements, provided there are no cycles, otherwise, you get a graph, which is a generalization of a tree.

If these explanations weren't yet clear, the best way to understand the difference between the two is to look at the diagrams of an RNN and a recursive NN.

Here's an RNN.

The caption should describe the diagram (in case it is not clear). The important thing to note is that, on the left, we have an RNN, which looks like a list.

Here's a recursive NN.

which looks like a tree.

There are many variations of RNNs (e.g. Bidirectional RNNs, multi-layered RNNs, LSTMs, or GRUs) and recursive NNs (e.g. that attempt to have balanced trees, similar to the way that a Red-Black Tree is a balanced binary search tree), so we cannot give all the details about the differences between these two approaches.

However, a few things should be kept in mind

  • recursive NNs might be used in the context of parse trees
  • recursive NNs have been used to process data structures as input to neural nets
  • nodes in recursive NNs don't necessarily perform a linear operation followed by a non-linearity
",2444,,2444,,12/20/2021 14:39,12/20/2021 14:39,,,,0,,,,CC BY-SA 4.0 33845,1,,,12/20/2021 14:40,,0,93,"

I've explored tools like amazon personalize, etc. for generating recommendations. It seems like amazon personalize is appropriate when all the content is with the company/a single entity. For example, in Netflix, all the content (the catalogue of movies, tv shows, etc.) is with them and they generate personalized movie/tv show recommendations.

But what if there's a platform similar to Youtube, TikTok, where users can:

  • post content (users are continuously generating content)
  • view other users content and interact (like, share, repost, comment)
  • follow other users

When there is user-generated content like this and users follow other users (meaning they probably want recommendations from users they follow), how do we give recommendations? What algorithms and tools can be used?

Lots of content - handling the cold start problem

And when there is user-generated content, there is going to be lots of content being generated every minute. So, how do we handle the cold start problem (i.e. how do we decide to whom to recommend all of this new influx of content)? Usually, we might experiment with the new content, e.g. recommend it to some users, see how they respond, and decide accordingly how to recommend this content. But when there is a very high frequency of content being created, how do we reduce the amount of time it takes to give recommendations/push the new content to users quickly?

And does anybody know if the questions mentioned above can be addressed using Amazon Personalize itself (to some extent maybe)?

",51680,,2444,,12/22/2021 8:58,12/31/2022 16:05,How do we give recommendations when users create/post content (like in YouTube)?,,1,0,,,,CC BY-SA 4.0 33847,2,,22331,12/20/2021 18:46,,1,,"

That notation should mean to go from time step $T$ to time step $1$ by a negative step $-1$, i.e. backward, so $T$, then $T-1$, then $T-2$, and so on until $1$. If you know Python, this should be familiar. However, note that this is just a guess, because I am not familiar with this algorithm.
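For example, the corresponding Python loop over time steps would be something like:

T = 5
for t in range(T, 0, -1):   # T, T-1, ..., 1
    print(t)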

",2444,,,,,12/20/2021 18:46,,,,0,,,,CC BY-SA 4.0 33848,1,33883,,12/20/2021 22:03,,-1,58,"

I have a dataset where I am recording the temperature every 4 milliseconds, up to 500, along with another feature, the "conductivity value". The length of the dataset is around 1000 rows. I need to find the conductivity value based on the temperature pattern.

t1 t2 t3 .... t5 conductivity
90 91 93 .... 96 0.34
92 91 93 .... 95 0.36

I am a bit confused about how to use the dataset in a time-series model such as an LSTM, because I have the whole time sequence in columns and I don't know the conductivity values in between, e.g. at t2, t3, t4.

I think the dataset becomes a classification problem with the current format.

Can you guys help me out?

",51691,,51691,,12/23/2021 18:48,12/23/2021 21:19,Is my dataset a time series dataset? and should I use an LSTM?,,1,1,,,,CC BY-SA 4.0 33852,1,33855,,12/21/2021 5:54,,0,413,"

Below are the two tensors

[ 73.,  67.,  43.]    
[ 91.,  88.,  64.],
[ 87., 134.,  58.],
[102.,  43.,  37.],
[ 69.,  96.,  70.]

[ 56.,  70.],
[ 81., 101.],
[119., 133.],
[ 22.,  37.],
[103., 119.]

These are the weights and biases that are added:

Weights and biases

 w = torch.randn(2, 3, requires_grad=True)
 b = torch.randn(2, requires_grad=True)

I am not able to understand how the sizes of the tensors for the weights and biases are decided. Is there a common rule that we should follow when adding weights and biases to our model?

",41619,,,,,12/21/2021 9:42,Not able to understand Pytorch Tensor (Weight & Biases) Size for Linear Regression,,1,1,,,,CC BY-SA 4.0 33853,2,,6401,12/21/2021 6:33,,0,,"

The first thing to think about is how you might identify a likely swapper without a computer, and what characteristics might identify that swapper, e.g. what role they have, how many times they swapped in the last year, 6 months, 6 weeks etc, how many times they've taken each shift, how long they've been around, do they have a family, etc. In this initial step you don't need to figure out HOW those characteristics identify the swapper, just that they might help to identify them.

Once those "features" have been identified, you are ready to build a model of how you might identify a likely swapper. You would take a few examples of instances where two individuals swapped shifts, and map out the characteristics of each individual. This will identify a number of instances of your positive class. Flesh our your dataset with all other combinations of individuals and shifts which are, of course, instances where no switch occurred. These are your negative class.

Before doing any modeling, check to see if any of your features are highly correlated with your target variable. If there are a couple of features that are highly correlated, you may not even need to use machine learning. Remember, machine learning is just using an algorithm to learn a pattern. If you can learn the pattern without an algorithm, you're done.

If you're still looking at several features that may be involved in a complex model to identify who might swap shifts, then you can start trying various model algorithms. To start with you could try logistic regression, random forest, naïve bayes, and SVM. Split your data and try and overfit on your training set. If you manage to overfit, you've got a chance. Start scaling back the features and try and find something that generalizes.

",51695,,,,,12/21/2021 6:33,,,,0,,,,CC BY-SA 4.0 33854,2,,33840,12/21/2021 9:02,,1,,"

There are known architectures that implement this idea, namely, Siamese networks, and also a training strategy, known as contrastive learning, that relies on the idea of comparing the outputs of neural networks. I will explain both of them briefly.

The idea of Siamese networks is exactly what you mentioned. You have a single model, $m$, that receives two inputs, $x_{1}$ and $x_{2}$. Upon the application of $m$, one has $h_{i} = m(x_{i})$.

Now, Siamese networks have been employed in at least two situations that I know of. The first one is known as Contrastive Learning, which seeks to learn a classifier from pairs of examples. A pair $(x_{i}, x_{j})$ is said to be a positive pair if the two examples belong to the same class; otherwise, it is called a negative pair. The idea is to encode both examples with $m$ and then compare the learned representations through a similarity measure $sim(h_{i}, h_{j})$. As an example of a similarity measure, take the inner product,

$$sim(h_{i}, h_{j}) = \sum_{k=1}^{p}h_{ik}h_{jk}$$

Using this, the contrastive loss reads as,

$$\ell(h_{i}, h_{j}) = -\log\dfrac{\exp(sim(h_{i}, h_{j}) / \tau)}{\sum_{n=1}^{N}\exp(sim(h_{i}, h_{n}) / \tau)}$$

where $\tau > 0$ is known as the temperature, in allusion to Statistical Mechanics. The principle behind this type of learning strategy is that the model $m$ will learn to encode inputs in such a way that positive pairs (samples from the same class) are mapped close together in the latent space, whereas negative pairs will be far apart. For reference, you can consult [He et al., 2020].
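Below is a minimal sketch of this contrastive setup in PyTorch (the encoder architecture, the batch, and the temperature are hypothetical placeholders; the positive counterpart of x1[i] is assumed to be x2[i]):

import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 8))

x1 = torch.randn(4, 32)                 # a batch of inputs
x2 = torch.randn(4, 32)                 # their positive counterparts (e.g. augmented views)
h1, h2 = encoder(x1), encoder(x2)       # the SAME model m encodes both inputs

tau = 0.1
sim = h1 @ h2.T / tau                   # pairwise inner products sim(h_i, h_j) / tau
labels = torch.arange(4)                # the positive pair for h1[i] is h2[i]
loss = F.cross_entropy(sim, labels)     # -log softmax of the positive pair, as in the loss above
loss.backward()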

The second application that I know of is metric learning. In this case, suppose that you have a metric $d$ that is useful, but very costly to calculate over the input space (e.g. $d(x_{1}, x_{2})$ takes a lot of time to compute). This is the case, for instance, with the Wasserstein distance. The idea is to precompute a bunch of values $d_{ij} = d(x_{i}, x_{j})$ and to train a Siamese network so that $\lVert h_{i} - h_{j} \rVert \approx d_{ij}$; that way, we may approximate $d_{ij}$ by computing the Euclidean distance over the latent space. This is roughly the idea behind the work of [Courty et al., 2017].

References

[He et al., 2020] He, Kaiming, et al. "Momentum contrast for unsupervised visual representation learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.

[Courty et al., 2017] Courty, Nicolas, Rémi Flamary, and Mélanie Ducoffe. "Learning wasserstein embeddings." arXiv preprint arXiv:1710.07457 (2017).

",48703,,,,,12/21/2021 9:02,,,,0,,,,CC BY-SA 4.0 33855,2,,33852,12/21/2021 9:42,,2,,"

The size of the parameter tensors depends on the type of layer that you want to build. Convolutional, fully connected, attention, or even custom layers: each layer differs in the way it treats its input, and reading the documentation is a good way to start (Stanford's CS231n describes each layer's properties in detail).

In your case, the layer is the fully-connected layer (also called a dense or linear layer in other documents), which converts an m-dimensional input vector to an n-dimensional output vector. All m input nodes are connected to all n output nodes through an $m\times n$ matrix (that's why it's called fully-connected), and the bias vector, in short, helps make the learned function more flexible.

Therefore, the rule for deciding the sizes of the weights and biases is given by the sizes of the input and output vectors. Below is a simple example that builds a Linear layer from scratch:

import torch

class Example_Linear(torch.nn.Module):
    '''
    An example of a Linear layer.
    Args:
        :param in_d: number of input dimensions
        :param out_d: number of output dimensions
        :param bias: whether to use a bias with the linear layer
    '''
    def __init__(self, in_d, out_d, bias=True):
        super().__init__()
        # weight has shape (out_d, in_d), so x @ weight.T maps in_d -> out_d
        self.weight = torch.nn.Parameter(torch.randn(out_d, in_d))
        self.bias = None
        if bias:
            self.bias = torch.nn.Parameter(torch.zeros(out_d))

    def forward(self, x):
        out = torch.matmul(x, self.weight.T)
        if self.bias is not None:   # only add the bias if it exists
            out = out + self.bias
        return out

in_dimension = 10
out_dimension = 1
model = Example_Linear(in_d=in_dimension, out_d=out_dimension, bias=True)

Or you can simply implement the fully-connected layer in one line:

model = torch.nn.Linear(in_dimension, out_dimension, bias=True)
",41287,,,,,12/21/2021 9:42,,,,2,,,,CC BY-SA 4.0 33856,1,33860,,12/21/2021 10:02,,2,2598,"

I understand that FLOPS means floating-point operations per second, and throughput is the number of inputs (for example, images) per second. If a model has higher FLOPS, it means it performs faster.

However, in the article Container: Context Aggregation Network, they show that:

The container has higher FLOPS and less throughput, while the container-light has lower FLOPS and higher throughput.

What is the reason for that?

",25141,,2444,,12/21/2021 10:43,12/21/2021 20:33,Does higher FLOPS mean higher throughput?,,1,0,,,,CC BY-SA 4.0 33858,1,,,12/21/2021 13:20,,1,26,"

I'm trying to implement a solution in python to detect skin in an image.

I'm evaluating the Mask R-CNN model to create a mask on the skin (not on clothes). The problem is that every solution I have encountered using Mask R-CNN uses it to classify objects. I'm afraid that using it to try to classify a texture might be a problem. Is that the case?

My dataset is actually pretty good, composed of the

  1. original image,
  2. precise mask on the skin, and
  3. bounding box.

Can I use a Mask R-CNN to detect a skin texture?

",51699,,2444,,12/22/2021 10:41,12/22/2021 10:41,Can I use a Mask R-CNN to detect a skin texture?,,0,0,,,,CC BY-SA 4.0 33859,1,,,12/21/2021 19:32,,0,241,"

The Mereology Theory below contains three first-order axioms that represent a part of a mereology theory. For this posting, it is important that the set of axioms should be considered as a theory.

Mereology Theory

Reflexivity $\forall x : part(x,x)$

Antisymmetry $\forall x \forall y : ((part(x,y) \land part(y,x)) \implies (x = y))$

Transitivity $\forall x \forall y \forall z :((part(x,y) \land part(y,z)) \implies part(x,z))$

Here is my naïve attempt to present the theory as a set of conceptual graphs (CG).

My understanding of the above CGs is as follows:

  1. The variables are universally quantified, not default for CGs, but allowed in extended CG (ECG).
  2. The inner graphs are all related by conjunction, which is default for GCs and I assume for ECGs.
  3. The arrow on graph representing reflexivity is bi-directional.
  4. Both antisymmetry and transitivity are represented by an IF-THEN contexts.
  5. Dotted lines are co-references.
  6. Equality (=) is actually commutative, but is represented as a directed relation.
  7. Each inner graph asserts a single proposition, labelled Proposition.
  8. The outer graph is labeled MereologyTheory, I am not sure that this is correct ECG syntax.

Below is a possible model of the above theory:

Mereology Model (mathematical notation)

$Entities = \{ a,b \}$

$Relations = \{part(a,a),part(a,b),part(b,b)\}$

Obviously, there are many other possible models. MereologyModel below is my attempt to visualize this model as a CG. I am not sure that putting the label MereologyModel is correct CG syntax or that it denotes it as a model of MereologyTheory.

Question:

In CG visual notation, how are theories and models related? It seems to me that basic CGs can represent FOL sentences and the relation between such sentences. According to Chein and Mugnier the subsumption relation is defined by graph homomorphisms between CGs. Is the model/theory relation for CGs also defined in terms of graph homomorphisms? I am aware that in general a model satisfies a theory i.e. $M \vDash T$. Does the graph homomorphism provide the necessary syntactic mapping to enable model and theory to be related?

Note CGs can be represented in Common Logic (ISO zipped PDF), which would permit a formal proof that all the axioms of the theory are satisfied in the model.

",48716,,48716,,1/5/2022 17:28,10/2/2022 18:02,How do I show the relationship between theories and models using Conceptual Graphs?,,1,0,,,,CC BY-SA 4.0 33860,2,,33856,12/21/2021 20:33,,3,,"

In the context of Deeplearning:

  • FLOPS: Floating Point Ops per Second
  • FLOPs: Floating Point Ops

FLOPS refers to the number of floating-point operations that can be performed by a computing entity in one second. It is used to quantify the performance of hardware.

FLOPs simply means the total number of floating-point operations required for a single forward pass. The higher the FLOPs, the slower the model, and hence the lower the throughput.

This thread on stack overflow might help to get a deeper insight: https://stackoverflow.com/questions/58498651/what-is-flops-in-field-of-deep-learning

",21229,,,,,12/21/2021 20:33,,,,0,,,,CC BY-SA 4.0 33862,1,,,12/21/2021 22:25,,1,79,"

If an RNN is trained using only teacher forcing, then the network takes the actual output from the previous time step as input to the hidden state at the next time step.

We know that the actual outputs cannot be given to the model during testing, so what information passes from one time step to the next in the test phase?

",18758,,2444,,12/22/2021 14:36,12/22/2021 17:01,How to do testing for an RNN that was trained with teacher forcing only?,,1,0,,,,CC BY-SA 4.0 33863,1,,,12/21/2021 22:49,,1,57,"

I have some data with ground truth that looks like a binary step function, where part of it is 0 and part is 1. An example of the GT can be: 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0

or something like this

I have a hard time coming up with a loss function that can optimize this problem. The simplest option would be something like cross-entropy or binary cross-entropy, but I am wondering if there is any other loss that I could try.

Something that can take into account the property that, when the GT is one (1), it is a continuous run of 1s, and when it is zero, it is a continuous run of 0s.

To give a little more information: for example, I will never have a GT like 0 0 1 0 1 0 1, and I will also never have a GT like 0 0 0 0 1 1 1 0 0 0 0 1 1 1 0 0 0. In other words, I will have a single continuous run of 1s (it can be at the start, middle, or end), but there won't be two disconnected runs of 1s. Is there any loss function that takes these properties into account?

",49699,,49699,,12/22/2021 14:59,12/22/2021 14:59,a loss for binary step function data,,0,0,,,,CC BY-SA 4.0 33867,2,,28288,12/22/2021 4:44,,3,,"

The $L_{CE}$ that you provided is the binary cross-entropy; the factors $y$ and $(1-y)$ appear because $y$ is binary ($\{0,1\}$), so be careful with the name next time. The (general) cross-entropy loss has the form:

$$L_{CE}=-\displaystyle\sum_{i=1}^C y_i\log(\hat{y_i})$$

where $C$ is the number of classes. Normally, the factor $y_i$ is 1 only when $i$ is the index of the correct class (and 0 otherwise). Therefore, for each sample, the function is just:

$$f(x)=-log(x)$$

Now, to prove this is convex, there are multiple ways, but my favorite one is computing the first and second derivatives.

$$\frac{\partial L}{\partial x}=-\frac{1}{x}\Rightarrow \frac{\partial^2 L}{\partial x^2}=\frac{1}{x^2}>0 \text{ for all }x\in(0,1]$$

This proves that $f(x) = -\log(x)$ is convex, given that, if the second derivative of a function is positive, then the function is convex (more info and an example here).

For the case of multiple samples, we also need to prove that the sum of convex functions is a convex function.

Based on the definition of convex function, a function $f:X\rightarrow \mathbb{R}$ will be convex if: $$f(tx_1+(1-t)x_2) \le tf(x_1) + (1-t)f(x_2)$$ where $0<t<1$ and $x_1,x_2\in X$.

Now, let's assume $h(x) = f(x) + g(x)$ where both $f$ and $g$ are convex. We have: $$f(tx_1+(1-t)x_2) \le tf(x_1) + (1-t)f(x_2)$$ $$g(tx_1+(1-t)x_2) \le tg(x_1) + (1-t)g(x_2)$$ $$\Rightarrow f(tx_1+(1-t)x_2) + g(tx_1+(1-t)x_2) \le tf(x_1) + (1-t)f(x_2) + tg(x_1) + (1-t)g(x_2)$$ $$\Rightarrow f(tx_1+(1-t)x_2) + g(tx_1+(1-t)x_2) \le t(f(x_1)+g(x_1)) + (1-t)(f(x_2)+g(x_2))$$ $$\Rightarrow h(tx_1+(1-t)x_2) \le th(x_1) + (1-t)h(x_2)$$ $\Rightarrow h$ is a convex function $\Rightarrow$ the sum of convex functions is a convex function.
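As a quick numeric sanity check (not a proof) of the convexity of $f(x) = -\log(x)$ on $(0, 1]$, one can verify the defining inequality on randomly sampled points in Python:

import numpy as np

f = lambda x: -np.log(x)
rng = np.random.default_rng(0)
x1 = rng.uniform(1e-3, 1.0, 1000)
x2 = rng.uniform(1e-3, 1.0, 1000)
t = rng.uniform(0.0, 1.0, 1000)
# convexity: f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2), up to floating-point tolerance
assert np.all(f(t * x1 + (1 - t) * x2) <= t * f(x1) + (1 - t) * f(x2) + 1e-12)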

",41287,,2444,,12/23/2021 11:36,12/23/2021 11:36,,,,0,,,,CC BY-SA 4.0 33870,1,33903,,12/22/2021 14:50,,1,54,"

Say I have a game of blackjack, and I am trying to teach a single forward-pass neural network to approximate the Q value of the current state and action.

There are 3 inputs: the current card in hand, the cards in the deck, and the cards in the pile. It outputs the Q-values of two actions, namely, holding the current card or adding it to the pile.

My loss function is $$L(Q,Q_E)= \sum({Q(s,a_i)- Q_E(s,a_i) )^2},$$

where $Q_E$ is the estimated Q-value of the current state from the policy network. And $Q$ is the target function, which is calculated using the Bellman equation.

As I understand it, the Bellman equation assumes the setting to be deterministic, meaning that, if you're in state $s_t$ and take action $a_1$, you should always reach the same $s_{t+1}$.

Of course, in blackjack, this is not the case, as the state $s_{t+1}$ is purely dependent on the card you draw, which is a stochastic process.

Would it be possible to omit some of this noise or "stochasticity" by enforcing the same $s_{t + 1}$ between the model's estimate and the target function's Q-value?

In other words, say we're in state $s_t$, and the target function picks action $a_1$ and draws a 10 reaching state $s_{10}$ as the next state. For the training of the policy network, it loads the state $s_t$ from the experience replay, it also picks action $a_1$ and draws a 7 reaching state $s_7$ as the next state.

Would it somehow ruin the training if I then just hardcoded it such that the next state reverted to state $s_{10}$ whenever the policy network picked action $a_1$? Are there any counter-productive consequences to this?

",48018,,2444,,12/24/2021 0:13,12/25/2021 13:05,Would it be possible to enforce the same $s_{t + 1}$ between the model's estimate and the target function's Q-value?,,1,0,,,,CC BY-SA 4.0 33871,2,,33862,12/22/2021 16:56,,1,,"

I don't think there's any difference between making predictions when you use or not teacher forcing during training. So, let me describe one way of doing that.

During testing, as you noticed, you don't know the ground-truth labels, so the only way of predicting a sequence is to feed into the model the predictions for the previous time-steps.

So, let's say that your model is denoted by $f$, which attempts to approximate the language model $p(x_t \mid x_{1:t-1})$, a conditional probability distribution over characters.

To perform inference, you first need to provide the initial character of the sequence, usually, a special character, like $\langle \text{start} \rangle$, so your model predicts the first real character of the sequence as follows $$f([\langle \text{start} \rangle]) = \hat{x}_1.$$ Afterward, you feed to it $\hat{x}_1$, in order to produce the next character, i.e. $$f([\langle \text{start}\rangle, \hat{x}_1]) = \hat{x}_2,$$ and so on, until the model $f$ produces another special character that denotes the end of the sentence $\langle \text{end}\rangle$.

Now, in practice, rather than feeding all the previously predicted characters, you may just feed the character $\hat{x}_{t-1}$ to predict $\hat{x}_t$ and, at the same time, pass the last hidden state of the RNN, so something like $$f (\hat{x}_{t-1}, h_{t-1}) = \hat{x}_t.$$

The following diagram illustrates the concept.

Here the first char is $T$ and the model predicts that the next char is $e$. Then we pass $e$ and the previous state of the model to the model to predict $n$, and so on.
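
In pseudocode-like Python, this inference loop looks roughly like the following minimal sketch; model_step, START, END and max_len are hypothetical names, and the actual API depends on your framework.

def generate(model_step, START, END, max_len=100):
    # model_step(prev_char, prev_state) -> (next_char, next_state)
    chars, state = [], None
    prev = START
    for _ in range(max_len):
        prev, state = model_step(prev, state)  # feed the prediction back in
        if prev == END:
            break
        chars.append(prev)
    return "".join(chars)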

Here you have a TensorFlow example that generates text with an RNN, uses teacher forcing, and then performs inference in the way I described (of course, there are other details, like using a temperature).

",2444,,2444,,12/22/2021 17:01,12/22/2021 17:01,,,,3,,,,CC BY-SA 4.0 33872,2,,32393,12/22/2021 17:06,,1,,"

This is formally known as a semi-gradient method.

What we would like to do is to minimize $\big(v(S) - \hat v(S, w)\big)^2$, where $v(S)$ is the true value function. This would give the gradient descent update \begin{align*} w \leftarrow w + \alpha[v(S) - \hat v(S, w)]\nabla \hat v(S, w) . \tag{1} \end{align*} Of course, we don't have access to $v(S)$. So instead, we could use the Monte Carlo return (the observed, discounted, episodic return). The other option is to use a bootstrapped estimate of $v(S)$, e.g. the estimate $r + \gamma \hat v(S', w)$, which would give the update

\begin{align*} w \leftarrow w + \alpha[r + \gamma \hat v(S', w) - \hat v(S, w)]\nabla \hat v(S, w) \tag{2} \end{align*}

As you correctly point out, Eq. 2 is no longer a true gradient descent method. To directly quote Sutton and Barto, section 9.3 page 165,

This step [Eq. 1] would not be valid if a bootstrapping estimate were used in place of v(S). Bootstrapping methods are not in fact instances of true gradient descent (Barnard, 1993). They take into account the effect of changing the weight vector w on the estimate, but ignore its effect on the target. They include only a part of the gradient and, accordingly, we call them semi-gradient methods.

The 'estimate' here is referring to $\hat v(S, w)$, whereas the 'target' is $v(S)$ (or its approximation). Indeed, the parameter update you give would be the true gradient descent update for minimizing $\big(r + \gamma \hat v(S', w) - \hat v(S, w)\big)^2$.
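
To make the difference concrete, here is a minimal NumPy sketch of the semi-gradient TD(0) update (Eq. 2) with linear function approximation $\hat v(s, w) = w^\top x(s)$; this is just an illustration I am adding, and the feature vectors and transition tuple below are placeholders.

import numpy as np

def semi_gradient_td0_update(w, x_s, r, x_s_next, alpha=0.1, gamma=0.99, done=False):
    # One semi-gradient TD(0) step for the linear approximation v_hat(s, w) = w . x(s).
    v_s = w @ x_s
    v_s_next = 0.0 if done else w @ x_s_next
    td_error = r + gamma * v_s_next - v_s
    # Only the gradient of the estimate v_hat(S, w) is used (hence "semi-gradient");
    # the bootstrapped target r + gamma * v_hat(S', w) is treated as a constant.
    return w + alpha * td_error * x_s

# hypothetical usage with 4-dimensional features
w = np.zeros(4)
w = semi_gradient_td0_update(w, x_s=np.array([1., 0., 0., 0.]), r=1.0,
                             x_s_next=np.array([0., 1., 0., 0.]))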

",47080,,,,,12/22/2021 17:06,,,,0,,,,CC BY-SA 4.0 33873,1,,,12/22/2021 17:36,,1,71,"

The ImageNet dataset is an established benchmark for measuring the performance of CV models.

ImageNet involves 1000 categories and the goal of the classification model is to output the correct label given the image.

Researchers compete with each other to improve the current SOTA on this dataset, and the current state of the art is 90.88% top-1 accuracy.

If the images involved only a single object and background, this problem would be well-posed (at least from our perceptual point of view). However, many images in the dataset involve multiple objects - a group of people, a person and an animal - so the task of classification becomes ambiguous.

Here are some examples.

The true class for this image is bicycle. However, there is a group of people. The model that recognizes these people would be right from the human point of view, but the label would be wrong.

Another example is the fisherman holding a fish called a tench. The model could have recognized the person, but it would be marked as wrong.

So, my question is: how much does the performance of the best models on ImageNet reflect their ability to capture the complicated and diverse image distribution, and how much of the final result on the validation set is accidental? When there are multiple objects present in the image, the network can predict any of them. The prediction can match the ground-truth class or can differ. And, for the model that happens to be luckier, this benchmark will show better performance. The actual quality can be the same, in fact.

",38846,,38846,,12/24/2021 4:56,1/19/2023 10:03,Validity of ImageNet for measurement of the model performance,,1,10,,,,CC BY-SA 4.0 33874,1,,,12/23/2021 1:15,,2,28,"

I am currently building a deep learning model for a regression problem. I use 50 inputs and am trying to add one new categorical input. The problem is that this one input is much more important than the other inputs. I want to make it more influential than the others, and all I can think of now are the following three options.

  1. Just add it at the first layer, like the other inputs.
  2. Add the new categorical input to each layer (the model currently has 5 layers).
  3. Feed it to an embedding layer first, increase its dimension, and concatenate it with the other inputs.

Do these seem fine and are there any other ways to give more power to one input?

",51728,,32410,,12/24/2021 4:50,12/24/2021 4:50,Is there any way to force one input have more effect on model?,,0,1,,,,CC BY-SA 4.0 33875,1,,,12/23/2021 1:15,,1,18,"

Suppose there are some objects with features, and the target is parametric density estimation. Density estimation is model-based. Parameters are obtained by maximizing log-likelihood.

$$LL = \sum_{i \in I_1} \log \left( \sum_{j \in K_i} \theta_j \right) + \sum_{i \in I_2} \log \left(1 - \sum_{j \in L_i} \theta_j\right)$$

Assume that the parameters $\theta_j$ are probabilities, i.e. $0 < \theta_j < 1$, and that $\sum_{j\in L_i} \theta_j < 1$. From a practical perspective, it seems natural to make the parameters $\theta_j$ themselves functions of features, i.e. $\theta_j = F(x_j^1, \ldots, x_j^m)$.

Is there any known standard method or heuristic to optimize such an objective with a decision tree, i.e. if we assume that our function $F$ is a decision tree?

Any related results are welcome.

",51730,,40434,,12/24/2021 18:59,12/24/2021 18:59,Optimize parametric Log-Likelihood with a Decision Tree,,0,0,,,,CC BY-SA 4.0 33876,2,,36,12/23/2021 6:47,,1,,"

I would say that the answer to your question is yes. I could write a short book on the subject (that might be a good idea, actually), but I will keep this response brief, although I am happy to answer, in the comments, any further questions you may have to which I possess the answers!

The primary reasons that I believe the newly blossoming field of Quantum Information will have a massive impact on the field of Machine Learning in general, are as follows:

The simplest reason for my belief is that the primary goal of Machine Learning is to create an entity that is capable of coherent, self-aware thought, much like we exhibit as human beings. We know that the brain is what allows us to be capable of such feats, and thus I view the field as something like brain counterfeiting. Without going into esoteric detail, there are many subtleties of the brain's workings that are thought to be quantum mechanical in operation, which suggests that the path of least resistance to replicating the system would require a quantum mechanical computational medium.

The second primary rationale which solidifies my position is the efficiency gained when mapping linear operations into a qubit-based formalism. This is primarily due to the quantum phenomenon referred to as superposition, which allows a multiple-qubit gate to not only work with the options 00, 01, 10, and 11 (assuming a two-qubit gate), but to also work with any combination in-between during the computation.

This illustrates the concept of a Bloch Sphere representation in a more concrete manner.

Although, when the result is obtained (this is what is referred to as collapsing the wave function), you will still only have a resulting state space with 2^n possibilities, where n is the number of qubits. This being said, there are very clever ways by which one can design their algorithms to make full use of this technically infinite computational space before observing the final results.

Conclusion:

I hope that my answer is helpful to you in some way. Although I am aware that it is not a very in-depth answer, I feel that it hits the primary reason for my personal belief that a wall will be hit in the pursuit of a general AI while we are limited to classical computation faculties, and that quantum-based computation will thus be required before we are able to truly mimic the brain's most well-kept secrets! The next couple of decades should be VERY interesting in these fields, so keep a close eye on the latest happenings!

",30631,,32410,,12/24/2021 17:45,12/24/2021 17:45,,,,2,,,,CC BY-SA 4.0 33877,2,,28220,12/23/2021 12:52,,1,,"

I don't know if $N(X)$ has a name or has any applicability in AI, but I can comment on how this function varies with $H(X)$, based on your equation

$$N(X) = \dfrac{1}{2^{H(X)}}$$

which looks correct to me (just apply the $\log_2$ to both sides).

In the case of a Bernoulli random variable (which is a categorical r.v. that can take 2 values, $0$ or $1$, i.e. a special case of your categorical r.v. with $k=2$), there is a simple relationship between the probability that this random variable is $1$ and the entropy of this r.v.

So, the entropy is $1$ when the probability is $0.5$ and decreases as the probability tends to $0$ or $1$, which makes sense, because the entropy quantifies the uncertainty about the r.v. Here, the entropy is computed in bits because we use the logarithm $\log_2$.

So, for $k=2$ (in your example), if $H(X) = 1$, then

$$N(X) = \dfrac{1}{2^{H(X)}} = \frac{1}{2} = \frac{1}{2}^{\frac{1}{2}} * \frac{1}{2}^{\frac{1}{2}}$$

which is equal to $P(X = 1) = 1 - P(X = 0)$.
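
As a quick numerical illustration of this relationship for the Bernoulli case (just a sketch I am adding for concreteness):

import math

def entropy_bits(p):
    # Entropy (in bits) of a Bernoulli(p) variable, with the 0 * log 0 := 0 convention.
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

for p in (0.5, 0.9, 0.99):
    H = entropy_bits(p)
    N = 1 / 2 ** H
    print(f"p={p}: H(X)={H:.3f} bits, N(X)=2^(-H)={N:.3f}")
# For p = 0.5, H = 1 and N(X) = 0.5; as p tends to 1, H tends to 0 and N(X) tends to 1.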

Now, as $H(X)$ decreases to $0$, $\dfrac{1}{2^{H(X)}}$ increases, because the denominator $2^{H(X)}$ becomes smaller; the extreme case is $H(X) = 0$, for which

$$N(X) = \dfrac{1}{2^{H(X)}} = 1 = 1^1 * ? $$

This is already problematic because $0^0$ is not well-defined. So, I already see a problem with $N(X)$.

For $k > 2$, I think the same reasoning applies, but now the maximum value of $H(X)$ should be $\log_2 k$.

So, I don't think that $N(X)$ is of any practical value as it can lead to expressions like $0^0$.

However, this type of problem also arises in the original formulation of the entropy, because $\log_2 0$ is not defined. The convention for when $P(X) = 0$ is to set $P(X) \log P(X)$ to zero [1].

Anyway, it seems to me that one way to interpret $N(X)$ is as the (average?) probability that $X$ takes one of the values, and it could be that the information content is what you are looking for. The definition can be found in [1].

",2444,,2444,,12/23/2021 13:03,12/23/2021 13:03,,,,0,,,,CC BY-SA 4.0 33878,1,,,12/23/2021 12:52,,1,146,"

This is regarding the details stated in Wikipedia.

I am reading about optical flow in Computer Vision. I understood the Horn–Schunck method as such, but did not get how it is related to the aperture problem, and how the latter is solved using the Horn–Schunck method.

Also, why was the Horn–Schunck method invented/used when a simpler "Lucas–Kanade method" already exists (Reference)?

",49216,,2444,,12/25/2021 19:25,1/20/2023 1:08,How does Horn–Schunck method for Optical Flow solve the aperture problem?,,1,0,0,,,CC BY-SA 4.0 33881,1,,,12/23/2021 15:56,,2,128,"

My question is about the relevance of concept size to the polynomial-time/example constraints in efficient PAC-learning. To ask my question precisely I must first give some definitions.

Definitions:

Define the input space as $X_n = {\left\{ 0,1\right\} }^n$ and a concept $c$ as a subset of $X_n$. For example, all vectorized images of size $n$ representing a particular numeral $i$ (e.g. '5') collectively form the concept $c_i$ for that numeral. A concept class $C_n$ is a set of concepts. Continuing our example, the vectorized numeral concept class $\left\{ c_0, c_1, \dots c_9\right\}$ is the set of all ten vectorized numeral concepts for a given dimension $n$.

As an extension to include all dimensions we define $\mathcal{C} = \cup_{n \geq 1} C_n$. A hypothesis set $H_n$ is also a fixed set of subsets of $X_n$ (which might not necessarily align with $C_n$) and we define $\mathcal{H} = \cup_{n \geq 1} H_n$.

The following definition of efficient PAC-learnability is adapted from An Introduction to Computational Learning Theory by Kearns and Vazirani.

$\mathcal{C}$ is efficiently PAC-learnable if there exists an algorithm $\mathcal{A}$ such that for all $n \geq 1,$ all concepts $c \in C_n$, all probability distributions $D$ on $X_n$, and all $\epsilon, \delta \in \left(0,1\right)$, the algorithm halts within polynomial time $p{\left( n, \text{size}{\left(c\right)}, \frac{1}{\epsilon}, \frac{1}{\delta}\right)}$ and, with probability at least $1 - \delta$, returns a hypothesis $h \in H_n$ such that $$ \underset{x \sim D}{\mathbb{P}}{\left[ h{\left(x\right)} \neq c{\left(x\right)} \right]} \leq \epsilon.$$

Question:

Now, in the polynomial $p{\left(\cdot, \cdot, \cdot, \cdot \right)}$, I understand the dependence on $\epsilon$ (accuracy) and $\delta$ (confidence). Additionally, I understand why the polynomial should depend on $n$ - the concept of learnability should be invariant to the time burden incurred from increasing the dimension of the input space (e.g. increasing the resolution of the image). What I do not understand is why there is a dependence on the size of the target concept (which I believe is usually taken to mean the smallest encoding of the target concept).

",51737,,2444,,1/10/2022 10:01,10/14/2022 15:03,What is the relevance of the concept size to the time constraints in PAC learning?,,1,0,,,,CC BY-SA 4.0 33883,2,,33848,12/23/2021 21:19,,0,,"

I don't think a time series model necessarily makes sense if you have one conductivity value to predict for each time series.

A regression-like setup makes more sense here: you could model this by letting the vector of time points represent the input. So you'd end up with an $n \times t$ matrix as input to predict the conductivity value.

",9469,,,,,12/23/2021 21:19,,,,0,,,,CC BY-SA 4.0 33884,2,,17942,12/23/2021 23:35,,0,,"

It's been almost two years now, but since I had the same question and still found this post, I will also post how I plan to solve it. Note that I have not implemented or tested this solution yet.

We will think of the input to the neural network in several blocks.

  1. block: the player's hand
  2. block: information about which cards have been played so far
  3. block: other information

Now some more detail:

Block 1:

Represent every card with two input neurons: one for color, one for value. Then make block 1's size as large as it can possibly be. In a game with three players, there will never be more than 20 cards in the player's hand; each one needs two neurons, so we get 40 neurons in block 1 (fewer, if the AI is trained for fewer players). The color can also be split into five neurons instead of one (4 colors + 1 neutral color). A rough sketch of this encoding is given at the end of this answer.

Block 2:

60 neurons that have values 0 or 1 depending on whether the card has been played yet or not.

Block 3:

• Some information about how many tricks the AI should still win. For example, using a single neuron with input value: "Number of tricks to be won by AI" divided by "number of tricks still available"
• Color the AI has to serve -> 1, 4 or 5 neurons

Optional Inputs:

• Value of currently winning card -> 1 neuron
• Color of currently winning card -> 1, 4 or 5 neurons
• Ratios of total points each player has scored so far to the average number of points
• Tricks wanted to tricks available ratio (as above) for the opponents

One could also combine blocks 1 and 2 into 60 neurons with an extra possible state:

  • not played yet -> 0
  • played -> 1
  • in own hand -> -1
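
A minimal Python sketch of how block 1 (the hand) could be encoded as input neurons; the card representation, the normalisation and the padding scheme here are my own assumptions, not a tested design.

def encode_hand(hand, max_hand_size=20):
    # hand: list of (color, value) pairs, with color in 0..4 and value in 1..15.
    # Returns 2 * max_hand_size floats: (color, value) per slot, zero-padded.
    features = []
    for color, value in hand:
        features += [color / 4.0, value / 15.0]  # crude normalisation to [0, 1]
    features += [0.0] * (2 * max_hand_size - len(features))  # pad the empty slots
    return features

# hypothetical usage: a hand of three cards
print(encode_hand([(0, 13), (2, 7), (4, 1)])[:8])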
",51744,,,,,12/23/2021 23:35,,,,0,,,,CC BY-SA 4.0 33885,1,33889,,12/24/2021 1:31,,12,1179,"

In AI literature, deterministic vs stochastic and being fully-observable vs partially observable are usually considered two distinct properties of the environment.

I'm confused about this because what appears random can be described by hidden variables. To illustrate, take an autonomous car (Russel & Norvig describe taxi driving as stochastic). I can say the environment is stochastic because I don't know what the other drivers will do. Alternatively, I can say that the actions of the drivers are determined by their mental state which I cannot observe.

As far as I can see, randomness can always be modeled with hidden variables. The only argument I came up with as to why the distinction is necessary is Bell's inequality, but I don't think that AI researchers had this in mind.

Is there some fundamental difference between stochasticity and partial observability or is this distinction made for practical reasons?

",51745,,2444,,1/7/2022 18:09,1/7/2022 18:09,Is there a fundamental difference between an environment being stochastic and being partially observable?,,2,3,,,,CC BY-SA 4.0 33886,1,33925,,12/24/2021 5:21,,1,36,"

I'm trying to do multi-agent reinforcement learning on the grid world navigation task where multiple agents try to collectively reach multiple goals while avoiding collisions with stationary obstacles and each other. As a constraint, each agent can only see within a limited range around itself.

So, on a high level, the state of each agent should contain both information to help it avoid collisions and information to guide it towards the goals. I'm thinking of implementing the former by including in the agent's state a matrix consisting of the grid cells surrounding the agent, which would show the agent where the obstacles are. However, I'm not sure how to include goal navigation information on top of this matrix. Currently, I just flatten the matrix and append all relative goal locations at the end, and use this as the state.

For example, for a grid world as shown below (0 means empty cell, 1 means agents, 2 means obstacles, and 3 represents goals):

[[0 0 0 0 0 0 2 2 0 0]
 [0 0 0 0 0 0 0 0 0 0]
 [0 0 2 2 0 0 0 0 0 0]
 [0 3 2 2 0 0 0 0 0 2]
 [0 0 0 0 0 0 0 0 0 2]
 [0 0 0 0 1 0 0 0 0 2]
 [2 0 0 0 0 2 2 0 3 0]
 [2 0 0 0 0 2 2 0 0 0]
 [0 0 0 0 0 0 0 0 0 0]
 [0 0 2 0 0 1 0 0 0 0]]

The agent at row5 col4 sees the following cells that are within distance1 around it:

[[0. 0. 0.]
 [0. 1. 0.]
 [0. 0. 2.]]

flattened, the matrix becomes:

[0,0,0,0,1,0,0,0,2]

The location of the goal at row3 col1 relative to the aforementioned agent is (5-3=2, 4-1=3)

The location of the goal at row6 col8 relative to the aforementioned agent is (5-6=-1, 4-8=-4)

So after appending the relative locations, the state of the agent becomes:

[0,0,0,0,1,0,0,0,2,2,3,-1,-4]

(Similar process for the other agent)

Is this a reasonable way of designing the state? My primary concern is that the flattened grid matrix and the relative goal locations need to be handled quite differently, but it can be hard for RL to figure out the difference.

Thanks in advance!

Edit: To validate my concern, I trained an agent using the PG REINFORCE algorithm. As I feared, the agent learned to avoid obstacles, but otherwise just moved randomly without navigating towards the goals.

",51748,,,,,12/27/2021 5:39,How to mix grid matrix and explicit values when designing RL state?,,1,0,,,,CC BY-SA 4.0 33889,2,,33885,12/24/2021 11:50,,12,,"

I think the distinction is made more for conceptual reasons, which has practical implications, so let me review the usual definitions of a stochastic and partially observable environment.

A stochastic environment can be modeled as a Markov Decision Process (MDP) or Partially Observable MDP (POMDP). So, an environment can be

  • stochastic and partially observable
  • stochastic and fully observable

The stochasticity refers to the dynamics of the environment and, more specifically, to how the environment stochastically moves from one state to the other after an action is taken (basically, a Markov chain with actions and rewards). In other words, in a stochastic environment, we have the distribution $p(s' \mid s, a)$ (or, in some cases, the reward is also included $p(s', r \mid s, a)$). If $p(s' \mid s, a)$ gave a probability of $1$ to one of the states and $0$ to all other states, we would have a deterministic environment.

The partial observability refers to the fact that we don't know in which state the agent is, so we can think of having or maintaining a probability distribution over states, like $b(s)$. So, in the case of POMDP, we not only are uncertain about what the next state $s'$ might be after we have taken $a$ in our current state $s$, but we are not even sure about what $s$ currently is.

So, the difference is made so that we can deal with uncertainties about different parts of the environment (dynamics and actual knowledge of the state). Think about a blind guy that doesn't have the full picture (I hope this doesn't offend anyone) and think about a guy that sees well. The guy that sees well still isn't sure about tomorrow (maybe this is not a good example as you can argue that this is also due to the fact that the guy that sees well doesn't know the full state, but I hope this gives you the intuition).

Of course, this has practical implications. For example, it seems that you cannot directly apply the solutions that you use for MDPs to POMDPs. More precisely, for an MDP, if you learn the policy $\pi(a \mid s)$, i.e. a probability distribution over actions given states, if you don't know the state you are in, this policy is quite useless.

To deal with the uncertainty about the state the agent is in, in POMDPs, we also have the concept of an observation, which is the information that the agent gathers from the environment about the current state (e.g., in the example of a blind guy, the observations would be the sounds, touch, etc.), in order to update its belief about the current state. In practice, some people tried to apply the usual RL algorithms for MDPs to POMDPs (see e.g. DQN or this), but they made a few approximations, which turned out to be useful and successful.

If the difference wasn't still clear, just take a look at the equation that can be used to relate the belief state and the transition model (dynamics) of the environment

$$ \underbrace{b^{\prime}\left(s^{\prime}\right)}_{\text{Next belief state}}=\alpha \underbrace{P\left(o \mid s^{\prime}\right)}_{\text{Probability of observation }o \text{ given }s'} \sum_{s} \underbrace{P\left(s^{\prime} \mid s, a\right)}_{\text{Transition model}} \underbrace{b(s)}_{\text{Previous belief state}} $$
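
For concreteness, here is a minimal NumPy sketch of that belief update; the transition model p_trans and observation model p_obs below are placeholder arrays, not part of any specific environment.

import numpy as np

def belief_update(b, a, o, p_trans, p_obs):
    # b: belief over states, shape (S,)
    # p_trans[a, s, s']: P(s' | s, a); p_obs[s', o]: P(o | s')
    predicted = p_trans[a].T @ b              # sum_s P(s' | s, a) b(s), shape (S,)
    unnormalized = p_obs[:, o] * predicted    # multiply by P(o | s')
    return unnormalized / unnormalized.sum()  # alpha is this normalizing constant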

So, in a POMDP, the policy, as stated above, in theory, cannot depend on $s$, but needs to depend on $b(s)$, the belief state, i.e. a probability distribution over states.

If this answer wasn't still satisfactory, although you probably already did it, you should read the section 2.3.2 Properties of task environments of the AIMA book (3rd edition). Their description of stochastic and partially observable environment seems to be consistent with what I wrote here, but maybe their description of a stochastic environment is not fully clear, because they say

If the next state of the environment is completely determined by the current state and the action executed by the agent, then we say the environment is deterministic; otherwise, it is stochastic

The unclear part is completely determined. They should have said deterministically determined (which you can use for a rap song).

However, they later clarify their definition by saying

our use of the word "stochastic" generally implies that uncertainty about outcomes is quantified in terms of probabilities

In addition to that, they call an environment that is either stochastic or partially observable uncertain. It makes sense to do this because uncertainty makes the problems harder, so we can differentiate between certain and uncertain environments.

To be honest, I don't know if there's some kind of mathematical formalism that doesn't differentiate between stochastic or partially observable environments, but I am not sure how useful it might be.

",2444,,2444,,12/25/2021 22:02,12/25/2021 22:02,,,,3,,,,CC BY-SA 4.0 33890,2,,33885,12/24/2021 13:36,,8,,"

A few points I'd like to add (without repeating the info already provided by nbro's answer):


  1. I think you're half-right, in that indeed we can probably always model randomness as hidden information (e.g., as the hidden random seed in a software implementation of an environment). However, the other way around does not work; we can not always model any partially observable environment as a stochastic one!

So we probably have a subset relation here, not an equivalence relation. Personally, I often find stochastic environments to be somehow "easier" to handle than partially observable ones. So, it is generally beneficial to simply treat them as such, rather than unnecessarily casting them into the often-more-difficult format of partially observable environments.


  2. In stochastic (but fully observable) environments, there always exists an optimal deterministic policy, but in partially observable environments an optimal policy may need to be non-deterministic. As a corollary, I would say that this indeed implies that there really is some fundamental difference between the two.

If an environment is just stochastic, but fully observable, non-deterministic policies may still also be optimal, but there is also always guaranteed to be at least one fully deterministic policy; a policy $\pi$ that, for any state $s$ assigns a probability of $\pi(a \mid s) = 1$ to just one action $a$ (and $0$ to any other action for that same state). Sometimes it may be fine to be indifferent (and distribute the probability mass over more than a single action), but this is never strictly required for optimality.

In a partially-observable environment, it is possible that an optimal policy must be non-deterministic. Consider this "drawing" of an environment, consisting of a few squares, with the current position of the agent labelled as $A$, and the position of the goal labelled as $G$. The possible actions are to move either left or right.

$$\square A \square \square G \square \square \square$$

Suppose that this environment is partially observable to the extreme extent that the agent never has any idea where it is, i.e. all states are aliased (look the same to the agent). A deterministic policy would then always go left or always go right, but, depending on whether the agent is currently to the left or to the right of the goal, one of those two deterministic policies would never reach the goal. And the agent has no way at all to tell which one would be the bad policy. The optimal policy in this environment is to simply go left with probability $0.5$, and go right with probability $0.5$. Eventually, the agent will get lucky and end up in the goal position. The partially observable nature of this environment really makes it necessary to follow a stochastic policy.


  3. In partially observable environments, we often consider the possibility that there can be actions that allow us to obtain new information. These are not necessarily actions that directly lead to any rewards or future returns, but really only allow us to observe things that we could not observe before (and hence possibly allow us to, with more certainty, follow a better policy afterwards). This idea does not really exist in fully observable stochastic environments.

  4. In multi-agent environments (consider, for example, many card games), other agents (often opponents but could also be agents with similar goals to our own) may have access to different information/observations from our own agent. This means that, for example through counterfactual reasoning or even explicit communication, we may gain full information or at least update our beliefs with respect to unobservable (to us) parts of the state. For example, based on the actions of an opponent in a card game, we may infer that it is very likely or very unlikely for them to have certain cards, because otherwise they very likely would not have acted in the way that they did. This sort of reasoning does not apply to fully observable stochastic environments.

",1641,,,,,12/24/2021 13:36,,,,9,,,,CC BY-SA 4.0 33891,1,,,12/24/2021 18:17,,0,42,"

The following are the two types are projections that are generally used in image processing

  1. Affine transformation
  2. Projective transformation

Affine transformation is a backbone operation in neural networks also. It is expressed as

$$\mathbf{wx+b}$$ where $\mathbf{w, x, b}$ are matrices. In general, $\mathbf{x}$ is treated as an image in image processing.

The projective transformation is also a type of transformation on images, and it may be different from the affine transformation. I want to know whether it can be represented in terms of a mathematical expression.

If yes, what is the expression for projective transformation?

",18758,,2444,,12/25/2021 11:02,12/30/2021 10:03,What is the expression for projective transformation?,,1,1,,,,CC BY-SA 4.0 33892,1,,,12/24/2021 19:52,,0,71,"

Neural networks consist of many parameters, and researchers can create as many possible neural network structures as they wish. So I want to ask a general question: could we devise an evolutionary algorithm which learns an efficient structure without optimization?

Are there some important works in this area? If we look at sparse neural networks, it seems that there are so many topologies that perform as well as a dense network.

So a single task has many solutions which differ only slightly, and getting rid of optimization for many problems shouldn't be hard at all.

Edit: I'll add some more information. I want to know whether we could find sparse topologies by mutating them (e.g. adding layers and changing the connections) without optimizing the loss function directly.

",35633,,35633,,12/25/2021 9:26,12/25/2021 11:18,Is it possible to find a good neural network structure without training it?,,0,4,,12/25/2021 11:18,,CC BY-SA 4.0 33899,2,,33873,12/25/2021 8:16,,0,,"

This question is studied in recent research: Re-labeling ImageNet: from Single to Multi-Labels, from Global to Localized Labels.

It is quite strange that the community has paid so little attention to this issue.

They have relabeled the original ImageNet with the help of crops and changed the task to a multi-label one.

This strategy turned out to be quite beneficial and improved the accuracy of ResNet50 on the validation set. At that time, 80.2% accuracy with ResNet50 was the best result reported.

",38846,,,,,12/25/2021 8:16,,,,0,,,,CC BY-SA 4.0 33900,2,,33891,12/25/2021 9:54,,1,,"

I explain in this answer what a projective transformation (aka projectivity or homography) is. It's a function $h$ of the form $$h: \mathbb{P}^2 \rightarrow \mathbb{P}^2,$$ where $\mathbb{P}^2$ is a projective space, so, essentially, a 3-dimensional Euclidean space of homogeneous vectors.

You can also represent a homography as a $3 \times 3$ matrix $\mathbf{H}$, so that, when we apply this projective transformation to some input $\mathbf{x} \in \mathbb{P}^2$, we get $\mathbf{x}' \in \mathbb{P}^2$, so we can represent a projective transformation as follows.

$$\mathbf{H}\mathbf{x} = \mathbf{x}'$$

So, basically, a projective transformation is a linear transformation between projective spaces.

You can generalize these ideas to higher-dimensional projective spaces, i.e. $\mathbb{P}^n$.

Although you can represent a projective transformation as a matrix multiplication, there's more to it. In fact, it's a linear transformation, with 8 degrees of freedom, between projective spaces. You can also view a homography as a generalization of other transformations, like isometries, similarities, and affinities. This is explained in more detail in chapter 2 of the book Multiple View Geometry in Computer Vision (2nd edition) by Richard Hartley and Andrew Zisserman.
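
As a small illustration (a sketch of my own, with an arbitrary matrix), this is how you would apply a homography to a 2D point in homogeneous coordinates with NumPy:

import numpy as np

# an arbitrary (invertible) homography matrix; it has 8 degrees of freedom up to scale
H = np.array([[1.0,   0.2,   5.0],
              [0.1,   1.1,  -3.0],
              [0.001, 0.002, 1.0]])

x = np.array([10.0, 20.0, 1.0])               # the point (10, 20) in homogeneous coordinates
x_prime = H @ x                               # H x = x'
x_prime_cartesian = x_prime[:2] / x_prime[2]  # back to inhomogeneous coordinates
print(x_prime_cartesian)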

",2444,,2444,,12/30/2021 10:03,12/30/2021 10:03,,,,0,,,,CC BY-SA 4.0 33901,1,35651,,12/25/2021 11:51,,1,241,"

I am trying to build a CNN model based on the concepts of Contrastive Learning. In specific based on Triplet loss.

I have 5 different class labels and I create triplets such that in a triplet, two images are from the same class and the third one is from another class.

I have a CNN model which takes one input from a triplet at a time and generates its corresponding embedding in 128 dimensions. All three embeddings from a triplet are used for calculating the loss, which is based on the triplet loss.

Further, the loss is backpropagated and training is carried out stochastically.

The idea is to use the trained model to generate one embedding for an input image which can be further used for multi-class classification problems.

My question is, is this method of 3 forward passes and 1 backward pass valid in Tensorflow?

Here is a fragment of my code that I am using for training:

# imports assumed by this fragment
import time
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

def cnn():
    model_input = layers.Input(shape=(112, 112, 3))
    x = layers.Conv2D(filters=16, kernel_size=3, padding='same', name='Conv1')(model_input)
    x = layers.MaxPool2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    x = layers.Conv2D(filters=32, kernel_size=3, padding='same', name='Conv2')(x)
    x = layers.MaxPool2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    x = layers.Conv2D(filters=64, kernel_size=3, padding='same', name='Conv3')(x)
    x = layers.MaxPool2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    x = layers.Conv2D(filters=128, kernel_size=3, padding='same', name='Conv4')(x)
    x = layers.MaxPool2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    x = layers.Conv2D(filters=256, kernel_size=3, padding='same', name='Conv5')(x)
    x = layers.MaxPool2D()(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)

    x = layers.GlobalAvgPool2D(name='GAP')(x)
    output = layers.Dense(128, activation='tanh', name='Dense1')(x)
    origin = tf.zeros_like(output, dtype=float)
    unit_vector = tf.divide(output, tf.sqrt(tf.reduce_sum(tf.square(output-origin)))) # Normalize vector(L2_Norm)
    shared_model = Model(inputs=model_input, outputs=unit_vector)
    shared_model.summary()
    return shared_model
 
def triplets_loss(anchor_sample, positive_sample, negative_sample, alpha=0.2):
    anchor_pos_dist = tf.sqrt(tf.reduce_sum(tf.square(anchor_sample - positive_sample))) # distance between positive pairs
    anchor_neg_dist = tf.sqrt(tf.reduce_sum(tf.square(anchor_sample - negative_sample)))# distance between negative pairs
    triplet_loss = tf.maximum(((anchor_pos_dist - anchor_neg_dist) + alpha), 0.000001) # triplet loss
    return triplet_loss
 
def train(train_data_dir, training_batch=4, lr=1e-4, epochs=100,margin=0.2):
    model = cnn()
    ### creating triplet data loader object ###
    train_data_util_instance = TripletFormulator(data_path_dictionary=train_data_dir, 
    batch=training_batch)
    train_data_array_dict, data_count = train_data_util_instance.data_loader()
    majority_class = max(data_count, key=data_count.get)
    ######
    optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
    for epoch in range(epochs):
        total_train_loss = 0
        start_time = time.time()
        batch_no = 0
        for majority_class_batch in train_data_array_dict[majority_class]:
            batch_loss = 0
            train_batch_dict = {}
            train_batch_dict['A'] = next(iter(train_data_array_dict['A']))
            train_batch_dict['B'] = next(iter(train_data_array_dict['B']))
            train_batch_dict['C'] = majority_class_batch
            train_batch_dict['D'] = next(iter(train_data_array_dict['D']))
            train_batch_dict['E'] = next(iter(train_data_array_dict['E']))
            train_triplets = train_data_util_instance.triplet_generator(train_batch_dict)
            for triplets in train_triplets:
                with tf.GradientTape() as tape:
                     anchor = model(tf.reshape(triplets[0], [-1, 112, 112, 3]))
                     positive = model(tf.reshape(triplets[1], [-1, 112, 112, 3]))
                     negative = model(tf.reshape(triplets[2], [-1, 112, 112, 3]))
                     if np.isnan(anchor).any() or np.isnan(positive).any() or np.isnan(negative).any():
                         print('NAN FOUND')
                     else:
                         loss = triplets_loss(anchor_sample=anchor, positive_sample=positive, negative_sample=negative, alpha=margin)
                     total_train_loss += loss
                     batch_loss += loss
                grads = tape.gradient(loss, model.trainable_weights)
                optimizer.apply_gradients(zip(grads, model.trainable_weights))
            print(epoch, batch_no, batch_loss)
            batch_no += 1
        end_time = time.time()
        print('Training_loss: ', total_train_loss, 'Time_taken: ', end_time - start_time)

When I start training, I can see the training loss converging. But I am confused about the overall idea of the number of forward passes per backward pass. Also, is this method an instance of weight sharing?

I would be very eager to discuss this topic as I do not see many problems similar to this.

In most cases, the CNN model takes multiple inputs and generates multiple outputs and, further, the embeddings are used for binary classification, that is, to decide whether the inputs are the same or not.

I would be waiting for your comments and suggestions.

",51764,,32410,,12/26/2021 7:17,5/25/2022 14:13,Triplet Loss- Three forward pass and one backward pass(Propagation),,2,0,,,,CC BY-SA 4.0 33903,2,,33870,12/25/2021 13:05,,0,,"

As I understand it, the Bellman equation assumes the setting to be deterministic, meaning that, if you're in state $s_t$ as you take action $a_1$, you should always reach the same $s_{t+1}$.

This is not correct, which is a good thing for you. The Bellman equation for action values under an arbitrary policy $\pi(a|s): \mathcal{S} \times \mathcal{A} \rightarrow [0, 1]$, where $\pi(a|s) = \mathbf{Pr}\{A_t=a|S_t=s \}$, looks like this:

$$q_\pi(s,a) = \sum_{r,s'}p(r,s'|s,a)\left(r+\gamma\sum_{a'}\pi(a'|s')q_{\pi}(s',a')\right)$$

The environment model in this equation is $p(r,s'|s,a)$ which is the probability of observing reward $r$ and next state $s'$, given current state $s$ and action $a$. This is stochastic.

When using a model-free approach learning from experience, you cannot use the full equation each time, because you only have one sample and do not know anything about $p()$. However, things will still work, because if you take many samples, on average they will follow the probability distribution for $p()$, and your update rule will converge to the average/expected return for each action value.

The same sampling also applies to the policy, which means the above Bellman equation can be expressed as:

$$q_\pi(s,a) = \mathbb{E}_{A_{t+1} \sim \pi}[R_{t+1} + \gamma q_{\pi}(S_{t+1},A_{t+1})|S_t=s, A_t=a]$$

which in turn means you can use samples of $r, s'$ and $a'$ (or for Q-learning, the maximising $a'$), and an update rule that averages between the values you observe. You can do this with both deterministic and stochastic environments, because the theory behind it works with both.
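
As a minimal sketch (my own illustration, not part of your setup), the tabular Q-learning update that does this sample-based averaging looks like:

def q_learning_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
    # Q: dict mapping (state, action) -> value; one update from one sampled transition.
    q_sa = Q.get((s, a), 0.0)
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
    target = r + gamma * best_next              # sampled Bellman target
    Q[(s, a)] = q_sa + alpha * (target - q_sa)  # move the estimate towards the target
    return Q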

Would it somehow ruin the training, if I then just hardcoded it, such that the next state reverted to state $s_{10}$ if the policy network picked action $a1$? Are there any counter-productive consequences to this?

For some environments, and some setups of the hardcoding, it may work. However, you should not do this because it is not necessary, and in general it is a bad idea, as it may impact the sampling in a way that causes incorrect learning.

",1847,,,,,12/25/2021 13:05,,,,0,,,,CC BY-SA 4.0 33904,1,,,12/25/2021 14:41,,1,426,"

EfficientDet outputs classes and bounding boxes. My question is about both but specifically I am interested in the class prediction net part. In the paper's diagram it shows 2 conv layers. I don't understand the code and how it works. And what's the difference between the 2 conv layers of classification and box prediction?

",51768,,38846,,12/26/2021 19:12,12/26/2021 19:23,How does the classification head of EfficientDet work?,,1,0,,,,CC BY-SA 4.0 33905,2,,33878,12/25/2021 15:38,,0,,"

The estimation of optical flow by the Horn–Schunck method can be written in matrix form as (for the sake of clarity, assume no regularization, that is, $\alpha = 0$)

$$ \left( \begin{array}{ll} I_x^2 & I_xI_y \\ I_xI_y & I_y^2 \\ \end{array} \right) \left( \begin{array}{ll} u \\ v \end{array} \right) = \left( \begin{array}{ll} -I_x I_t \\ -I_y I_t \end{array} \right) $$

where $(u, v)$ is the estimated displacement between two frames.

You can solve this linear system of equations if the matrix on the left is not singular, i.e. its eigenvalues are not zero; this requires, in particular, that the gradients in both directions are not zero. And this is what the aperture problem is about: the case when the matrix is singular.

As an example, consider a black square aligned with the coordinate axes, moving horizontally (along the x axis) on a white background, and suppose you want to estimate the displacement of the square from the optical flow. Furthermore, you do not consider the whole image, but just some rectangular area of the image.

Now, if your rectangle does not contain any of the edges perpendicular to the movement direction, you will not be able to estimate the movement because your gradient estimates will be $I_x = I_y = 0$.
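
A minimal NumPy sketch of this check, where I sum the gradient products over the rectangular area (an assumption beyond the per-pixel notation above, in the spirit of patch-based estimation), and solve for $(u, v)$ only when the matrix is well-conditioned:

import numpy as np

def estimate_flow(Ix, Iy, It, eps=1e-6):
    # Ix, Iy, It: image gradients over the chosen patch (arrays of the same shape).
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    if np.min(np.linalg.eigvalsh(A)) < eps:  # (near-)singular matrix: aperture problem
        return None
    return np.linalg.solve(A, b)             # the estimated displacement (u, v)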

",51436,,51436,,12/25/2021 19:20,12/25/2021 19:20,,,,0,,,,CC BY-SA 4.0 33906,2,,33901,12/25/2021 17:21,,0,,"

I'm not exactly sure what triplet loss is and not sure if I follow your explanation.

If you have a single loss function (i.e. a single scalar number), you have one forward pass and one backward pass. It doesn't matter if there are certain layers that are used multiple times (3 times, presumably) in the forward pass; that just means those layers will also be used 3 times in the backward pass.

",47080,,,,,12/25/2021 17:21,,,,2,,,,CC BY-SA 4.0 33909,2,,32501,12/26/2021 5:15,,0,,"

There is no specific number of labels that is "enough". For simple cases you can start with a few hundred examples, but normally you'll want several thousand.

Since you have a large number of classes your problem might be a harder one, but on the other hand it could be easy if most of your text is like "This merchant is called XXX".

",48143,,,,,12/26/2021 5:15,,,,0,,,,CC BY-SA 4.0 33911,2,,2672,12/26/2021 9:45,,1,,"

There is already an approach similar to the one you describe: federated learning (FL), where local nodes (e.g. mobile and edge devices, but also companies of different sizes) keep the training data locally, so each node might have a different (unbalanced and non-i.i.d.) dataset and model, which then need to be aggregated.

One possible definition of federated learning is

Federated learning is a machine learning setting where multiple entities (clients) collaborate in solving a machine learning problem, under the coordination of a central server or service provider. Each client's raw data is stored locally and not exchanged or transferred; instead, focused updates intended for immediate aggregation are used to achieve the learning objective.

If you are interested in more details, there are already many sources on the topic that you can find on the web, but I would recommend the paper Advances and Open Problems in Federated Learning (2021, by Peter Kairouz et al.) or the Google's article Federated Learning: Collaborative Machine Learning without Centralized Training Data (2017). There are also software libraries for FL, such as TensorFlow Federated (TFF).

However, note that there are other approaches to distributed machine learning/training.

",2444,,,,,12/26/2021 9:45,,,,0,,,,CC BY-SA 4.0 33912,1,33913,,12/26/2021 11:57,,3,115,"

What are knowledge graph embeddings? How are they useful? Are there any extensive reviews on the subject to know all the details? Note that I am asking this question just to give a quick overview of the topic and why it might be interesting or useful, I am not asking for all the details, which can be given in the reference/survey.

",2444,,2444,,12/26/2021 12:04,1/1/2022 18:26,What are knowledge graph embeddings?,,1,0,,,,CC BY-SA 4.0 33913,2,,33912,12/26/2021 11:57,,4,,"

Knowledge graph embeddings (KGE) are embeddings created in the context of a knowledge graph (KG), which can be viewed as a visual/graphical representation of a knowledge base, where nodes are entities (e.g. "a car" or "Lewis Hamilton") and edges are relations between those entities (e.g. "drives"), so a (directed) connection from "Lewis Hamilton" to "a car" through the edge "drives" would represent the fact "Lewis Hamilton drives a car".

A KGE is a vector representation of an entity or relation in the KG that, hopefully, preserves the semantics of the entities and relations. For example, we expect the embedding of the entity "man" to be closer to the embedding of the entity "woman" than to the embedding of a "turtle". The idea of learning a lower-dimensional vector representation of objects that preserves some notion of semantics or meaning between those objects also appears in other AI subfields, like natural language processing (see e.g. word embeddings), or ML applied to software engineering (see code embeddings).

A KGE can be useful because a KG is most likely incomplete, so one of the tasks that you need to solve when using a KG is determining whether a fact is true or false. So, if we denote the fact as a triple $f = \langle s, r, o \rangle$, where $s$ stands for the subject, $r$ for the relation, and $o$ the object, then we want to determine whether this fact $f$ is true or false. In the context of KGEs, this is known as triple classification. Of course, in our KG, we don't have the fact $f$, but maybe we have the entities $s$ and $o$ but there's no edge between them that is labeled with $r$. So, given the embeddings of $s$, $r$, and $o$ (which was previously learned), we can then use a so-called score function to determine the likelihood of this fact. So, KGEs can be used to discover new "likely" knowledge, but it's a different approach than using deduction to derive new knowledge from a set of axioms or facts. It's an inductive/probabilistic approach, which has advantages (e.g. you don't need a deductive system) but also disadvantages (e.g. the fact may not actually be true even if our approach tells us that's the case).

There are many similar/related ways to learn KGEs. Just to give you an idea of how these are learned, a simple approach (TransE) is as follows. You learn the vectors $s$, $r$ and $o$ such that the constraint $s + r = o$ is satisfied. Of course, this constraint only makes sense if $o$ can only go with $s$ and $r$, so this approach is not realistic, as it doesn't model e.g. many-to-many relations. For this reason, people have introduced other approaches (like RotatE or ConvE), which have advantages but also disadvantages. There are also other issues that arise when learning these embeddings, like generating synthetic negative/false facts (i.e. if you have a KG, you only have known true facts, so how do you know if a fact is false? You need examples of false facts!).
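
To make the TransE idea concrete, here is a minimal sketch of its score function; the embeddings below are random placeholders rather than learned ones, and the entity/relation names are just for illustration.

import numpy as np

def transe_score(s, r, o):
    # Higher (less negative) score = more plausible fact, under the constraint s + r ≈ o.
    return -np.linalg.norm(s + r - o)

rng = np.random.default_rng(0)
emb = {name: rng.normal(size=50) for name in ("LewisHamilton", "drives", "car")}
print(transe_score(emb["LewisHamilton"], emb["drives"], emb["car"]))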

The Wikipedia page on KGEs is already quite comprehensive, but there are many other resources on the topic. For example, the survey Knowledge Graph Embedding: A Survey of Approaches and Applications (2017) or the (long but very useful) tutorial Knowledge Graph Embeddings Tutorial: From Theory to Practice by some of the people that are currently doing research on this topic.

",2444,,2444,,1/1/2022 18:26,1/1/2022 18:26,,,,0,,,,CC BY-SA 4.0 33914,2,,31632,12/26/2021 12:27,,1,,"

From my experience with reading papers and books, I think these two terms are sometimes used interchangeably.

As you also point out, an encoder (in an auto-encoder) may also learn some "semantics" of the inputs in order to produce the latent space. However, the way encoders are trained may not produce embeddings with similar properties to e.g. word embeddings. For example, an image of a cat may be mapped to a latent vector that is closer to the latent vector of a person than to the latent vector of a dog (the usual way deterministic autoencoders are trained doesn't enforce these properties).

So, in my head, an encoding may not have any semantics (one-hot encoding is the typical example), but an embedding has. However, again, it depends on the context, so you should take context into account. So, I expect people to use the term encoding to refer to an embedding.

",2444,,2444,,12/26/2021 12:33,12/26/2021 12:33,,,,0,,,,CC BY-SA 4.0 33916,2,,31632,12/26/2021 15:19,,1,,"

Caveat: I am not a native English speaker (I am French), and I am mostly interested in symbolic artificial intelligence (the topic of my PhD thesis, defended in 1990; see the books by Jacques Pitrat).

Encoding is related to decoding. Most of the time, if you encode something A into some other thing B, you can "decode" B to get back A.

Otherwise, you would just say "parsing".

Embedding means mapping something into a "greater" set (or category, e.g. differential manifolds).

Feel free to email me for more explanations.

",3335,,3335,,12/27/2021 14:29,12/27/2021 14:29,,,,0,,,,CC BY-SA 4.0 33917,2,,32501,12/26/2021 18:07,,0,,"

In my experience with NER with Spacy, and disagreeing with this stackoverflow solution and as @polm23 rightly mentioned, a several thousand samples for each entity should generate/predict entities, otherwise spacy would just recognise them based on default spacy entity types (mainly 'work-of-art')

",51787,,51787,,3/10/2022 17:55,3/10/2022 17:55,,,,2,,,,CC BY-SA 4.0 33918,2,,32415,12/26/2021 18:30,,1,,"

Although you don't seem to want to read papers, you should be able to follow the first pages of the following two papers, if you are familiar with the basics of machine learning (ML).

So, after having read these initial pages, you should have an overview of what online learning is. After that, you can dive into the details, if you think that you have the knowledge to understand them. Otherwise, you should probably pick an introductory ML book.

",2444,,2444,,12/26/2021 18:46,12/26/2021 18:46,,,,0,,,,CC BY-SA 4.0 33919,2,,33904,12/26/2021 19:23,,1,,"

The classification head works as follows.

After the stack of BiFPN we have a feature map of size B x C x H x W.

For EfficientDet H and W are 1/8 of the input image size.

Then, for each pixel in this feature map, one applies one convolution to get the bounding boxes. The model predicts n_anchors rescaled and shifted versions of reference boxes. The number of output convolution channels is n_anchors x 4, where the 4 channels per anchor encode the location and scale of each box.

Another convolution predicts the class probabilities for each particular location on the grid. The number of output convolution channels is n_anchors x num_classes, and the outputs are just the logits, as in a classification problem.
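
A rough sketch of the two heads in Keras-like code (this is my own simplified illustration of the idea, not the actual EfficientDet implementation, which uses several depthwise-separable conv layers shared across the BiFPN levels); the shapes and counts below are hypothetical:

import tensorflow as tf

n_anchors, num_classes = 9, 90
feat = tf.keras.Input(shape=(64, 64, 160))  # one BiFPN feature map (hypothetical shape)

# classification head: per-location, per-anchor class logits
cls_logits = tf.keras.layers.Conv2D(n_anchors * num_classes, 3, padding="same")(feat)
# box head: per-location, per-anchor offsets/scales w.r.t. the reference boxes
box_deltas = tf.keras.layers.Conv2D(n_anchors * 4, 3, padding="same")(feat)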

You have not mentioned the code you're looking at. But I recommend to have a look at rwightman's implementation (if you are familiar with PyTorch).

",38846,,,,,12/26/2021 19:23,,,,0,,,,CC BY-SA 4.0 33920,2,,32415,12/26/2021 19:28,,0,,"
Duplicate of answer previously posted on DataScience.SE:

On-line learning algorithms trains new data as it arrives. It is often referred to as incremental learning or continuous learning as it trains continuous stream of data incrementally

As requested some resources in the form of books, tutorial, lecture notes, YouTube links, pdf documents along with available packages that support online learning algorithms are mentioned

BOOKS

TUTORIAL

LECTURE

YOU TUBE

PDF

ONLINE LEARNING ALGORITHMS

",48391,,1641,,12/28/2021 18:02,12/28/2021 18:02,,,,4,,,,CC BY-SA 4.0 33923,1,,,12/27/2021 0:52,,0,106,"

I am confused among the following in selecting the batch size for my model.

#1: powers of 2

I generally see that batch sizes are in powers of two: 32, 64, 128, 256.

#2: maximum GPU

Suppose my GPU allows a maximum batch size of 61, which is not a power of two.

Which one should I opt for? Is it true that powers of 2 will give relatively better results?

",18758,,2444,,12/27/2021 9:06,12/28/2021 16:48,Is it true that batch size of form $2^k$ gives better results?,,2,9,,,,CC BY-SA 4.0 33924,1,33933,,12/27/2021 4:25,,3,83,"

I'm training a Tensorflow model that receives an image and segments the image into foreground and background. That is, if the input image is w x h x 3, then the neural network outputs a w x h x 1 image of 0's and 1's, where 0 represents background and 1 represents foreground.

I've computed that about 75% of the true mask is background, so the neural network simply learns to output all 0's and gets 75% accuracy.

To solve this, I'm thinking of implementing a custom loss function that checks if there are more than a certain percentage of 0's, and if so, to add a very large number to the loss to disincentivize the all 0's strategy.

The issue is that this loss function becomes non-differentiable.

Where should I go from here?

",51795,,2444,,12/28/2021 8:52,12/28/2021 8:52,Custom Tensorflow loss function that disincentivizes all black pixels,,1,0,,,,CC BY-SA 4.0 33925,2,,33886,12/27/2021 5:39,,1,,"

I didn't find a way to improve state design, but I did find a workaround in making my PG network modular.

I simply separated my PG network into two parts -- one taking in just the flattened grid matrix part from the aforementioned state, and the other taking in just the relative goal locations. Then I concatenated the outputs from the two sub-networks and passed them through a softmax layer to get the final policy.

If you want more details you can check out my codes here (The relevant codes are in MARL_PolicyGradient.py, MARL_env.py, and MARL_networks.py). Good luck!

",51748,,,,,12/27/2021 5:39,,,,0,,,,CC BY-SA 4.0 33926,1,,,12/27/2021 7:00,,1,36,"

I need to use the automatic caption from Youtube to precisely isolate excerpts from the video aligned to text and generate the dataset to train a model in French.

So I've already written the script, but when I compare the audio with the matching text, I noticed that the text is often delayed (positive or negative). For example, the text reads "1 2 3 4" and the audio says "0 1 2 3" ("0" comes from the previous clip).

If you have a look at a Youtube video in French, when you click on "open transcript", you can also notice this delay.

Here is an example that is very noticeable on short clips: The audio says "conditions de travail" whereas the transcript reads "de travail".

I measured the delay in Audacity and it is not consistent across the clips. Please note that it does not seem to happen in English videos.

If I use Google Speech Recognition in Python (recognize_google) on audio clips, there are no such delays (also because the clips are already separated) but the punctuation is missing which is not good for training my model.

Why can't Google align more accurately the audio and the text (caption)?

Can you suggest a better way of aligning audio with text?

",51797,,32410,,12/27/2021 21:15,12/27/2021 21:15,How to align or synchronize Youtube caption with audio accurately,,0,0,,,,CC BY-SA 4.0 33927,1,33930,,12/27/2021 7:05,,1,1342,"

In the data preparation phase, we have to divide the dataset into two parts: the training dataset and the test dataset.

I have seen this post regarding the time complexity for training a model.

However, I couldn't find any good source for the time complexity for testing a model, specifically, a stacked LSTM model (with 1 input, 3 layers, 4 LSTM units per LSTM layer, 1 output, sequence length 18, and batch size 32 for the MSE loss), based on the test set, e.g. the computation of the accuracy/loss on the test set given $N$ test examples.

Is there any source to check that?

",51798,,51798,,12/27/2021 10:35,12/27/2021 13:03,What is the time complexity for testing a stacked LSTM model?,,1,0,,,,CC BY-SA 4.0 33928,2,,33923,12/27/2021 8:46,,4,,"

The choice of a batch size that is a power of 2 is not due to the quality of predictions.

The larger the batch size is, the better the estimate of the gradient, but some noise can be beneficial to escape local minima. However, there won't be much difference in the optimization procedure between batch_size=61 and batch_size=64, since the amount of stochasticity would be of the same order of magnitude.

The actual reason, as explained here, is the alignment of virtual and physical processors. Usually, the number of physical processors is some multiple of a power of two (the number of CUDA streaming multiprocessors).

If the number of virtual processors is a multiple of a power of two, then each physical processor would be responsible for the same number of virtual processors. Otherwise, some of them would stay idle.

So the reason for batch sizes to be powers of 2 is the efficient utilization of the GPU.

",38846,,38846,,12/27/2021 19:54,12/27/2021 19:54,,,,1,,,,CC BY-SA 4.0 33930,2,,33927,12/27/2021 11:22,,2,,"

The time complexity of an algorithm always depends on its implementation (e.g. searching in a red-black tree has a different time complexity than searching in an unbalanced binary search tree). This also applies to the case of computing the time complexity of the algorithm that tests a neural network with multiple LSTM layers, so one may need to assume how an algorithm or data structure is implemented in order to provide the right time complexity.

Now, note that testing a neural network typically means performing a forward pass. So, basically, if you have the time complexity for the forward pass, you are almost done. In this answer, I provide details of how you can compute the time complexity of the forward pass of a feedforward neural network with no recurrent layers, so you can read it in order to have an idea of how you might do this in general.

Now, in the case of a neural network with LSTM layers, we also need to take into account the recurrent connections in the LSTM units. This would also be the case for the vanilla RNN units, which are simpler than LSTM units, so the time complexity is smaller.

I will assume that the equations used to perform the forward pass of an LSTM layer are the ones given in the PyTorch documentation

$$ \begin{aligned} i_{t} &=\sigma\left(W_{i i} x_{t}+b_{i i}+W_{h i} h_{t-1}+b_{h i}\right) \\ f_{t} &=\sigma\left(W_{i f} x_{t}+b_{i f}+W_{h f} h_{t-1}+b_{h f}\right) \\ g_{t} &=\tanh \left(W_{i g} x_{t}+b_{i g}+W_{h g} h_{t-1}+b_{h g}\right) \\ o_{t} &=\sigma\left(W_{i o} x_{t}+b_{i o}+W_{h o} h_{t-1}+b_{h o}\right) \\ c_{t} &=f_{t} \odot c_{t-1}+i_{t} \odot g_{t} \\ h_{t} &=o_{t} \odot \tanh \left(c_{t}\right) \end{aligned} $$
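To make the shapes concrete before counting operations, here is a minimal NumPy sketch of a single LSTM step following these equations (the sizes and the zero biases are illustrative assumptions, not the actual PyTorch internals):

import numpy as np

d, h = 16, 32                      # input size and hidden size (illustrative)
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x_t = rng.normal(size=d)           # input vector x_t
h_prev = np.zeros(h)               # previous hidden state h_{t-1}
c_prev = np.zeros(h)               # previous cell state c_{t-1}

# one weight matrix per gate, for the input and the recurrent connections
W_ii, W_if, W_ig, W_io = (rng.normal(size=(h, d)) for _ in range(4))
W_hi, W_hf, W_hg, W_ho = (rng.normal(size=(h, h)) for _ in range(4))
b = {k: np.zeros(h) for k in ("ii", "hi", "if", "hf", "ig", "hg", "io", "ho")}

i_t = sigmoid(W_ii @ x_t + b["ii"] + W_hi @ h_prev + b["hi"])   # input gate
f_t = sigmoid(W_if @ x_t + b["if"] + W_hf @ h_prev + b["hf"])   # forget gate
g_t = np.tanh(W_ig @ x_t + b["ig"] + W_hg @ h_prev + b["hg"])   # cell candidate
o_t = sigmoid(W_io @ x_t + b["io"] + W_ho @ h_prev + b["ho"])   # output gate

c_t = f_t * c_prev + i_t * g_t     # element-wise operations, O(h) each
h_t = o_t * np.tanh(c_t)

print(h_t.shape, c_t.shape)        # (32,) (32,)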

Now, let's break these operations down. Note that the dimensionality and type of the objects (matrices and vectors) are important. I will start with the computation of $i_{t}$ (the output of the input gate). The computation of $f_{t}$ (forget gate), $g_{t}$ (cell) and $o_{t}$ (output gate) is the same, in terms of time complexity.

$$i_{t} =\sigma\left(W_{i i} x_{t}+b_{i i}+W_{h i} h_{t-1}+b_{h i}\right)$$

  • $W_{i i} \in \mathbb{R}^{h \times d}$ is the matrix that represents the forward connections for the input gate
  • $x_{t} \in \mathbb{R}^d$ is the input vector with $d$ entries (this could be e.g. the embedding of a word)
  • $b_{i i} \in \mathbb{R}^h$ is the bias vector for these forward connections
  • $W_{h i} \in \mathbb{R}^{h \times h}$ is the matrix for the recurrent connections; note that this matrix has a different dimensionality than $W_{i i}$
  • $h_{t-1} \in \mathbb{R}^h$ is the hidden state
  • $b_{h i} \in \mathbb{R}^h$ is the bias vector for the recurrent operations

Here is an estimate of the time complexities of the operations inside the non-linearity

  • $W_{i i} x_{t}$ has time complexity $\mathcal{O}(h d)$ (matrix-vector multiplication)
  • $W_{i i} x_{t}+b_{i i}$ has time complexity $\mathcal{O}(h)$ (sum of two vectors, both with size $h$)
  • $W_{h i} h_{t-1}$ has time complexity $\mathcal{O}(h^2)$
  • $W_{h i} h_{t-1}+b_{h i}$ has time complexity $\mathcal{O}(h)$

Now, let $N = \max(d, h)$, then the time complexity of the operations inside the non-linearity is $\mathcal{O}(hN)$. More precisely, it would be

$$\mathcal{O}(h d + h + h^2 + h) = \mathcal{O}(h d + 2h + h^2) = \mathcal{O}(h(d + 2 + h)).$$

Actually, in this case, as I explain in the other answer, you could use $\Theta(hN)$ because this is an upper and lower bound in the limit.

Now, what is the time complexity of $\sigma$? It depends on what $\sigma$ is. If we assume it's a ReLU, then you perform basically the following operation

$$\displaystyle \sigma(x) = \max(0,x)$$

Let's assume that $\max(0,x)$ has a constant time complexity, so, if you apply the ReLU element-wise, you get a time complexity of $h$.

So, basically, the computation of $i_t$ has time complexity $\mathcal{O}(hN)$. The same applies to $f_{t}$ (forget gate), $g_{t}$ (cell) and $o_{t}$ (output gate), as stated above, so you could multiply this time complexity by $4$, but constant factors don't affect the asymptotic time complexity. However, that is only true in theory, i.e. in the limit; in practice, constant factors do matter, so you probably want to include them. So, the time complexity, let's say, is

$$\mathcal{O}(4h(d + 2 + h))$$

so far (ignoring that there's a tanh there: for simplicity, I assume it has the same time complexity as the ReLU, even though that may not be the case: you can work out the details!)

Now, we have the following operation

$$c_{t} =f_{t} \odot c_{t-1}+i_{t} \odot g_{t},$$

where

  • $c_t \in \mathbb{R}^h$ is the cell vector
  • $\odot$ is an element-wise multiplication

So, this operation has time complexity

$$\mathcal{O}(2 h)$$

Similarly, the operation

$$h_{t} =o_{t} \odot \tanh \left(c_{t}\right)$$

has time complexity

$$\mathcal{O}(2 h)$$

One $h$ for the tanh and the other for the element-wise multiplication.

So, overall, the forward pass of a single LSTM layer has time complexity

$$\mathcal{O}(4h(d + 2 + h) + 4h) = \mathcal{O}(4h(d + 3 + h))$$

Now, you can work out the time complexity for multiple LSTM layers stacked one after the other. Just make sure that you understand that what is passed to the successive layer is not the input $x_t$ but the output of the previous layer.
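As a rough sanity check of the accounting above, here is a small Python sketch (my own back-of-the-envelope estimate, not an exact FLOP count) that multiplies the per-step cost $4h(d + 2 + h) + 4h$ by the number of layers, time steps and batch elements, using the configuration in the question as an example:

def lstm_layer_ops(d, h):
    """Rough operation count for one step of one LSTM layer:
    4 gates, each costing h*d + h + h*h + h, plus the cell/hidden updates."""
    per_gate = h * d + h + h * h + h      # matrix-vector products and vector sums
    gates = 4 * per_gate                   # i, f, g, o
    cell_and_hidden = 4 * h                # c_t and h_t element-wise ops (incl. tanh)
    return gates + cell_and_hidden

def stacked_lstm_ops(input_size, hidden_size, n_layers, seq_len, batch_size):
    ops = 0
    d = input_size
    for _ in range(n_layers):
        ops += lstm_layer_ops(d, hidden_size)
        d = hidden_size                    # the next layer consumes the previous layer's output
    return ops * seq_len * batch_size

# e.g. 1 input feature, 4 units per layer, 3 layers, sequence length 18, batch size 32
print(stacked_lstm_ops(1, 4, 3, 18, 32))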

In the end, you will also have to add the complexity of computing the metric you're interested in, which can be the loss or the accuracy. If you understood my reasoning above, it should not be difficult to do that. The same applies to the case of a batch size and sequence length greater than 1.

",2444,,2444,,12/27/2021 13:03,12/27/2021 13:03,,,,0,,,,CC BY-SA 4.0 33933,2,,33924,12/27/2021 14:57,,2,,"

The background being an unbalanced class is a well-known problem in image segmentation. Before digging into custom losses, you should take a look at existing ones that address this specific issue, like the Dice loss or the Focal loss, the latter being more tunable since it has an extra hyperparameter that can be optimized. You can easily find TensorFlow implementations of both on GitHub.
For a more detailed comparison and references to other similar losses, you can also check this paper.
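For instance, here is a minimal TensorFlow sketch of a soft Dice loss for binary segmentation (one common formulation; the smoothing constant and the per-sample flattening are my own illustrative choices):

import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1e-6):
    """Soft Dice loss for binary masks: 1 - 2|X intersect Y| / (|X| + |Y|)."""
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [tf.shape(y_true)[0], -1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [tf.shape(y_pred)[0], -1])
    intersection = tf.reduce_sum(y_true * y_pred, axis=1)
    denom = tf.reduce_sum(y_true, axis=1) + tf.reduce_sum(y_pred, axis=1)
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    return 1.0 - tf.reduce_mean(dice)

# usage: model.compile(optimizer="adam", loss=dice_loss)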

",34098,,,,,12/27/2021 14:57,,,,1,,,,CC BY-SA 4.0 33934,2,,33923,12/27/2021 19:03,,1,,"

The main reason to use powers of 2 lies in the way existing hardware and software are made; there isn't any purely mathematical reason. CPUs, GPUs, memory, and internal buses all use sizes that are powers of 2, since that's the most efficient way to address them.

",51590,,32410,,12/28/2021 16:48,12/28/2021 16:48,,,,0,,,,CC BY-SA 4.0 33935,1,,,12/27/2021 21:03,,1,52,"

Training all layers of a CNN simultaneously is standard practice today. It is found in every CNN (AlexNet (2012), VGG, Inception, GANs, etc.) and even in pre-CNN networks such as Le et al. 2012.

What is the advantage of training all the layers simultaneously? Wouldn't the later layers be learning from poor lower layers to start with, and have to re-learn to adapt? And why would there ever be an advantage for an autoencoder like Le et al. 2012 where there is no backpropagation to communicate from the later layers to the earlier layers?

I think the conventional answer is that the lower layers can actually learn to provide low-level features that support the layers above. An example of this is learning to detect a horizontal yellow-blue feature to detect the water line in a beach scene.

But couldn't the yellow-blue feature be found just as easily by training the lower layers first? This would be especially true of an autoencoder such as Le et al. 2012, which picks up on patterns in the training set without having ground-truth labels to group them.

Citations to experiments or theoretical work that directly answers this question would be appreciated!

This is a follow-on to an earlier question.

",15487,,,,,1/22/2023 0:09,Why is training all layers at a time effective for a multi-layer autoencoder?,,1,0,,,,CC BY-SA 4.0 33936,2,,33935,12/27/2021 21:03,,0,,"

I suspect there are two principles at play here, but I don't know which principle is more important in this case.

The first principle is the conventional wisdom: the lower layers not only learn to summarize the visual features found in the training set, but also learn those features which allow discrimination between categories. For example, in a beach image, there are likely many subtle yellow-vs-yellow or blue-vs-blue features that are technically part of the beach image. But where we see the horizon line against the sky may be a stronger indication of what this picture is of. All the features would be important for reconstructing an image of the beach, but the horizon line may be more important for distinguishing the beach from, say, a picture of the sun in the sky. In a CNN like AlexNet, the horizon line would receive more weight early on in training.

But I don't see how this first principle would apply in the case of an autoencoding network like Le et al. 2012

The second principle is that random weights actually have some value from the very beginning of training, and those features which are useful early in training for a particular high-level decision will become even more useful later in training as they pick up patterns from the images to which they relate.

But I again don't see how this applies to an autoencoder like Le et al.

",15487,,,,,12/27/2021 21:03,,,,0,,,,CC BY-SA 4.0 33938,1,,,12/27/2021 23:09,,1,23,"

Why is Acme using its own initializer for both tanh and ELU, when the commonly used initializer for tanh is Xavier and for ELU is the He initializer? What mathematics is behind them?

Here is the code.

import tensorflow as tf  # import added for context; not part of the quoted Acme snippet

uniform_initializer = tf.initializers.VarianceScaling(
    distribution='uniform', mode='fan_out', scale=0.333)
",51813,,32410,,12/28/2021 2:09,12/28/2021 2:09,Why Acme is using own uniform initializer?,,0,0,,,,CC BY-SA 4.0 33939,1,,,12/28/2021 0:17,,0,304,"

I am trying to develop a "simple" announcer for sports segments that mainly consists of events like goals, fouls, substitutions, and many other events that could happen in many sports. The idea is that I already have key info like the player who does the action, the location in the court, the time that it takes place, and more extra info. I also have information like the sport being played and the type of event, so this task is purely focused on NLG.

The naïve idea that I had was to scrape soccer commentaries, which can be found on many websites, and extract the key info from each commentary; the extracted key info acts as the model's input and the original commentary as the ground-truth output, i.e.:

['football', 'goal', 'Bob', 'fourth minute'] -> Goal! that was a nice goal from Bob in the fourth minute of the match.

I would say that the first two words are used to steer the model towards generating phrases appropriate for the sport (a cricket player kicking the ball doesn't make sense).

The comments generated by the fine-tuned model on this input-output are acceptable.

The problem is that I have to build a dataset for many sports (or at least 2 with the same quality as soccer ones like in Flashscore) and I can't find any.

I have also been looking for Plug and Play methods to generate sentences.

What do you think? Is fine-tuning a must-do in this situation or can it be thought of in another way?

",51815,,32410,,12/28/2021 22:29,12/28/2021 22:29,Generating automatic sports commentary (NLG),,1,3,,,,CC BY-SA 4.0 33941,1,,,12/28/2021 8:55,,1,47,"

I would like the community to help me understand whether the following example would be better represented as an episodic or a continuous task; this will help me structure the problem and choose the right RL algorithm.

The agent starts with an initial score x of, let's say, 100. The agent's objective is to maximise its score. There is no upper bound! Theoretically, the agent can get a score up to infinity, and there is no termination based on the number of steps, so the agent could play forever. However, the score can't be negative, and if the agent gets to a score of zero, the episode should terminate and the environment reset. I am undecided about the best representation, because if the agent learns how to play, the episode would never terminate, and the agent would theoretically play forever; however, if the score gets to zero, there is no way for the agent to continue playing, so the environment needs to reset. Thank you.

",51821,,,,,12/28/2021 8:55,Should I represent my reinforcement learning as an episodic or continuous task?,,0,3,,,,CC BY-SA 4.0 33943,2,,33939,12/28/2021 11:34,,2,,"

Since sport commentaries are a fairly restricted domain, and the language does not vary much, I would go for a canned text approach.

Analyse what kind of events you get, and what variables you're dealing with. Then write some template sentences with placeholders for the variables. The more you write for the same data, the more varied your text will be. You could then structure them by pre-conditions, such as a goal that equalises a previous goal, as you will want to say something different than if it was a repeated goal by the same team.
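A minimal sketch of what I mean (the event fields and the templates are made up for illustration):

import random

templates = {
    ("football", "goal"): [
        "Goal! {player} finds the net in the {time}.",
        "{player} scores for {team} in the {time} -- what a strike!",
    ],
    ("football", "substitution"): [
        "Change for {team}: {player} comes on in the {time}.",
    ],
}

def render(event):
    """Pick a random template matching (sport, event type) and fill the slots."""
    options = templates[(event["sport"], event["type"])]
    return random.choice(options).format(**event)

print(render({"sport": "football", "type": "goal",
              "player": "Bob", "team": "Blue FC", "time": "fourth minute"}))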

You could probably look at interactive fiction for tools, such as Twine -- effectively, you treat your sporting event like an interactive story that is driven by what happens in the event you are describing.

You will have the effort of writing the templates, but in return your output text will be of higher quality, and you have more control over what is generated than if you were using a machine learning approach.

",2193,,,,,12/28/2021 11:34,,,,4,,,,CC BY-SA 4.0 33947,1,,,12/28/2021 15:22,,0,41,"

I want to use a Deep Q-Network for a specific problem. My immediate rewards ($r_t = 0$) are all zeros. But my terminal reward is a large positive value $(r_T=100$). How could I normalize rewards to stabilize the training? I think clipping rewards to be in range $[0,1]$ makes training harder because it just forces most values to be near zero.

",35633,,35633,,12/28/2021 15:30,12/28/2021 15:30,How to normalize rewards in DQN?,,0,2,,,,CC BY-SA 4.0 33948,2,,32727,12/28/2021 20:01,,1,,"

In case anyone is struggling with the same problem: it seems that these were simply typos in the original paper. I downloaded the author's framework, Darknet, as well as the configuration and weight files for YOLOv1.

Then, the architecture can be tested with one sample image using this command:

./darknet yolo test cfg/yolov1/yolo.cfg yolov1.weights data/person.jpg

The output of this command also prints the full architecture of the net. There, I see the expected depth values after the first max layer (depth of 64) and the second max layer (depth of 192):

   layer   filters  size/strd(dil)      input                output
       0 Create CUDA-stream - 0 
     Create cudnn-handle 0 
    conv     64       7 x 7/ 2    448 x 448 x   3 ->  224 x 224 x  64 0.944 BF
       1 max                2x 2/ 2    224 x 224 x  64 ->  112 x 112 x  64 0.003 BF
       2 conv    192       3 x 3/ 1    112 x 112 x  64 ->  112 x 112 x 192 2.775 BF
       3 max                2x 2/ 2    112 x 112 x 192 ->   56 x  56 x 192 0.002 BF

So it seems clear to me that the 192 and 256 depth values after the first and second max layer on the figure in the question are just typos.

",51503,,,,,12/28/2021 20:01,,,,0,,,,CC BY-SA 4.0 33952,1,33956,,12/29/2021 3:55,,-1,139,"

The question looks foolish, but I think cross-entropy is somewhat weird as a cost function.

As a cost function for linear regression, the mean square error $ \sum_{i=1}^{n} (y_i - (ax_i+b)) ^2$ seems quite reasonable, because it literally/directly measures the error between real value and predicted value.

However, in the case of the cross-entropy, I do not understand what it is.

For multi-class classification, for example, with 3 classes, the true target is $[ 0, 0, 1 ]$, while the output of the model is $[ 0.2, 0.3, 0.5 ]$ (maybe with a softmax activation at the last layer). So, the error of it is: $C(x) = -(0*log(0.2) + 0*log(0.3) + 1*log(0.5))$.

It looks... I don't know, why is it an "error?" How can it be updated with backpropagation?

Also, what is the objective of it? Maybe optimization, so maybe minimizing error? Then what happens?

",50119,,2444,,12/29/2021 11:17,12/30/2021 8:57,Why is the cross-entropy a cost function?,,1,0,,,,CC BY-SA 4.0 33954,1,,,12/29/2021 7:37,,1,35,"

A neural network model needs a loss function for training. The neural network needs to minimize the loss function.

A neural network is evaluated after training using a metric. The neural network needs to either minimize or maximize the metric depending on the context.

Suppose $L$ is the loss function used and $M$ is the metric/evaluation function. Assume the metric needs to be minimized and is calculated based on the output of the neural network. We can use $L+M$ as the loss function. It looks to me that it may be up to the designer's choice to use a certain function $f$ for either $L$ or $M$, as both try to quantify how well or badly the model is working.

But, if we observe the literature, there are certain standard loss functions and fixed evaluation metrics depending on the underlying task.

With this context, what is the inherent quality of the function that makes it treated as either loss or evaluation metric?

",18758,,,,,1/5/2022 22:48,What inherent quality of a function makes it treated as either loss or evaluation metric?,,1,0,,,,CC BY-SA 4.0 33955,1,,,12/29/2021 8:09,,1,69,"

I'm working on yet another NEAT implementation for a personal project, and I feel like I'm missing something about the proposed solution to the Competing Conventions problem.

Here's what I'm assuming:

  • Each new connection gene yields a new innovation number.

The new connection gene created in the first mutation is assigned the number 7, and the two new connection genes added during the new node mutation are assigned the numbers 8 and 9.

  • If a connection from X to Y appears more than once at the same generation, it receives the same innovation number.

However, by keeping a list of the innovations that occurred in the current generation, it is possible to ensure that when the same structure arises more than once through independent mutations in the same generation, each identical mutation is assigned the same innovation number.

  • Every new node receives a globally new id. (I have no source for this. Maybe the problem is here)
  • Equal fitness is assumed in the example below, so all genes are randomly inherited (in this case the probability is 100%, for demonstration purposes).

In this case, equal fitnesses are assumed on the example below, so the disjoint and excess genes are also inherited randomly.

So we have two genomes after two generations, which underwent the exact same mutations at each generation: add_connection(0,2) at gen 1, add_node(1) at gen 2.

Even though the topology is exactly the same for both, the genomes don't share innovation numbers, so the crossover yields a more complex topology. It seems to me that using global ids for nodes breaks the historical tracking of connections, which blows up the innovation counter pretty fast (around 1400 innovations in 100 gens for 100 individuals), and yields big, non-functional networks.

What am I missing?

",51827,,51827,,12/31/2021 9:17,12/31/2021 9:17,NEAT: How to properly handle Node IDs and avoid Competing Conventions?,,0,0,,,,CC BY-SA 4.0 33956,2,,33952,12/29/2021 9:49,,2,,"

Optimizing the cross-entropy is equivalent to optimizing the log-likelihood of the parameters given the data, $\ell(\theta)$, which is what we want, i.e. find the parameters that most likely generated the data.

So, the likelihood is defined as $$\mathcal{L}(\theta) = P(y \mid x; \theta),$$ i.e. a function of the parameters $\theta$.

The log-likelihood is just the logarithm of the likelihood

$$\ell(\theta) = \log \mathcal{L}(\theta)$$

To understand why we take the logarithm, read this answer.

Now, for a fixed $\hat{\theta}$, $\mathcal{L}(\hat{\theta}) = P(y \mid x; \hat{\theta})$ is a conditional probability distribution.

For simplicity, let's assume that $\mathcal{L}(\hat{\theta}) = P(y \mid x; \hat{\theta})$ is a (conditional) Bernoulli distribution, which is defined by a single parameter $\hat{p}$, whose probability mass function (pmf) is defined as

$$\hat{p}^y (1-\hat{p})^{1-y},$$

where $y = \{0, 1\}$.

Now, in the context of neural networks, we use a neural network to output $\hat{p}$, i.e. an estimate of the parameter of the Bernoulli, which we plug into the cross-entropy loss function. So, if $f_\theta: \mathcal{X} \rightarrow [0, 1]$ is your neural network, then $f_{\hat{\theta}}(x) = \hat{p}$, given the input-output pair $(x, y)$.

So, the Bernoulli pmf for a given pair $(x, y) \in \mathcal{D}$ can be written as

\begin{align} P(y \mid x; \hat{\theta}) &= \hat{p}^y (1-\hat{p})^{1-y} \\ &= f_{\hat{\theta}}(x)^y (1-f_{\hat{\theta}}(x))^{1-y} \end{align} So, without fixing the parameters, we can write the likelihood (which is not a probability distribution wrt $\theta$) as follows

\begin{align} \mathcal{L}(\theta) &= f_{\theta}(x)^y (1-f_{\theta}(x))^{1-y} \\ \iff \ell(\theta) &= \log \left( f_{\theta}(x)^y (1-f_{\theta}(x))^{1-y}\right) \\ &= \log f_{\theta}(x)^y + \log \left(1-f_{\theta}(x)\right)^{1-y} \\ &= y \log f_{\theta}(x) + (1-y)\log \left(1-f_{\theta}(x)\right) \end{align}

which is our well-known binary cross-entropy for a single pair $(x, y)$ (note that only one of the addends above is not zero). For multiple pairs, you can apply the same reasoning, and you will see why we also use the log, i.e. it will allow you to simplify a few expressions in the derivation (specifically, multiplications will turn into sums, which is nice numerically).
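To see that this is exactly the quantity the usual loss functions compute, here is a quick NumPy check on a toy pair (the values are arbitrary):

import numpy as np

y, p_hat = 1, 0.8                     # true label and network output f_theta(x)

log_likelihood = y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat)
bce = -log_likelihood                 # frameworks minimise the negative log-likelihood

print(log_likelihood, bce)            # -0.2231...  0.2231...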

So, if we maximize $\mathcal{L}$, with respect to $\theta$ (e.g. the parameters of your neural network), then we will find an estimate $\hat{\theta}$, that produces the parameter $\hat{p}$, such that this parameter can be used to construct/define the Bernoulli distribution that most likely (hence the name likelihood) produced our given dataset $\mathcal{D}$.

This reasoning also applies to the case where $P$ is categorical or even Gaussian distribution. So, actually, when you're minimizing the MSE, you're actually also optimizing the log-likelihood. It just turns out that the expression is different because Gaussians are not Bernoullis.

I will not talk about the back-propagation, but it's just the chain rule. If the log-likelihood is differentiable, then you can use it in the context of back-propagation and GD. You can ask another question, if you're interested, but note that, in the general case, computing the partial derivatives can be a pain in the neck, so your question may be too broad. Find a very simple case.

To conclude, don't dwell on the word error. Think in terms of probabilities, probability distributions, likelihoods, and why we can optimize likelihoods. Read my answer multiple times, if you didn't understand something.

",2444,,2444,,12/30/2021 8:57,12/30/2021 8:57,,,,2,,,,CC BY-SA 4.0 33957,1,,,12/29/2021 10:58,,1,74,"

I am reading the paper A Contextual-Bandit Approach to Personalized News Article Recommendation, where it refers to the $\epsilon$-greedy (disjoint) algorithm. I suspect that it is just a version of the K-armed bandit with regressors that estimate the average reward for each arm. However, I cannot find a description of this algorithm in the literature (papers, books, or other resources).

",2254,,2444,,12/29/2021 11:01,12/29/2021 11:01,Is there a paper/article on contextual $\epsilon$-greedy algorithm?,,0,0,,,,CC BY-SA 4.0 33963,1,,,12/29/2021 17:17,,2,103,"

I'm implementing Watkins' Q(λ) algorithm with function approximation (in the 2nd edition of Sutton & Barto). I am very confused about updating the eligibility traces because, at the beginning of chapter 9.3 "Control with Function Approximation", they are updated considering the gradient: $ e_t = \gamma \lambda e_{t-1} + \nabla \widehat{q}(S_t, A_t, w_t) $, as shown below.

[image: excerpt from the book showing this eligibility trace update with the gradient term]

Nevertheless, in Figure 9.9, for the exploitation phase the eligibility traces are updated without the gradient: $ e = \gamma \lambda e $.

[image: excerpt from Figure 9.9 where the traces are updated as $e = \gamma \lambda e$]

Furthermore, by googling, I found that the gradient simplifies to the value of the i-th feature: $ \nabla \widehat{q}(S_t, A_t, w_t) = f_i(S_t, A_t)$.

I thought that, in Figure 9.9, the gradient is not considered because, in the next step, the eligibility traces are increased by 1 for the active features. So, the +1 can be seen as the value of the gradient (as I found on Google), since the features are binary. But I'm not sure.

So, what is (and why) the right rule to update the eligibility traces?

",51849,,,,,12/29/2021 17:17,Watkins' Q(λ) with function approximation: why is gradient not considered when updating eligibility traces for the exploitation phase?,,0,0,,,,CC BY-SA 4.0 33964,2,,13907,12/29/2021 18:30,,0,,"

You definitely could, technically, use AI (advanced informatics) techniques, as in the BinSec binary analyzer (a static analyzer of binary code).

You might be legally forbidden to do so. Check with your lawyer.

Contact me by email for more information (on the technical level).

",3335,,3335,,12/30/2021 7:00,12/30/2021 7:00,,,,0,,,,CC BY-SA 4.0 33967,1,33999,,12/30/2021 7:02,,0,85,"

TL:DR, (Why) is one of the terms in the expectation not derived properly?

Relative entropy policy search or REPS is used to optimize a policy in an MDP. The update step is limited in the policy space (?) by the KL-divergence metric to stabilize the update. Based on the KL-divergence constraints, and some constraints about the definition of a policy, we can derive its Lagrangian, and its dual optimization problem afterwards. And lastly, we find the appropriate update step (delta) by solving the dual problem.

However, I think we can also use it to find multiple optimal solutions in an optimization problem, just like (CMA)-evolutionary strategy algorithm.

So, based on the original paper and a section of REPS in this paper, I'm trying to derive the dual problem.

Suppose that we're finding a set of solutions represented as a parametrized distribution $\pi(x|\theta)$ that maximizes $H(x)$. Suppose that the last parameters we came up with are denoted as $\hat{\theta}$; we find the optimal parameters $\theta$ by:

max $\int_x H(x)\pi(x|\theta) dx$

s.t. $\int_x \pi(x|\theta) dx = 1$

$D_\text{KL}\left(\pi(.|\theta) || \pi(.|\hat{\theta})\right) \leq \epsilon $

with $D_\text{KL}\left(\pi(.|\theta) || \pi(.|\hat{\theta})\right) = \int_x \pi(x|\theta)\log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})}$

Based on the equations above, we can write the Lagrangian as follows:

$L(\theta, \lambda, \eta) = \int_x H(x)\pi(x|\theta) dx + \lambda(1-\int_x \pi(x|\theta) dx) + \eta(\epsilon-\int_x \pi(x|\theta)\log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})})$

Now, we can see that the term $\lambda(1-\int_x \pi(x|\theta) dx)$ is $0$, right? But here, it was not cancelled out. So, following the flow of the two papers, we can simplify the Lagrangian by treating the integral wrt $x$ as an expectation.

$L(\theta, \lambda, \eta) = \lambda + \eta\epsilon + \underset{\pi(x|\theta)}{\mathbb{E}}\left[H(x) -\lambda -\eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})} \right]$

We will find the optimal $\pi(x|\theta)$ by solving $\frac{\partial L}{\partial \pi(x|\theta)} = 0$. Now, I got confused starting from this step. If I mindlessly copy/follow the notations from here, the derivative of $L$ wrt the policy parametrized by $\theta$ is:

$\frac{\partial L}{\partial \pi(x|\theta)} = H(x) - \lambda - \eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})}$

Where does the integral wrt $x$ go? Is it because all the terms are multiplied by $\pi(x|\theta)$, so that it cancels with the integral/expectation? If so, why does the derivative of the KL term in the expectation reduce to $\eta \log\frac{\pi(x|\theta)}{\pi(x|\hat{\theta})}$? Wouldn't the $\pi(x|\theta)$ inside the log produce an extra term when differentiated?

",44920,,2444,,12/30/2021 9:16,1/3/2022 10:51,How to derive the dual function step by step in relative entropy policy search (REPS)?,,1,0,,,,CC BY-SA 4.0 33968,1,,,12/30/2021 9:40,,0,28,"

Computer vision is highly benefited by AI algorithms. Image data is abundantly available. There are different varieties of tasks such as image classification, prediction, segmentation, generation, etc.

Although collecting a folder of images is mandatory, it may not be enough. Different types of annotations are used in datasets. Annotations can be treated as extra information related to each image that helps the AI algorithm under consideration. I want to know the kinds of annotations at the individual-image level that are generally used. The necessity of a particular type of annotation depends on the task under consideration; I want to know the requirements for the contemporary prevalent tasks, including classification, prediction, segmentation, and generation. You are encouraged to mention more tasks if you are aware of them.

I know the following types of annotations:

  1. Bounding box(es)
  2. Label

What other kinds of annotations are used for images in image datasets?

",18758,,2444,,12/31/2021 9:36,12/31/2021 9:36,"What are the ""per image"" annotations that are generally used for image datasets in AI?",,0,6,,,,CC BY-SA 4.0 33974,1,,,12/30/2021 22:31,,1,33,"

As the title suggests, I have a doubt about the computation of the $Q_a$ used to update the delta and the $Q_a$ used to select the next action in the exploitation phase, as shown below (source of pseudocode in Figure 8.9).

In both for-loops, labelled 1 and 2 in the image, the $Q_a$ are calculated considering the new state, but while in 1 the active features $F_a$ are calculated for each action, in 2 they aren't. So which active features should be considered in 2? Are they the same as those in 1? In that case, I could avoid recalculating the $Q_a$ by storing those calculated in 1.

",51849,,2444,,12/31/2021 10:00,12/31/2021 10:00,What is the difference between the $Q_a$ calculated to update delta and those to select next action in the exploitation phase?,,0,0,,,,CC BY-SA 4.0 33975,1,,,12/30/2021 22:38,,0,179,"

I'm currently trying to train a GAN to recreate similar images from a dataset. The dataset uses the Eiffel Tower pictures from Google's Quick Draw dataset. The images aren't very large (only 12x12 pixels) and are all black and white.

The performance initially increases at the expected rate; however, after a certain point, the quality begins to go down, despite the cost of both the generator and discriminator networks seeming to stay at a steady value, as expected.

I've tried changing the learning rate and other hyperparameters; however, they all end up with the same result of the network eventually getting worse.

My learning rate is currently 0.01, which I'm aware is quite high compared to what other people are using, but anything lower takes too long to train, and the results don't seem any better even if I do give it long enough.

Any pointers on what could be causing this or the specific name of this problem if it's a common issue with GANs would be appreciated.

Thanks

",51881,,51881,,1/1/2022 18:16,1/1/2022 18:16,GAN performance starts to get worse as training continues,,0,2,,,,CC BY-SA 4.0 33976,1,,,12/31/2021 5:40,,0,74,"

I'm working on an implementation of NEAT, which evolves neural networks with small and sparse topologies.

Evaluating a sparse and possibly recurrent network requires a different approach than the matrix operations of dense networks, and I'm trying to wrap my head around the order in which nodes should be evaluated.

I've set up a simple example:

  • Every node (except inputs) starts at 0. (t0)
  • Every weight is 1. There's no activation function or biases.

Assuming that the algorithm has the same intuition we humans do: evaluate the nodes connected to the input, then the "aggregation" node, then the output. How should it decide whether to evaluate node [3] before node [4] or vice-versa?

By starting (t0) at [3], the value of [4] is 0. And vice-versa.

The side-effect of this behavior appears when comparing two networks. Say you have the network above, and a copy of it where [3] and [4] are inverted. I feel like both should return the same result for the same input in order for evolution to work properly.

Any thoughts?

",51827,,2444,,12/31/2021 8:54,1/4/2022 5:10,Order of operations on sparse recurrent network alters the output. How to deal with it?,,1,0,,,,CC BY-SA 4.0 33977,1,,,12/31/2021 6:52,,0,415,"

I'm trying to learn how time series forecasting models work and while reading a tutorial off the TensorFlow website I came across these algorithms. I don't quite understand what the article means by "time signals" and how do sine and cosine functions help accomplish them. Can anyone please explain?

Here's a link to the tutorial

The following code was provided along with the caption

"the time in seconds is not a useful model input. Being weather data, it has clear daily and yearly periodicity. There are many ways you could deal with periodicity.

You can get usable signals by using sine and cosine transforms to clear "Time of day" and "Time of year" signals:"

import numpy as np  # import added for context; in the tutorial, df is a pandas DataFrame
                    # and timestamp_s a Series of Unix timestamps in seconds

day = 24*60*60
year = (365.2425)*day

df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
",51882,,51882,,1/1/2022 14:01,10/3/2022 16:09,How do sine and cosine transforms help in extracting frequencies in time series forecasting models?,,2,0,,,,CC BY-SA 4.0 33980,2,,32712,12/31/2021 13:00,,1,,"

Masks in recurrent neural networks are used when transforming variable-length inputs to one common length. That is why we use padding and masking together.

Padding: Usually, we create a vector for every sentence in the dataset, initialized with 0s and with the length of the longest sentence in the dataset; the sentence's tokens fill the first positions and the rest stays 0. The corresponding mask then has 1s for every position where the sentence actually has words.

Masking: Now we need to inform the network that some values in the vector are actually padding and should not be used since they have no information.
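A minimal sketch of both steps with Keras (the token ids and layer sizes are just for illustration; mask_zero=True is one common way to create the mask automatically):

import tensorflow as tf

# three "sentences" of different lengths, as integer token ids (0 is reserved for padding)
sequences = [[7, 2, 9], [4, 1], [3, 6, 8, 5]]

# padding: pad every sequence with 0s up to the length of the longest one
padded = tf.keras.preprocessing.sequence.pad_sequences(sequences, padding="post")
print(padded)

# masking: the Embedding layer with mask_zero=True tells downstream layers
# (here an LSTM) to ignore the padded positions
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10, output_dim=8, mask_zero=True),
    tf.keras.layers.LSTM(16),
])
print(model(padded).shape)  # (3, 16)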

",51888,,18758,,1/2/2022 23:45,1/2/2022 23:45,,,,0,,,,CC BY-SA 4.0 33982,2,,24744,12/31/2021 13:20,,0,,"

Convolutional neural networks are mostly used for computer vision tasks of all kinds.

Here you can find a tutorial on how to train a CNN for image classification from scratch.
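As a bare-bones illustration (layer sizes and input shape are arbitrary), a tiny Keras CNN for image classification could look like this:

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, epochs=5)  # with your own image data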

",51888,,,,,12/31/2021 13:20,,,,0,,,,CC BY-SA 4.0 33984,2,,33977,12/31/2021 21:53,,0,,"

Sorry for my weak English. You are using a neural network to forecast time series, which often have irregular fluctuations.

Stock values are volatile and have changing frequencies. Applying periodic encodings to the original data makes it easier to capture frequency information.

Read this paper to understand why it is essential for the neural network to know about the frequency content of the data, and how a sinusoidal encoding could make it easier for the neural network to predict highly varying patterns in the data.

https://bmild.github.io/fourfeat/

Or you could use periodic activation functions without applying a sinusoidal encoding; networks with standard non-periodic activations can fail to capture such highly varying patterns.

https://arxiv.org/pdf/2006.08195.pdf
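A minimal Keras sketch of this second option -- a sine used as the activation of the hidden layers (a simplified take, without the special initialisation scheme the SIREN paper uses):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, input_shape=(1,)),
    tf.keras.layers.Activation(tf.sin),   # periodic activation instead of ReLU/tanh
    tf.keras.layers.Dense(64),
    tf.keras.layers.Activation(tf.sin),
    tf.keras.layers.Dense(1),             # predicted value of the series
])
model.compile(optimizer="adam", loss="mse")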

Deep networks are biased towards low frequency functions.

",35633,,35633,,12/31/2021 22:17,12/31/2021 22:17,,,,1,,,,CC BY-SA 4.0 33985,1,,,12/31/2021 23:48,,0,27,"

In the typical RL/MDP framework, I have offline data of $(s,a,r,s')$ of expert Atari gameplay.

I'm looking to train a CNN to predict $r$ based on $(s, a)$.

The states are represented by a $4 \times 84 \times 84$ image of the Atari screen, where 4 represents 4 sequential frames, and $84 \times 84$ is the size of the image. The action is an integer from 0 to 3.

I'm not sure how best to merge these two inputs $(s, a)$ together. How should I incorporate the action into the CNN?

",45562,,2444,,1/2/2022 12:42,1/2/2022 12:42,Augmented an Image with other data when training CNN,,0,3,,,,CC BY-SA 4.0