Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense 25495,2,,20656,1/1/2021 5:57,,4,,"
Generally, if one googles "quantum machine learning" or anything similar the general gist of the results is that quantum computing will greatly speed up the learning process of our "classical" machine learning algorithms.
This is correct. A lot of machine learning methods involve linear algebra, and it often takes far fewer quantum operations to do things in linear algebra than the number of classical operations that would be needed. To be more specific, for a matrix of size $N\times N$, if a classical computer needs $f(N)$ operations to do some linear algebra operation, such as diagonalization, which can take $f(N)=\mathcal{O}(N^3)$ operations on a classical computer, a quantum computer would often need only $\log_2 f(N)$ operations, which, in (and only in) the language of computational complexity theory, means exponential speed-up. The "only in" part is there because we have made the assumption that "fewer quantum operations" means "speed-up", which for now is something we only know to be true in the world of computational complexity theory.
However, "speed-up" itself does not seem very appealing to me as the current leaps made in AI/ML are generally due to novel architectures or methods, not faster training.
I disagree. Take vanilla deep learning for example (without GANs or any of the other things that came up in the last decade). Hinton and Bengio had been working on deep learning for decades, so why did interest in deep learning suddenly start growing so much from 2011-2014 after a roughly monotonic curve from 1988-2010? Note that this rise started before newer advances such as GANs and DenseNet were developed:
Notice also the similarity between the above graph and these ones:
These days pretty much everyone doing deep learning uses GPUs if they have access to GPUs, and what is possible to accomplish is extremely tied to what computing power a group has. I don't want to undermine the importance of new methods and new algorithms, but GPUs did play a big role for at least some areas of machine learning, such as deep learning.
Are there any quantum machine learning methods in development that are fundamentally different from "classical" methods?
I think you mean: "Most quantum machine learning algorithms are simply based on classical machine learning algorithms, but with some sub-routine sped-up by the QPU instead of a GPU -- are there any quantum algorithms that are not based on classical machine learning algorithms, and are entirely different".
The answer is yes, and more experts might be able to tell you more here.
One thing you might consider looking at is Quantum Boltzmann Machines.
Another thing I'll mention is that a child prodigy named Ewin Tang, who began university at age 14, discovered at around the age of 17 some classical algorithms that were inspired by quantum algorithms rather than the other way around. The comments on the Stack Exchange question Quantum machine learning after Ewin Tang might give you more insight on that. This is related to something called dequantization of quantum algorithms.
By this I mean that these methods are (almost*) impossible to perform on "classical" computers. *except for simulation of the quantum computer of course
Unfortunately quantum computers can't do anything that's impossible for classical computers to do, apart from the fact that they might be able to do some things faster. Classical computation is Turing complete, meaning anything that can be computed can be computed on a big enough classical computer.
",19524,,,,,1/1/2021 5:57,,,,4,,,,CC BY-SA 4.0 25496,2,,25437,1/1/2021 8:52,,1,,"input to output is linear refers to the input X i.e image and output is the output logits/softmax from the network.
So how does linearity help in constructing adversarial examples? Imagine a simple logistic regressor and a simple 2D space. There is a definite boundary beyond which the label that the model (i.e. the logistic regressor in this case) predicts changes. So if we move perpendicular to the boundary (i.e. the line represented by the model in this case) we can get into another class's region. So if we perturb the input in this direction, the model outputs the wrong class. { Refer to the slide titled "Adversarial Examples from Excessive Linearity" for a diagram. }
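A minimal numerical sketch (my own illustration, not from the original answer) of that idea: for a toy linear model $w \cdot x + b$, perturbing the input along the weight vector (i.e. perpendicular to the decision boundary) is the most efficient way to flip the predicted class.

```python
import numpy as np

# Toy logistic regressor: decision boundary is w.x + b = 0 (hypothetical numbers).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.2, 0.5])              # classified as class 0, since w.x + b = -0.1 < 0

eps = 0.2
x_adv = x + eps * w / np.linalg.norm(w)   # small step perpendicular to the boundary

print(w @ x + b, w @ x_adv + b)       # the score crosses 0, so the predicted class flips
```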
Now imagine a neural network trained on ImageNet: there are many boundaries, and a small change can shift the class the model would predict. It is important to note that these subspaces of the image remain nearly the same whether we train VGG or ResNet, etc. This explains how an adversarial example crafted for one network affects another.
You may ask how such a small change can have an effect. This is because the vectors we deal with are not 2D or 3D; they are very high-dimensional, so many small changes add up.
",35616,,,,,1/1/2021 8:52,,,,5,,,,CC BY-SA 4.0 25497,2,,25491,1/1/2021 9:44,,4,,"I guess the issue is you lost track of where the samples came from and since you requested a math explanation I'll try to go step by step using my notation and without checking other material to avoid being biased by how other authors present it
So we start from
$$ L(D,G) = E_{x \sim p_{r}(x)} \log(D(x)) + E_{x \sim p_{g}(x)}\log(1 - D(x)) $$
then you apply the definition of $E_{\cdot}(\cdot)$ operator in the continuous case
$$ L(D,G) = \int_{x} \log(D(x)) p_{r}(x)dx + \int_{x}\log(1 - D(x))p_{g}(x)dx $$
then you Monte Carlo sample it to approximate it
$$ L(D,G) = \frac{1}{n} \sum_{i=1}^{n} \log(D(x_{i}^{(r)})) + \frac{1}{m} \sum_{j=1}^{m}\log(1 - D(x_{j}^{(g)})) $$
As you can see here I have kept the samples from the 2 distributions separated and used a notation that allows to track their origin so now you can use the right label in the Cross Entropy
$$ L(D,G) = \frac{1}{n} \sum_{i=1}^{n} L_{ce}(1, D(x_{i}^{(r)})) + \frac{1}{m} \sum_{j=1}^{m} L_{ce}(0, D(x_{j}^{(g)})) $$
But you could also have decided to merge the 2 integrals before to have
$$ L(D,G) = \int_{x} \left( \log(D(x)) p_{r}(x) + \log(1 - D(x))p_{g}(x) \right) dx $$
which is a mathematically legitimate operation; however, the issue arises when you try to discretize this with Monte Carlo sampling.
You can't just replace the integral with one sum when you Monte Carlo sample it here. Contrary to what we have done above, you do not have one distribution per integral to sample: in the same integral you have two distributions, and for each sample you have to say which distribution it comes from. That is where the issue is in your notation: you lost track of this information, and it looks as if all the samples come from one distribution.
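A tiny sketch (my own toy numbers, not from the answer) of the separated Monte Carlo estimate above: the real and generated samples are drawn from their own distributions and averaged separately, so their origin is never lost.

```python
import numpy as np

rng = np.random.default_rng(0)

def D(x):
    # A stand-in "discriminator" returning values in (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

x_real = rng.normal(1.0, 1.0, size=1000)    # samples from p_r
x_fake = rng.normal(-1.0, 1.0, size=1000)   # samples from p_g

# L(D,G) ≈ (1/n) Σ log D(x_i^(r)) + (1/m) Σ log(1 - D(x_j^(g)))
loss = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(x_fake)))
print(loss)
```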
",1963,,41615,,1/3/2021 23:23,1/3/2021 23:23,,,,3,,,,CC BY-SA 4.0 25498,1,25515,,1/1/2021 10:38,,1,472,"I am new to NLP and AI in general. I am just expecting springboard information so that I can skip all the introduction to NLP websites. I have just started studying NLP and want to know how to go about solving this problem. I am creating a chatbot that will take voice input from customers ordering food at restaurants. The customer input I am expecting as;
I want to order Chicken Biryani
Can I have a Veg Pizza, please
Coca-cola etc
I want to write an algorithm that can separate the name of the food item from the user input and compare it with the list of food items in my menu card and come up with the right item.
I am new to NLP and am studying it online for this particular project. I can do the required coding; I just need help with the overall algorithm or a rough flow chart. It will save my time tremendously. Thanks.
",43434,,32410,,4/24/2021 3:34,4/24/2021 3:34,How to design a NLP algorithm to find a food item in menu card list?,I was reading the book Deep Learning by Ian Goodfellow. I had a doubt in the Maximum likelihood estimation section (Pg 131). I understand till the Eq 5.58 which describes what is being maximized in the problem.
$$ \theta_{\text{ML}} = \text{argmax}_{\theta} \sum_1^m \log(p_{\text{model}}(x^{(i)};\theta)) $$
However the next equation 5.59 restates this equation as:
$$ \theta_{\text{ML}} = \text{argmax}_{\theta} E_{x \sim \hat{p}_{\text{data}}}(\log(p_{\text{model}}(x;\theta)) $$
where $\hat{p}_{\text{data}}$ is described as the empirical distribution defined by the training data. Could someone explain what is meant by this empirical distribution? It seems to be different from the distribution parametrized by $\theta$, which is denoted by $p_{\text{model}}$.
",43272,,16521,,1/1/2021 19:12,1/2/2021 23:17,What is emperical distribution in MLE?,This is the back-propogation rule for the output layer of a multi-layer network:
$$W_{jk} := W_{jk} - C \dfrac{\partial E}{\partial W_{jk}}$$
What does this rule do in the more ambiguous cases such as:
(1) The output of a hidden node is near the middle of a sigmoid curve?
(2) The graph of error with respect to weight is near a maximum or minimum?
",42926,,16521,,1/1/2021 15:52,1/1/2021 16:08,Backpropogation rule for the output layer of a multi-layer network - What does the rule do in ambiguous cases?,I assume you are considering a network where the activation function of the last layer is a sigmoid, so the output of your network is $$\tilde{y}=\sigma(W^{L}\cdot f(X, W^1, \dots, W^{L-1})),$$ where $X$ is the input vector, and $f$ is obtained by feeding the input to the network up to the layer $L-1$. Let's also call $Z:= W^{L}\cdot f(X, W^1, \dots, W^{L-1})$.
The error term is computed as $$E(y, \tilde{y})=E(y, \sigma(Z)),$$ where $y$ is the actual output. Let's get the derivative of the error with respect to the output of the last node (the input of the sigmoid) $$\frac{\partial E}{\partial z_i}=\frac{\partial E}{\partial \tilde{y}}\frac{\partial\tilde{y}}{\partial z_i}=\frac{\partial E}{\partial \tilde{y}}\frac{\partial\sigma}{\partial z_i}.$$ The update rule is $$z_i = z_i - C\frac{\partial E}{\partial z_i}= z_i - C\frac{\partial E}{\partial \tilde{y}}\frac{\partial\sigma}{\partial z_i}.$$ Now we can analyse your questions.
For instance, what happens if you are close to the center of the sigmoid but also close to an extremum of the loss? You will have a multiplication of $2$ terms, one trying to make the update small and the other trying to make the update large, and what matters are the orders of magnitude involved.
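A small numerical illustration (my own, assuming a sigmoid output unit) of those two regimes: near the centre of the sigmoid its derivative is at its largest, while far from the centre it is tiny and can dampen an otherwise large error gradient.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
dsigmoid = lambda z: sigmoid(z) * (1.0 - sigmoid(z))

print(dsigmoid(0.0))   # 0.25    (middle of the curve: the update is dominated by dE/dy)
print(dsigmoid(5.0))   # ~0.0066 (saturated: the update is strongly damped)
```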
In conclusion, the rules of thumb are as in the points 1. and 2., but they are no guarantee that you won't find any special cases.
There's nothing stopping you from training a model with whatever tags you want.
Using what you describe as "usual" format means you would have approx half as many tags as using the IOB format. In theory this means your model will develop higher accuracy faster and with less training data. On the downside, you will need to do more work when interpreting the results in order to be confident where one named entity ends and another one begins.
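A small illustration (hypothetical sentence and tag names) of the two schemes: IOB marks where an entity begins, while the "usual" format only marks entity membership.

```python
# "Barack Obama visited Paris" tagged in both schemes (toy example).
iob_tags   = [("Barack", "B-PER"), ("Obama", "I-PER"), ("visited", "O"), ("Paris", "B-LOC")]
plain_tags = [("Barack", "PER"),   ("Obama", "PER"),   ("visited", "O"), ("Paris", "LOC")]
```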
I made a notebook to do some empirical tests on this which uses 17 tags in the IOB format and 9 in the "usual" format.
TL;DR using the "usual" format did not produce a noticeable improvement in model quality.
",43256,,,,,1/1/2021 16:47,,,,0,,,,CC BY-SA 4.0 25504,2,,25499,1/1/2021 17:43,,2,,"The idea behind this kind of reasoning is that there is a "true" distribution (unknown to us, mere mortals) and that the data is generated following this distribution. But what we don't really know the shape of the distribution, all we know is the distribution of the data that we have. This is called the empirical distribution. Let's see a simple example to illustrate the point.
Let's consider a die. Each number is equally likely to show up if we throw the die, so the true underlying distribution is uniform over the set $\{1, 2,\dots6\}$. Now let's say you ask your friend to throw the die 60 times, what you will see is likely something close to uniform over the set $\{1, 2,\dots6\}$ (not really uniform though, as that would be highly unlikely). This distribution is the empirical one, and as you collect more and more samples it will converge to the actual underlying distribution.
In your case what happened is the following:
$x_1,\dots, x_m$ is your sample (in the example above, the $60$ numbers that you see as your friend throws the die). This sample defines a distribution, $\hat{p}_{data}$. In the example above, $\hat{p}_{data}$ would likely be close to the uniform distribution over $\{1, \dots, 6\}$. Now you can think about the sum over $x_1\dots, x_m$ of $\log(p_{model}(x^{(i)};\theta))$ as the average of $\log(p_{model}(x;\theta))$, where $x$ is drawn according to $\hat{p}_{data}$.
Let me make another example with some actual numbers. Let's say you toss a fair coin $7$ times and you see $$\{H, H, H, T, H, T, H\}.$$
The empirical distribution is $\mathbb{P}(H)=5/7$, $\mathbb{P}(T)=2/7$. So if you compute the expectation $$\mathbb{E}_{x \sim \hat{p}_{data}}[\log(p_{model}(x;\theta))]$$ you get
$$\log(p_{model}(H;\theta))\cdot\mathbb{P}(H) +\log(p_{model}(T;\theta))\cdot\mathbb{P}(T) = \\ \frac{5}{7}\log(p_{model}(H;\theta)) +\frac{2}{7}\log(p_{model}(T;\theta)),$$
which is what you will get if you compute the first sum you wrote.
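A quick numerical check of the coin example (my own snippet, not from the book): the expectation under the empirical distribution is exactly the average log-likelihood of the observed sample.

```python
from collections import Counter
import numpy as np

sample = ["H", "H", "H", "T", "H", "T", "H"]
p_hat = {x: c / len(sample) for x, c in Counter(sample).items()}   # {'H': 5/7, 'T': 2/7}

p_model = {"H": 0.5, "T": 0.5}   # some parametrised model; here a fair coin for concreteness

expectation = sum(p_hat[x] * np.log(p_model[x]) for x in p_hat)
average_ll = np.mean([np.log(p_model[x]) for x in sample])
print(expectation, average_ll)   # both equal log(0.5) here, and they agree in general
```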
When implementing a genetic algorithm, I understand the basic idea is to have an initial population of a certain size. Then, we pick two individuals from the population, construct two new individuals (using mutation and crossover), repeat this process X number of times, and then replace the old population with the new population, based on selecting the fittest.
In this method, the population size remains fixed. In reality in evolution, populations undergo fluctuations in population sizes (e.g. population bottlenecks, and new speciations).
I understand the disadvantages of variable population sizes from a biological view are, for example, that a bottleneck will reduce the population to minimal levels, so not much evolution will occur. Are there disadvantages to using variable population sizes in genetic algorithms, from a programming perspective? I was thinking the numbers per population could follow a distribution of some sort so they don't just randomly fluctuate erratically, but maybe this does not make sense to do.
",42926,,42926,,1/1/2021 19:10,1/6/2021 10:44,Are there any disadvantages to using a variable population size in genetic algorithms?,I understand that in each generation of a genetic algorithm, that generation must re-prove it's fitness (and then the fittest of that population is taken for the next population).
In this case, I guess it's a presumption that if you take the fittest of each generation, and use them to form the basis of the next generation, that your population as a whole is getting fitter with time.
But algorithmically, how can I detect this? If there's no end goal known, then I can't measure the error/distance from goal? So how can you tell how much each generation is becoming fitter by?
",42926,,,,,1/6/2021 10:28,How to detect that the fitness landscape of a genetic algorithm is changing over time?,I'm just starting to explore topics within computer vision and curious if there are any concepts in that area that could be applied to segmenting multivariate time series with the goal of grouping individual data points similar to how a human might do the same. I know that there are a number of time series segmentation methods, but in-depth explanations of multivariate methods are more scarce and it seems like somewhat of an underdeveloped topic overall. Since segmentation is such a fundamental part of CV and is inherently multidimensional, I'm wondering if concepts there can be modified to apply to time series.
Specifically, I'd like to be able to segment a time series and reformulate a prediction problem as something closer to a language processing problem. The process would look something like this:
In a few days of reading about CV, it seems like there's a ton to learn. If there are traditional time series segmentation techniques that are more suitable, that would be of interest, but I'd still be curious about a CV approach since that approach likely better aligns with how a person might look at a graph to identify segments.
",30154,,2444,,1/31/2021 17:09,1/31/2021 17:09,Time series analysis using computer vision principles,There is no exact way to assess that a genetic algorithm has located a global optima. Indeed there may be multiple global optima. You must fall back to heuristic methods. The fitness of a population is the maximum fitness of any individual. Unless specific measures are taken to maintain diversity the population will converge to an optima, local or global. At that point all individuals will, except for mutation, be identical. You could take the fittest individual of such a population as your solution, but you will not know if the solution is a global or local optima.
Two reasonable heuristics are these. First, run the algorithm until it converges and maintains its fitness for a number of further generations. Or, second, run the algorithm multiple times and take the fittest of all the located solutions. Neither is exact.
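A minimal sketch (my own, with the fitness, evolve step and patience left as placeholders) of the first heuristic: stop once the best fitness has not improved for a fixed number of further generations.

```python
def run_until_stable(population, fitness, evolve, patience=50):
    # Stop when the best fitness has been stable for `patience` generations.
    best = max(fitness(ind) for ind in population)
    stale = 0
    while stale < patience:
        population = evolve(population)
        current = max(fitness(ind) for ind in population)
        if current > best:
            best, stale = current, 0
        else:
            stale += 1
    return population, best
```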
",26382,,,,,1/1/2021 19:53,,,,1,,,,CC BY-SA 4.0 25510,2,,25505,1/1/2021 21:05,,1,,"Population size is a tricky issue even in pure biological models. Biological population sizes obviously vary. The two great protagonists of the argument were Ronald Fisher and Sewell Wright, with argument being between Fisher favouring few large populations against Wright’s many small interconnected populations. There is evidence that evolution occurs more rapidly in Wright’s model but the evidence is inconclusive. The theory concentrates on the probability that a mutation will occur and then become dominant in a population. In a small population a beneficial mutation is more likely to be selected for reproduction, but premature convergence is a serious danger. While in a larger population a mutant is less likely to be removed from the population during reproduction. I would strongly recommend a read of Games of life by Karl Sigmund.
",26382,,,,,1/1/2021 21:05,,,,5,,,,CC BY-SA 4.0 25511,2,,25371,1/1/2021 21:24,,1,,"It is entirely possible!
You see, the agents will perform whatever actions are available to them, and if the evolutionary algorithm is set up correctly, whatever set of actions provides them with a higher survival rate will be the one that gets explored and reproduced the most.
Here is a very interesting list of "Specification Gaming" in AI, where the agents happened to "game" the rules to reach their goals (metric optimization) without actually doing what the creators intended: https://docs.google.com/spreadsheets/u/1/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml (second link)
",190,,,,,1/1/2021 21:24,,,,0,,,,CC BY-SA 4.0 25512,1,25514,,1/1/2021 22:59,,1,126,"Referring to this post, in the following formula to update the state-action value
$$ Q(s,a) = Q(s,a) + \alpha (G − Q(s,a)),$$
is the value of $G$ (the return) the same for every state-action $(s,a)$ pair?
I am a little confused about this point, so I would appreciate any clarification.
",33566,,2444,,1/2/2021 12:33,1/2/2021 12:34,"When updating the state-action value in the Monte Carlo method, is the return the same for each state-action pair?",I'm working on the code trying to generate new images using DCGAN model. The structure of my code is from the PyTorch tutorial here. I'm a bit confused trying to find and understand how the latent vector is transforming to the feature map from the Generator part (this line of code is what I'm interested in) :
nn.ConvTranspose2d( nz, ngf * 8, 4, 1, 0, bias=False)
It means the latent vector (nz) of shape 100x1 is transformed into 512 matrices of size 4x4 (ngf=64). How does that happen? I also can't quite work out how the length of the latent vector influences the generated image. P.S. The left part of the Generator structure is clear.
The only idea that I've got is:
Is this right? Or does it happen in another way?
",41783,,41783,,1/2/2021 16:03,1/2/2021 17:22,How is the latent vector transforming to a feature map in DCGAN (Generator structure)?,The discussion uses poor notation, there should be a time index. You obtain a list of tuples $(s_t, a_t, r_t, s_{t+1})$ and then, for every visit MC, you update
$$Q(s_t, a_t) = Q(s_t, a_t) + \alpha (G_t - Q(s_t, a_t))\;;$$
where $G_t = \sum_{k=0}^\infty \gamma^k r_{t+k}$, for each $t$ in the episode. You can see that the return for each time step is calculated from that time step onwards, and so the returns are not necessarily the same across time steps.
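A short sketch (my own notation, not from the post) of how the per-time-step returns are computed by working backwards through an episode's rewards; each $G_t$ differs in general.

```python
def returns(rewards, gamma):
    # Compute G_t for every time step of one episode, working backwards.
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G
        out.append(G)
    return list(reversed(out))

print(returns([0, 0, 1], gamma=0.9))   # [0.81, 0.9, 1.0] -- one return per time step
```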
",36821,,2444,,1/2/2021 12:34,1/2/2021 12:34,,,,2,,,,CC BY-SA 4.0 25515,2,,25498,1/2/2021 1:45,,2,,"Since you want a shortcut use the spoonacular API. Below is a test with your words. You can see it had trouble with 'Coca' and 'veg'.
What you need is 'named-entity recognition' for food. This is not a new thing but clearly not a solved problem. The Foodie Favorites repository attempts to solve the problem from scratch.
If you want to do some research and dig deeper, see FoodBase corpus: a new resource of annotated food entities. From the abstract:
",5763,,,,,1/2/2021 1:45,,,,0,,,,CC BY-SA 4.0 25516,1,,,1/2/2021 2:31,,1,235,"It consists of 12,844 food entity annotations describing 2105 unique food entities. Additionally, we provided a weakly annotated corpus on an additional 21,790 recipes. It consists of 274,053 food entity annotations, 13,079 of which are unique.
I'm not very knowledgeable in this field but I'm wondering if any research or information already exists for the following situation:
I have some data items that may or may not look similar to each other. Each is represented as a node containing a vector of size 128.
And they are inserted into a tree graph according to similarity.
A new node is created with an edge connecting it to the most similar vertex node found in the entire tree graph.
Except I'm wasting a lot of time searching through the entire graph to insert each new node when I could narrow down my search according to previous information. Imagine a trunk node saying "Oh, I saw a node like you before, it went down this branch. And don't bother going down that other branch." I could reduce the cost of searching the entire tree structure if there was a clever way to remember if a similar node went down a certain path.
I've thought about some ways to use caching or creating a look-up table, but these are very memory-intensive methods and will become slower the longer the program runs. I have some other ideas I am playing around with, but I was hoping someone could point me in the right direction before I start trying out weird ideas.
Edit: added a better (more realistic) graph picture
",43419,,43419,,1/6/2021 5:43,1/26/2023 11:07,A way to leverage machine learning to reduce DFS/BFS search time on a tree graph?,The main issue during training is that you haven't right-shifted the input of the decoder, which is probably why you set the diagonals of mask
to -inf (when it should be $0$).
Also, just an FYI, although you haven't focused on evaluation/prediction yet, I will explain the evaluation/prediction here as well for completeness, since it works so differently than training, and also since you will need it when generating the graphs for debugging.
Both tgt
and tgt_mask
should be changed to simulate auto-regressive properties.
You are feeding in tgt
as the input to the decoder, where tgt
is the ground truth target sequence to predict. tgt
should have dimension length(sequence)
x batchSize
x $\|dict\|$. Additionally, you are feeding in mask where the diagonals are -inf
.
Instead, you should do the following:
<START>
and <END>
. So your vocabulary will need to be extended to 16 events.<START>
as the first token. So during training, when you want to feed in tgt
to the decoder, you should right-shift tgt
by adding in the <START>
token at the beginning.
tgt_shifted
will now be of dimension length(sequence)+1
x batchSize
x $\| dict \|$tgt_shifted_mask
will now be of dimension length(sequence)+1
x length(sequence)+1
. Diagonal should be all $0$tgt_shifted
, tgt_shifted_mask
, and memory
length(sequence)+1
x batchSize
x $\| dict\|$, and will not be right-shifted because it will not have <START>
as the first word. But it should have <END>
as the last word.<END>
to the ground truth tgt
, and turn it into one-hot encoding, so tgt
should have dimension length(sequence)+1
x batchSize
x $\|dict\|$. Your loss operator should be some sort of element-wise comparison between tgt
and the output of the decoder.Note that you only run the decoder once per training batch during training, i.e. the decoder simultaneously predicts the logits of all length(sequence)
tokens at the same time. On the other hand, during evaluation/prediction, you must run the decoder length(sequence)
times, since you can only use the decoder to predict one token at a time.
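A minimal PyTorch sketch (a hypothetical helper, not the asker's code) of the training-time preparation described above, assuming `tgt` is laid out as (seq_len, batch, vocab): prepend the `<START>` encoding to right-shift the target, and build a causal mask whose diagonal is $0$.

```python
import torch

def make_decoder_inputs(tgt, start_token):
    # tgt:         (seq_len, batch, vocab) ground-truth event encodings
    # start_token: (1, 1, vocab) encoding of the <START> event
    seq_len, batch, vocab = tgt.shape
    tgt_shifted = torch.cat([start_token.expand(1, batch, vocab), tgt], dim=0)

    size = seq_len + 1                                              # length(sequence)+1
    mask = torch.triu(torch.full((size, size), float("-inf")), diagonal=1)
    return tgt_shifted, mask                                        # mask is 0 on and below the diagonal
```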
During evaluation/prediction, the model should generate its own sentence entirely from scratch, one word at a time:
- `out` should be initialized as a single token, `<START>`, and have dimensions $1$ x `batchSize` x $\|dict\|$.
- Generate `out_mask` for the current version of `out`. `out_mask` should be a matrix with dimension `length(out)` x `length(out)`. The diagonal should be all $0$.
- Feed `out`, `out_mask`, and `memory` into the decoder layer. The output of the decoder will be a `length(out)` x `batchSize` x $d_{model}$ tensor (note that in your case, $d_{model} = \|dict\|$), which indicates `tokenIdx` x `batchIdx` x `vocabularyIdx`.
- Turn the logits of the final predicted token (`decoderOutput[-1,:,:]` if all sequences in your batch have the same length) into probabilities, and take all the events that surpass some probability threshold and set them to $1$. Set all other events to $0$. This is the new time step that the decoder layer predicts. So add this time step to `out`
to extend its length by $1$.
- Repeat the last steps until the decoder predicts the `<END>` token, which signals the end of decoding, or until the max output length is reached.
During evaluation, you can calculate the validation loss by using the logits of only the final iteration of the decoder (which should have dimension `length(sequence)+1` x `batchSize` x $\|dict\|$), and compare those logits with the ground truth with the `<END>` token added at the end (resulting in dimensions `length(sequence)+1` x `batchSize` x $\|dict\|$).
Your question seems to be talking about two slightly different topics:
One vs Rest in Multi-Class Classification
Recognising digits is an example of multi-class classification. The approach you outline is the kind of approach summarised in the "One vs Rest" section of the Wikipedia page on multi-class classification. The page notes the following issues with this approach:
Firstly, the scale of the confidence values may differ between the binary classifiers. Second, even if the class distribution is balanced in the training set, the binary classification learners see unbalanced distributions because typically the set of negatives they see is much larger than the set of positives.
You might also like to look into another approach called One vs One ('One vs Rest' vs 'One vs One'), which sets up the classification problem as a set of binary alternatives. In the digit recognition case you'd end up with a classifier for "1 or 2?", "1 or 3?", "1 or 4?", etc. This might help with the "4 vs 9" problem, but it does mean an enormous number of classifiers, which might be better represented in some kind of network. Perhaps even a network inspired by brain neurons.
Use of Neural Networks in single output vs multi-class classification
There is nothing magical about a neural network that means it has to be used for multi-class classification. Nor is there anything magical about it that makes it the only option for multi-class classification.
For example:
Conclusions
A 10-class neural network is used to identify digits because this has turned out to be an efficient way of doing so when compared with one-vs-rest and one-vs-one approaches.
A bit off-topic, perhaps, but if you think about this in the context of T5, there does seem to be a trend of moving towards larger more multi-purpose models rather than lots of small specialised models.
",43256,,,,,1/2/2021 9:23,,,,0,,,,CC BY-SA 4.0 25520,1,,,1/2/2021 9:26,,0,82,"I'm trying to understand how the logic gates (e.g. AND, OR, NOT, NAND) can be built into single-layer perceptrons.
I understand specific examples of weights and thresholds for the gates, but I'm stuck on understanding how to translate these to general inequalities for these gates.
Is my reasoning correct in the table below, or are there cases where these general inequalities do not make sense for this problem? Am I missing other logic gates that can be handled in a similar fashion (e.g., I know XOR cannot)?
In the table below, a perceptron has two input nodes, and one output node. W1 and W2 are the weights (real values) on those input nodes. T is the threshold, above which, the perceptron will fire. I have come up with example values that would work for each logic gate (e.g., for the AND gate, a perceptron with two input weights, W1 = 1 and W2 = 1, and a threshold = 2, will fire, and I'm trying to understand more generally, what is the equation needed for each gate).
Gate | Example W1, W2 | Threshold | General inequalities |
---|---|---|---|
AND | 1,1 | 2 | W1 + W2 >= t, where W1, W2 > 0 |
OR | 1,1 | 1 | W1 > t or W2 > t |
NOT | -1 | -0.49 | W1 > 2(t) |
NAND | -2,-2 | 3 | W1 + W2 <= t |
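A quick check (my own script, not part of the original question) of the example rows above: a perceptron fires when W1*x1 + W2*x2 >= threshold, so each gate can be verified against its truth table.

```python
from itertools import product

def perceptron(weights, threshold, inputs):
    # Fires (returns 1) when the weighted sum reaches the threshold.
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

for x in product([0, 1], repeat=2):
    print(x, perceptron((1, 1), 2, x))   # the AND example: W1 = W2 = 1, T = 2; fires only for (1, 1)
```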
I am reading an exam question about NN (that I cannot publish, for copyright reasons). The question says: 'Construct a rectangle in 2D space. Define the lines, and then define the weights and threshold that will only fire for points inside the rectangle.'
I understand that this is an example of a rectangle drawn as a NN (i.e. this NN will fire, if the point is in the rectangle, where the rectangle is defined by the lines X = 4; X = 1, Y = 2, Y = 5).
In this diagram, since it's a rectangle, the equations of the line in this example are x = 4, x =1, y=2, y=5, so I left the other weights out (as they equal to 0).
I'm now wondering how this could be translated to a 3D structure. For example, if a 3D shape was defined by the points:
(0,0,0), (0,1,0), (0,0,1), (0,1,1), (1,0,0), (1,1,0), (1,0,1), (1,1,1)
I wanted to draw a hyperplane that separates the corner point (1,1,1) from the other points in this cube. Can this 3D shape be drawn similarly to below (maybe it would be easier to understand, if there were other numbers except 1 and 0 in the co-ordinates)?
Would I draw this with 3 nodes in the input layer and still one node in the output layer? I just don't understand what the hidden layer should look like. Would it have 24 nodes, one for each surface of the cube, with the relevant X and Y values?
",42926,,42926,,1/3/2021 12:45,1/3/2021 12:45,How to draw a 3-dimensonal shape's neural network,If I have a neural network, and say the 6th output node of the neural network is:
$$x_6 = w_{16}y_1 + w_{26}y_2 + w_{36}y_3$$
What does that make the derivative of:
$$\frac{\partial x_6}{\partial w_{26}}$$
I guess that it's how is $x_6$ changing with respect to $w_{26}$, so, therefore, is it equal to $y_2$ (since the output, $y_2$, will change depending on the weight added to the input)?
",42926,,2444,,1/2/2021 23:21,1/4/2021 22:42,What is the derivative of a specific output with respect to a specific weight?,Noise vector (batch_size, 100, 1, 1) is deconvoloved with filter_1 (100, 4, 4). Result is feature_map_1 (1, 4, 4). And since there are 512 filters, so there will be 512 feature maps. Output shape will be (batch_size, 512, 4, 4).
I think you need a better understanding of convolutional calculations in general. In this Stack Exchange post it was explained very well.
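A quick shape check (my own snippet, not from the original answer) of the layer in question: a 100-channel 1x1 "image" is mapped to 512 feature maps of size 4x4.

```python
import torch
import torch.nn as nn

z = torch.randn(1, 100, 1, 1)                        # (batch_size, nz, 1, 1)
layer = nn.ConvTranspose2d(100, 512, 4, 1, 0, bias=False)
print(layer(z).shape)                                # torch.Size([1, 512, 4, 4])
```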
",41615,,41615,,1/2/2021 17:22,1/2/2021 17:22,,,,1,,,,CC BY-SA 4.0 25524,2,,22859,1/2/2021 17:15,,0,,"What you should do as part of your exploration is to learn various models of increasing complexity. Start from a simple linear model, ending in multi-layer neural networks (with non-linear activations of course). If the nonlinear models are better then that implies that your data do not follow a linear hyperplane.
Also check this out for recent trends: https://machinelearningmastery.com/auto-sklearn-for-automated-machine-learning-in-python/
",43358,,,,,1/2/2021 17:15,,,,0,,,,CC BY-SA 4.0 25526,1,25528,,1/2/2021 17:22,,0,162,"If the derivative is supposed to give the rate of change of a function at that point, then why is the derivative of the softmax layer (a vector) the Jacobian matrix, which has a different shape than the output/softmax vector? Why is the shape of the softmax vector's derivative (the Jacobian) different than the shape of the derivative of the other activation functions, such as the ReLU and sigmoid?
",43270,,2444,,1/2/2021 21:33,1/2/2021 21:33,Why is the derivative of the softmax layer shaped differently than the derivative of other neurons?,I learned about the universal approximation theorem from this guide. It states that a network even with a single hidden layer can approximate any function within some bound, given a sufficient number of neurons. Or mathematically, ${|g(x)−f(x)|< \epsilon}$, where ${g(x)}$ is the approximation, ${f(x)}$ is the target function and is $\epsilon$ is an arbitrary bound.
A polynomial of degree $n$ has at maximum $n-1$ turning points (where the derivative of the polynomial changes sign). With each new turning point, the approximation seems to become more complex.
I'm not necessarily looking for a formula, but I'd like to get a general idea of what a sufficient number of neurons is for a reasonable approximation of a polynomial with a single hidden layer (you may consider "reasonable" to be $\epsilon = 0.0001$). To ask in other words, how would adding one more neuron affect the model's ability to express a polynomial?
",38076,,2444,,1/2/2021 23:06,1/2/2021 23:06,What is the number of neurons required to approximate a polynomial of degree n?,When you use the softmax activation function is usually as a last layer of your network and to get an output that is a vector. Now your confusion is about shapes, so let's review a bit of calculus.
If you have a function $$f:\mathbb{R}\rightarrow\mathbb{R}$$
the derivative is a function on its own and you have $$f':\mathbb{R}\rightarrow\mathbb{R}.$$
If you increase the dimension of the input space, you have
$$f:\mathbb{R}^n\rightarrow\mathbb{R}.$$
The "derivative" in this case is called gradient and it is a vector collecting the $n$ partial derivatives of $f$. The input space of the gradient function is $\mathbb{R}^n$ (the same as for $f$), but the output is the collection of the $n$ derivatives, so the output space is also $\mathbb{R}^n$. In other words
$$\nabla f:\mathbb{R}^n\rightarrow\mathbb{R}^n,$$
which makes sense as for each point $x$ of the input space you get a vector ($\nabla f(x)$) as output.
So far so good, but what happens if you consider a function that takes a vector as input and spits out a vector as output, i.e.
$$f:\mathbb{R}^n\rightarrow\mathbb{R}^m?$$
How to compute the equivalent of the derivative? (This is the softmax case, where you have a vector as input and a vector as output.)
You can reduce this case to the previous case by considering $f=(f_1, \dots, f_m)$, where $f_i:\mathbb{R}^n\rightarrow\mathbb{R}.$ Now for each $f_i$ you can compute the gradient $\nabla f_i$ and end up with $m$ gradients. When you evaluate them at a point $x\in\mathbb{R}^n$ you get $m$ $n$-dimensional vectors. These vectors can be collected in a matrix, which is the Jacobian; formally
$$Jf:\mathbb{R}^n\rightarrow\mathbb{R}^{m\times n}.$$
Finally, to answer your question, you get a Jacobian "instead" of a gradient (they all represent the same concept) because the output of the softmax is not a single number but a vector.
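For the softmax specifically, the Jacobian has the standard closed form $J = \operatorname{diag}(s) - s s^T$ for $s = \text{softmax}(z)$; a small sketch (mine, not part of the original answer) makes the shapes concrete.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5])
s = softmax(z)
J = np.diag(s) - np.outer(s, s)   # (3, 3): one row per output component, one column per input
print(J.shape)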
By the way, the sigmoid and ReLU are functions with one-dimensional input and output, so they don't really have a gradient but a derivative. The trick is that people write $\sigma(W)$, where $W$ is a vector or a matrix, but they mean that $\sigma$ is applied component-wise, as a function from $\mathbb{R}$ to $\mathbb{R}$ (I know, it's confusing).
(I know, I kind of skipped your other question, but this answer was already long and I think you can convince yourself that the dimensions match correctly (with the Jacobian) for the update rule to work. If not I'll edit)
I'm reading this book chapter, and I'm looking at the questions on the last page. Can someone explain question 2 on the last page to me, or show me an example of a solution so I can understand it?
The question is:
Consider a simple perceptron with $n$ bipolar inputs and threshold $\theta = 0$. Restrict each of the weights to have the value $−1$ or $1$. Give the smallest upper bound you can find for the number of functions from $\{−1, 1 \}^n$ to $\{−1, 1\}$ which are computable by this perceptron. Prove that the upper bound is sharp, i.e., that all functions are different.
What I understand:
A perceptron is a very simple network with $n$ input nodes and a weight assigned to each input node; the weighted inputs are then summed and compared against a threshold ($\theta$).
In this example, there are $n$ input nodes, and the value of each input node is either $−1$ or $1$. And we want to map them to outputs of either $−1$ or $1$.
What I'm confused about: Is it asking how many different ways you can map input values in $\{−1, 1\}$ to outputs in $\{−1, 1\}$?
For example, is the answer, where each tuple in this list is input1, input2 and label, as described above:
$$[(1,1,1), (1,1,-1), (-1,1,-1), (-1,1,1), (1,-1,1), (1,-1,-1), (-1,-1,-1)]$$
",42926,,2444,,1/3/2021 12:01,1/3/2021 12:01,What is the smallest upper bound for a number of functions in a range that are computable by a perceptron?,This is mostly an implementation architecture problem, and the thing is that basically you can implement anything in the traditional setting. To do so instead of having Env<->Agent1<->Agent2
, you should have Agent1<->SuperEnv<->Agent2
where SuperEnv
contains Env
, and simply uses the reward given to SuperEnv
by Agent1
and passes it to Agent2
.
I know this might seem a little counter-intuitive when comparing the implementation to the real-world problem setting, but the consistency of the RL structure (i.e. an Environment that interacts with all the agents) is very important for your solutions to be easily understandable by others.
",42903,,,,,1/2/2021 20:09,,,,2,,,,CC BY-SA 4.0 25531,1,,,1/3/2021 0:58,,1,25,"I'm doing character embedding for NLP tasks using one-dimensional convolutional neural networks (see Chiu and Nichols (2016) for the motivation). I haven't found any empirical evidence of whether or not marking the word boundaries makes a difference. As an example, a 1-D CNN with kernel size 2 would take "the"
as input and use {"th", "he"}
in its filters. But if I explicitly marked the boundaries it would give me {"t", "th", "he", "h"}
.
Is there a go-to paper or project that definitively answers this question?
",19703,,19703,,1/3/2021 18:30,1/3/2021 18:30,Is there any research work that shows that we should explicitly mark the word boundaries for 1D CNNs?,Does it make sense to incorporate constant states in the Markov Decision Process and employ a reinforcement learning algorithm to solve it?
For instance, for applications of personalization, I would like to include users' attributes, like age and gender, as states, along with other changing states. Does it make sense to do so? I have a sequential problem, so I assume the contextual bandit does not fit here.
",23707,,2444,,1/3/2021 19:33,1/3/2021 19:33,Does it make sense to include constant states into reinforcement learning formulation?,It is, I suppose, a philosophical question whether data that describes a whole episode and does not respond to events within it is part of the state, or is part of some other structure.
However, the practical response is to view such descriptive data as defining an instance of a class of related environments, and to include it in the state features. This may be done for two main reasons:
1. The static data is a relevant parameter of the environment, affecting state transitions and rewards.
2. It is possible to generalise over the population of all values that the parameters can take.
In simple environments, generalisation might only be that the same agent can learn about all variations in a single combined training session. You could use a tabular RL method, starting randomly with one of the possible variations until all were sufficiently covered.
In more complex environments, generalisation may also occur through function approximation, in a similar manner to contextual bandits. In your personalisation example, you are not expecting to train the agent for all possible user descriptions, but hope that people with similar age, gender, etc. descriptions will respond similarly to an agent that personalises content.
Philosophically, the contextual data is either part of a larger state space (with a restriction that transitions between different contexts do not happen within an episode), or it is metadata that impacts the "real" state transitions and rewards. Pragmatically, to allow the data to influence value functions and policies, it is necessary to use it in the arguments of those functions. Whether you then view it as part of the state feature vector or as something that is concatenated to state features is a personal choice. Most of the literature I have seen assumes without comment that it is part of the state.
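A minimal sketch (my own illustration, with made-up feature values) of the pragmatic point above: the static user attributes are simply concatenated to the changing state features before being passed to the value function or policy.

```python
import numpy as np

user_features = np.array([0.34, 1.0])       # e.g. normalised age, gender flag (fixed per episode)
dynamic_state = np.array([0.1, 0.7, 0.0])   # features that change within the episode

agent_input = np.concatenate([dynamic_state, user_features])
print(agent_input.shape)                     # (5,) -- one combined feature vector for the agent
```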
",1847,,1847,,1/3/2021 10:35,1/3/2021 10:35,,,,0,,,,CC BY-SA 4.0 25534,1,25577,,1/3/2021 8:01,,1,221,"I have an interesting problem related to training the model on two different datasets for the target feature on images taken on different conditions, which might affect the model's ability to generalize.
To explain I will give examples of images from the two different datasets.
Dataset 1 sample image:
Dataset 2 sample image:
As you can see, the images are captured in two completely different conditions. I am afraid that the model will infer from the background information, which it shouldn't use to predict the plant diseases. What makes the problem worse is that some plant diseases only exist in one dataset and not the other; if all the diseases were contained in both datasets, then I wouldn't think there would be a problem.
I am assuming I need a way to unify the background by somehow detecting the leaf pixels in the images and unifying the background in a way that makes the model focus on the important features.
I've tried some segmentation methods but the methods I tried don't always give desirable results for all the images.
What is the recommended approach here? All help is appreciated
Further explanation of the problem.
OK, so I will explain one more thing: my model trained on the two datasets works fine when training and validating; it got 94% accuracy.
The problem is, even though the model performs well on the datasets I have, I am afraid that when I use the model on real-life conditions (say someone capturing an image with their phone) the model will be heavily biased towards predicting labels in the second dataset (the one with actual background) since the background is similar and it somehow associated the background with the inference process.
I have tried downloading a leaf image of a label that is contained in the first dataset (the one with the white background), where the image had a real-life background. The model, as expected, failed to predict the correct label and predicted a label contained in the second dataset; I am assuming it was due to the background. I have tried this experiment multiple times, and the model consistently failed in similar scenarios.
I used some interpretability techniques as well to visualize the important pixels, and it seems like the model is using the background for inference, but I am not an expert in interpreting these graphs, so I am not 100% sure.
",43480,,2444,,1/3/2021 14:06,1/4/2021 20:09,Training a classifier on different datasets with different image conditions for different labels causes the model to infer using the background,Your problem fundamentally is that you are confusing what the state and actions are in this setting. Webpages are not your states; your state is the entire priority queue of (website-outlink)
pairs + the (new_website-outlink)
pairs. Your action is which pair you select.
Now, this is a variable-sized state-space and variable-sized action-space problem setting at the same time. To deal with this, let's start by noting that `state==observation` need not hold (in general). So what is your observation? Your observation is a variable-sized batch of either:
(website-outlink)
pairs ornext_website
(where each next_website
is determined by its corresponding pair)Both of these observations could work just fine, choosing between one or the other is just a matter of whether you want your agent to learn "which links to open before opening them" or "which links are meaningful (after opening them)".
What your priority queue is essentially doing is just adding a neat trick that:
website
, but the list/batch of website-outlink
)new_website
, but selecting an outlink from all available choices in the queue)Note however that to actually have the second saving it is crucial to store the Q-values for each pair!!!
Last important thing to note is that in a scenario where you use a Replay Buffer (which I guess is likely given that you chose a DQN), you can't use the priority queue whilst learning from the RB. To see why (and to see in detail how your learning process actually looks like), start by remembering that your Q-value updates are given by the formula here; your state s_t
is a (quasi-ordered1) batch of pairs. Q(s_t, a_t)
is just the output of running your DQN regression on just the best website/pair in this batch (you have to add an index to denote the best choice when adding transitions to the RB, in order to be consistent about which action was taken from this state). To compute the estimate of the optimal future value however you will have to recompute the Q-value of every single website/pair in the next state. You CANNOT use the priority queue when training from the RB.
1 You have the priority queue ordered for all websites you had in it whilst looking at the last website, but all the new_website-outlink
pairs you are now adding are not ordered yet. You still have to run the agent on them and then you can order them with the rest of the priority queue to generate the next state (which still will not be ordered because you will have new_new_website-outink
pairs).
I was facing a problem I mentioned in a previous question, but after a while I realized that maybe the problem is in the dataset, not in the learning rate.
I build the dataset from white positions only i.e the boards when it's white's turn.
Each data set consists of one game.
First, I tried to let the agent play a game and then learn from it immediately, but that did not work, and the agent converges to one style of playing (it only loses or wins against itself, or draws in a stupid way).
Second, I tried to let the agent play about 1000 games against itself and then train on each game separately, but that also does not work.
Note: the first and second approaches describe one iteration of the learning process; I let the agent repeat them, so in total it trained on about 50000 games.
Is my approach wrong? Must my dataset be built in another way? Maybe train the agent on several games at once?
My project is here if someone needs to take a closer look at it: CheckerBot
",36578,,36578,,1/3/2021 12:25,1/3/2021 12:25,Is training on single game each time appropriate for an agent to learn to play checkers,I have a problem that arose as part of a NEAT (Neuro Evolution Through Augmenting Topologies) implementation that I am writing. I am wanting it to produce topologies or graphs that describe neural networks, similar to the one below.
Here, nodes `0` and `1` are inputs, and `4` is the output node; the rest of the nodes are hidden nodes. Each of these nodes can have some activation function defined for them (it is not necessary that all the hidden nodes have the same activation function).
Now, I want to perform the forward pass of this neural network with some data, and, based on how well it performed in that task, I assign it with a fitness value, which is used as part of the NEAT evolutionary algorithm to move towards better architectures and weights.
So, as part of the evolution process, I can have connections that can cause internal loops in the hidden layers and there is the possibility that a skip connection is made. Because of this, I feel the regular matrix-based forward pass (of fully connected MLPs) will not work in order to perform the forward pass of these evolved neural networks, and hence I want to know if an algorithm exists that can solve this problem.
In short, I want this neural network to just take the inputs and provide me outputs - no training involved at all, so I'm not interested in the back-propagation part now.
The only way to solve this that I see is to use something along the lines of a job queue (the queue will consist of the nodes that need processing, in order). I feel this is extremely inefficient, and I cannot give this simulation method a proper stop condition, or even decide when to take the output from the neural network graph and consider it.
Can anybody at least point me in the right direction?
",43484,,2444,,1/3/2021 14:23,1/28/2021 8:00,"How can I perform the forward pass in a neural network evolved with NEAT, given that some connections may not exist or there may be loopy connections?",Another good resource is the free CatalyzeX browser extension — it adds in-line links to any relevant code wherever you come across papers on various websites: AI/ML Papers with Code Everywhere - CatalyzeX
The corresponding website is catalyzeX.com.
Full disclosure: I'm one of the creators. It's actively maintained and all feedback and requests are welcome!
",43485,,2444,,5/10/2021 0:20,5/10/2021 0:20,,,,0,,,,CC BY-SA 4.0 25542,1,,,1/3/2021 12:56,,1,33,"Given the original paper (https://arxiv.org/pdf/1809.02864.pdf), I would like to implement the Accelegrad algorithm for which I report the pseudocode of the paper:
In the pseudocode, the authors refer to a compact convex set $K$ of diameter $D$. The question is whether I can know such elements. I think that they are theoretical conditions to satisfy some theorems. The problem is that the diameter $D$ is used in the learning rate and also the convex set $K$ is used to perform the projection of the gradient descent. How can I proceed?
",32694,,2444,,1/3/2021 14:22,1/3/2021 14:22,How to derive compact convex set K and its diameter D to program Accelegrad algorithm in practice?,I have been trying to solve the OpenAI lunar lander game with a DQN taken from this paper
https://arxiv.org/pdf/2006.04938v2.pdf
The issue is that it takes 12 hours to train 50 episodes so something must be wrong.
import os
import random
import gym
import numpy as np
from collections import deque
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import Model
ENV_NAME = "LunarLander-v2"
DISCOUNT_FACTOR = 0.9
LEARNING_RATE = 0.001
MEMORY_SIZE = 2000
TRAIN_START = 1000
BATCH_SIZE = 24
EXPLORATION_MAX = 1.0
EXPLORATION_MIN = 0.01
EXPLORATION_DECAY = 0.99
class MyModel(Model):
def __init__(self, input_size, output_size):
super(MyModel, self).__init__()
self.d1 = Dense(128, input_shape=(input_size,), activation="relu")
self.d2 = Dense(128, activation="relu")
self.d3 = Dense(output_size, activation="linear")
def call(self, x):
x = self.d1(x)
x = self.d2(x)
return self.d3(x)
class DQNSolver():
def __init__(self, observation_space, action_space):
self.exploration_rate = EXPLORATION_MAX
self.action_space = action_space
self.memory = deque(maxlen=MEMORY_SIZE)
self.model = MyModel(observation_space,action_space)
self.model.compile(loss="mse", optimizer=Adam(lr=LEARNING_RATE))
def remember(self, state, action, reward, next_state, done):
self.memory.append((state, action, reward, next_state, done))
def act(self, state):
if np.random.rand() < self.exploration_rate:
return random.randrange(self.action_space)
q_values = self.model.predict(state)
return np.argmax(q_values[0])
def experience_replay(self):
if len(self.memory) < BATCH_SIZE:
return
batch = random.sample(self.memory, BATCH_SIZE)
state_batch, q_values_batch = [], []
for state, action, reward, state_next, terminal in batch:
# q-value prediction for a given state
q_values_cs = self.model.predict(state)
# target q-value
max_q_value_ns = np.amax(self.model.predict(state_next)[0])
# correction on the Q value for the action used
if terminal:
q_values_cs[0][action] = reward
else:
q_values_cs[0][action] = reward + DISCOUNT_FACTOR * max_q_value_ns
state_batch.append(state[0])
q_values_batch.append(q_values_cs[0])
# train the Q network
self.model.fit(np.array(state_batch),
np.array(q_values_batch),
batch_size = BATCH_SIZE,
epochs = 1, verbose = 0)
self.exploration_rate *= EXPLORATION_DECAY
self.exploration_rate = max(EXPLORATION_MIN, self.exploration_rate)
def lunar_lander():
env = gym.make(ENV_NAME)
observation_space = env.observation_space.shape[0]
action_space = env.action_space.n
dqn_solver = DQNSolver(observation_space, action_space)
episode = 0
print("Running")
while True:
episode += 1
state = env.reset()
state = np.reshape(state, [1, observation_space])
scores = []
score = 0
while True:
action = dqn_solver.act(state)
state_next, reward, terminal, _ = env.step(action)
state_next = np.reshape(state_next, [1, observation_space])
dqn_solver.remember(state, action, reward, state_next, terminal)
dqn_solver.experience_replay()
state = state_next
score += reward
if terminal:
print("Episode: " + str(episode) + ", exploration: " + str(dqn_solver.exploration_rate) + ", score: " + str(score))
scores.append(score)
break
if np.mean(scores[-min(100, len(scores)):]) >= 195:
print("Problem is solved in {} episodes.".format(episode))
break
env.close
if __name__ == "__main__":
lunar_lander()
Here are the logs
root@b11438e3d3e8:~# /usr/bin/python3 /root/test.py
2021-01-03 13:42:38.055593: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.338231: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2021-01-03 13:42:39.368192: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.368693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-01-03 13:42:39.368729: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.370269: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-01-03 13:42:39.371430: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-01-03 13:42:39.371704: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-01-03 13:42:39.373318: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-01-03 13:42:39.374243: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-01-03 13:42:39.377939: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-01-03 13:42:39.378118: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.378702: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.379127: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-01-03 13:42:39.386525: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3411185000 Hz
2021-01-03 13:42:39.386867: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fb44c0 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2021-01-03 13:42:39.386891: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2021-01-03 13:42:39.498097: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.498786: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x4fdf030 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-01-03 13:42:39.498814: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX 1080, Compute Capability 6.1
2021-01-03 13:42:39.498987: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.499416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX 1080 computeCapability: 6.1
coreClock: 1.8095GHz coreCount: 20 deviceMemorySize: 7.92GiB deviceMemoryBandwidth: 298.32GiB/s
2021-01-03 13:42:39.499448: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.499483: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2021-01-03 13:42:39.499504: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2021-01-03 13:42:39.499523: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2021-01-03 13:42:39.499543: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2021-01-03 13:42:39.499562: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2021-01-03 13:42:39.499581: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2021-01-03 13:42:39.499643: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.500113: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.500730: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0
2021-01-03 13:42:39.500772: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2021-01-03 13:42:39.915228: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2021-01-03 13:42:39.915298: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0
2021-01-03 13:42:39.915322: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N
2021-01-03 13:42:39.915568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.916104: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2021-01-03 13:42:39.916555: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6668 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080, pci bus id: 0000:01:00.0, compute capability: 6.1)
Running
2021-01-03 13:42:40.267699: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
This is the GPU stats
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.66 Driver Version: 450.66 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GTX 1080 Off | 00000000:01:00.0 On | N/A |
| 0% 53C P2 46W / 198W | 7718MiB / 8111MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
As you can see, TensorFlow does not seem to compute on the GPU, even though it reserves the memory, so I'm assuming it's because the inputs to the neural network are too small and it uses the CPU instead.
To make sure the GPU was installed properly, I ran a sample from their documentation and it uses the GPU.
Is it an issue with the algorithm or the code? Is there a way to utilize the GPU in this case?
Thanks!
",30875,,,,,1/3/2021 16:14,Simple DQN too slow to train,When it comes to GPU usage,
nvidia-smi
shows the usage at the time it was executed. You should try running
watch -n0.01 nvidia-smi
to see the usage of the GPU every 0.01 seconds. It should output some small usage for the current model, like 5%. You could try to increase your model, to e.g.
self.d1 = Dense(1024, input_shape=(input_size,), activation="relu")
self.d2 = Dense(1024, activation="relu")
self.d3 = Dense(output_size, activation="linear")
to see if the usage of GPU increased.
",,user43110,,,,1/3/2021 16:02,,,,1,,,,CC BY-SA 4.0 25545,2,,25543,1/3/2021 16:14,,2,,"Is it training at all? Or is agent performance not improving over time? Q learning can be pretty unstable. I would recommend logging the sum of rewards received by the agent at the end of each episode and the model loss to help in the debugging process. The sum of rewards will show you if the agent is improving over time and the model loss will give you a rough idea about how stable the convergence is. I would recommend using tensorboard to log these metrics (https://www.tensorflow.org/tensorboard/get_started#using_tensorboard_with_other_methods). You will be able to monitor these metrics throughout the training process. You could also just print these metrics at the end of every epoch and monitor them in your console. You really just need someway to see what's going on during training.
The paper you linked also mentions double q learning, which does not seem to be implemented in your code. Vanilla q learning has a reputation of being overoptimistic in the values that it assigns to states. This results in compounding approximation errors, which tend to destabilize learning. Using double q learning may help speed up convergence. If you need help with double q learning check out this paper: https://arxiv.org/pdf/1509.06461.pdf, and this github page: https://github.com/jihoonerd/Deep-Reinforcement-Learning-with-Double-Q-learning/blob/master/ddqn/agent/ddqn_agent.py
If you use double q learning, you may have to write your own custom training loop. This can be achieved by using the gradient tape object. Make sure to wrap this new function in a tf.function decorator. This will tell the TensorFlow back-end to compile that bit of code, making it run faster (https://www.tensorflow.org/guide/function). There are also some handy speed up tips in this post (https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic). They even wrap the environment step functions in tf.functions. The article uses actor-critic, which is a combination of policy gradient and q learning techniques, but you can swap out their neural network update code with the q learning functionality that you need.
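To make the above concrete, here is a minimal sketch of what a double q learning update with a gradient tape could look like. The names (online_net, target_net, optimizer) and the hyperparameters are assumptions for illustration, not taken from your code:
import tensorflow as tf

@tf.function  # compiles the update into a graph, which usually speeds it up
def double_dqn_step(online_net, target_net, optimizer,
                    states, actions, rewards, next_states, dones, gamma=0.99):
    # Double Q-learning: select the next action with the online network,
    # but evaluate it with the target network.
    next_actions = tf.argmax(online_net(next_states), axis=1)
    next_q = tf.gather(target_net(next_states), next_actions, axis=1, batch_dims=1)
    targets = rewards + gamma * (1.0 - dones) * next_q

    with tf.GradientTape() as tape:
        q_taken = tf.gather(online_net(states), actions, axis=1, batch_dims=1)
        loss = tf.reduce_mean(tf.square(tf.stop_gradient(targets) - q_taken))
    grads = tape.gradient(loss, online_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, online_net.trainable_variables))
    return loss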
",40428,,,,,1/3/2021 16:14,,,,2,,,,CC BY-SA 4.0 25547,2,,12671,1/3/2021 20:36,,1,,"As opposed to what is written in this answer, you can have the analytical expression of the function that the neural network computes, even if that neural network computes a non-linear function. Take a look at this example: i.e. it is just the expression that you use to perform the forward pass of the neural network. The problem is how to interpret this function, how it generally behaves, and how it is different from the usually unknown function that the neural network is supposed to approximately compute: that's why neural networks are called black-box models, because they are not easily interpretable.
",2444,,,,,1/3/2021 20:36,,,,0,,,,CC BY-SA 4.0 25548,1,,,1/3/2021 21:49,,0,167,"I am reading this paper, that is discussing the use of distance metrics for character recognition predicton.
I can see the advantages of using a distance metric in predictions like character recognition: you can model a set of known characters' pixel vectors, then measure a new unseen vector against this model, get the distance, and, if the distance is low, predict that this unseen vector belongs to a particular class.
I'm wondering if there's any disadvantages to using distance metrics as the cost function in character recognition? For example, I was thinking maybe the distance calculation is slow for large images (would have to calculate the distance between each item in two long image vectors)?
",42926,,,,,10/1/2021 3:16,What are the disadvantages to using a distance metric in character recognition prediction,As I see it, the question boils down to the comparison between distance (function/metric) based Optical Character Recognition (OCR) and (for example) OCR done by means of Convolutional Neural Networks (CNNs). Particularly, it focuses on the cons of the former option.
There are a few potential problems associated with using distance based OCR systems. First of all, this approach requires an appropriate distance metric to deliver good results. Different types of distance metrics/functions are sensitive to different features in the input images. For example, some functions might penalize absolute differences, while others penalize squared differences, where the latter punishes differences of magnitude $> 1$ stronger than the former, while the former penalizes differences of (absolute) magnitude $< 1$ stronger than the latter. Which type of distance metric works best for a given problem has commonly to be determined empirically.
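As a small, hypothetical illustration of how two common distance functions weigh differences (the data here is made up), a sketch in Python:
import numpy as np

# Two flattened 28x28 character images (made-up data for illustration).
a = np.random.rand(28 * 28)
b = np.random.rand(28 * 28)

l1 = np.sum(np.abs(a - b))          # penalizes absolute differences
l2 = np.sqrt(np.sum((a - b) ** 2))  # penalizes squared differences more strongly for large gaps
print(l1, l2)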
Also, distance based OCR systems may possibly be more sensitive to, for example, different levels in illumination than CNNs. While both sorts of classifiers could profit from data augmentation (i.e. adding variations of the existing training data to the dataset), OCR based on CNNs has the benefit that CNN training procedures produce classifiers that commonly generalize well to slight variations in the incoming data, while some slight novel tilt (or again variation in illumination) may break the distance based classification procedure, but may have not too detrimental effects on CNN based classifiers due to their rather strong generalization capabilities.
Of course, one can also try to increase the robustness of distance based OCR systems, but this is commonly associated with developing exhaustive preprocessing pipelines to standardize the appearance of incoming character images. Thus, in terms of system design, it is often easier to set up a CNN-based neural network architecture and train it (possibly drawing upon regularization to boost generalization of the trained system even further) than trying to design complicated preprocessing strategies to shift, rotate, and normalize character images exhaustively to boost performance of a distance based OCR system.
To sum up, after all, there are a lot of potential issues related to using plain distance (metric/function) based OCR systems, most of which, however, can strongly be alleviated when drawing upon clever (pre-)processing pipelines to standardize input images before performing distance based classification.
From all those issues mentioned above, the strongest disadvantage of this technique, however, (which cannot be alleviated so easily) is that it doesn't generalize too easily to novel input data, where CNNs might perform better with appropriate training data (+ augmentation and regularization in general).
You also mentioned the aspect of computational cost associated with distance based classification. First of all, I am not an expert in computational complexity/cost related questions. However, most image related computations can be executed efficiently on graphics cards. So, subtracting images pixel-wise, for example, is not very costly, given that it can be executed on a graphics card. But of course, when using distance based OCR, the computational cost associated with computing distances and comparing distance values scales with the reference dataset size. Using CNNs, the time needed to perform a classification is always the same, irrespective of the training dataset size.
",37982,,,,,1/4/2021 0:27,,,,0,,,,CC BY-SA 4.0 25552,2,,25481,1/4/2021 0:39,,1,,"In image segmentation the target is actually an image, with the same dimensions as the input, where each pixel has a label depending on which class it represents. It is not uncommon for such a dataset to have a "background" class that essentially consists of the pixels not belonging to any other class. If not you can always group together classes typically associated with background (e.g. "sky", "cloud", "grass", "mountain", etc.) to form the class "background". Likewise you could group all other possible classes of interest (e.g. "person", "car", "horse", etc.) into the class "foreground". With this dataset you could train an image segmentation model that predicts if a pixel belongs to the background or the foreground, without actually classifying it into a "person" or a "car".
So suppose you want to make your own removal.ai, you could:
I was studying the off-policy policy improvement method. Then I encountered importance sampling. I completely understood the mathematics behind the calculation, but I am wondering what is the practical example of importance sampling.
For instance, in a video, it is said that we need to calculate the expected value of a biased dice, here $g(x)$, in terms of the expected value of fair dice, $f(x)$. Here is a screenshot of the video.
Why do we need that, when we have the probability distribution of the biased dice?
",43507,,2444,,1/4/2021 22:21,4/7/2021 9:40,Why do we need importance sampling?,The basic approach to non-maximum-suppression makes sense, but I am kind of confused about how you handle nested bounding boxes.
Suppose you have two predicted boxes, with one completely enclosing another. What happens under this circumstance (in particular, in the case of the Single Shot MultiBox Detector)? Which bounding box do you select?
",32390,,2444,,1/7/2021 17:55,1/7/2021 17:55,How are nested bounding boxes handled in object detection (and in particular in the case of the SSD)?,Let us assume that I am working on a dataset of black and white dog images.
Each image is of size $28 \times 28$.
Now, I can say that I have a sample space $S$ of all possible images. And $p_{data}$ is the probability distribution for dog images. It is easy to understand that all other images get a probability value of zero. And it is obvious that $n(S)= 2^{28 \times 28}$.
Now, I am going to design a generative model that samples from $S$ according to $p_{data}$ rather than sampling uniformly at random.
My generative model is a neural network that takes random noise (say, of length 100) and generates an image of size $28 \times 28$. The network is learning a function $f$, which is totally different from the function $p_{data}$, because $f$ maps $\mathbb{R}^{100}$ to $S$ while $p_{data}$ maps $S$ to $[0,1]$.
In the literature, I often read the phrases that our generative model learned $p_{data}$ or our goal is to get $p_{data}$, etc., but in fact, they are trying to learn $f$, which just obeys $p_{data}$ while giving its output.
Am I going wrong anywhere or the usage in literature is somewhat random?
",18758,,,,,1/4/2021 8:16,Confusion between function learned and the underlying distribution,You're right! The generative model $f$ is not the same as the probability density (p.d.f.) function $p_{data}$. The kind of phrases you've referred to are to be interpreted informally. You learn $f$ with the hope that sampling a latent vector $z$ from some known distribution (from which it is easy to sample), results in $f(z)$ that has the probability density function $p_{data}$. However, merely learning $f$ does not give you the power to estimate what $p_{data}(x)$ is for some image $x$. Learning $f$ only gives you the power to sample according to $p_{data}(\cdot)$ (if you've learned an accurate such $f$).
",36974,,,,,1/4/2021 8:16,,,,0,,,,CC BY-SA 4.0 25559,2,,25553,1/4/2021 9:20,,7,,"Importance sampling is typically used when the distribution of interest is difficult to sample from - e.g. it could be computationally expensive to draw samples from the distribution - or when the distribution is only known up to a multiplicative constant, such as in Bayesian statistics where it is intractable to calculate the marginal likelihood; that is
$$p(\theta|x) = \frac{p(x|\theta)p(\theta)}{p(x)} \propto p(x|\theta)p(\theta)$$
where $p(x)$ is our marginal likelihood that may be intractable and so we can't calculate the full posterior and so other methods must be used to generate samples from this distribution. When I say intractable, note that
$$p(x) = \int_{\Theta} p(x|\theta)p(\theta) d\theta$$
and so intractable here means that either a) the integral has no analytical solution or b) a numerical method for computing this integral may be too expensive to run.
In the instance of your die example, you are correct that you could calculate the theoretical expectation of the biased die analytically and this would probably be a relatively simple calculation. However, to motivate why importance sampling may be useful in this scenario, consider calculating the expectation using Monte Carlo methods. It would be much simpler to uniformly sample a random integer from 1-6 and calculate the importance sampling ratio $x \frac{g(x)}{f(x)}$ than it would be to draw samples from the biased die, not least because most programming languages have built-in methods to randomly sample integers.
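A small Monte Carlo sketch of that idea in Python (the biased-die probabilities here are made up for illustration):
import numpy as np

rng = np.random.default_rng(0)

g = np.array([0.05, 0.05, 0.10, 0.20, 0.30, 0.30])  # biased die: the distribution of interest
f = np.full(6, 1.0 / 6.0)                           # fair die: easy to sample from

faces = rng.integers(1, 7, size=100_000)            # cheap uniform samples
weights = g[faces - 1] / f[faces - 1]               # importance ratios g(x)/f(x)
estimate = np.mean(faces * weights)                 # estimate of E_g[X]

print(estimate, np.sum(np.arange(1, 7) * g))        # the two numbers should be close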
As your question is tagged as reinforcement learning I will add why it is useful in the RL domain. One reason is that it may be our policy of interest is expensive to sample from, so instead we can just generate actions from some other simple policy whilst still learning about the policy of interest. Second, we could be interested in a policy that is deterministic (greedy) but still be able to explore, so we can have an off-policy distribution that explores much more frequently.
NB: it may not be clear how you can use importance sampling if the distribution is only known up to a constant so see this answer for an explanation.
",36821,,36821,,4/7/2021 9:40,4/7/2021 9:40,,,,0,,,,CC BY-SA 4.0 25560,1,,,1/4/2021 9:50,,1,196,"For action recognition or similar tasks, one can either use 3D CNN or combine 2D CNN with optical flow. See this paper for details.
Can someone tell the pros/cons of each, in terms of accuracy, cost such as computation and memory requirement, etc.? In other words, is the computation overhead of 3D CNN justified by its accuracy improvement? Under what scenarios would one prefer one over another?
3D CNNs are also used for volumetric data, such as MRI images. Can 2D CNN + optical flow be used here?
I understand 2D CNNs and 3D CNNs, but I do not know about optical flow (my background is not computer-vision).
",20491,,2444,,1/5/2021 12:03,1/5/2021 12:03,What are the pros and cons of 3D CNN and 2D CNN combined with optical flow for action recognition?,I've been working on research into reproducing social behavior using multi-agent reinforcement learning. My focus has been on a GridWorld-style game, but I was thinking that maybe a simpler Prisoner's Dilemma game could be a better approach. I tried to find existing research papers in this direction, but couldn't find any, so I'd like to describe what I'm looking for in case anyone here knows of such research.
I'm looking for research into scenarios where multiple RL agents are playing Iterated Prisoner's Dilemma with each other, and social behaviors emerge. Let me specify what I mean by "social behaviors." Most research I've seen into RL/IPD (example) focuses on how to achieve the ideal strategy, and how to get there the fastest, and what common archetypes of strategies emerge. That is all nice and well, but not what I'm interested in.
An agent executing a Tit-for-Tat strategy is giving positive reinforcement to the other player for "good" behavior, and negative reinforcement for "bad" behavior. That is why it wins. My key point here is that this carrot-and-stick method is done individually rather than in groups. I want to see it evolve within a group.
I want to see an entire group of agents evolve to punish and reward other players according to how they behaved with the group. I believe that fascinating group dynamics could be observed in that scenario.
I programmed such a scenario a decade ago, but by writing an algorithm manually, not using deep RL. I want to do it using deep RL, but first I want to know whether there are existing attempts.
Does anyone know whether such research exists?
",25904,,2444,,1/12/2021 0:01,10/9/2021 17:07,Research into social behavior in Prisoner's Dilemma,I was wondering which AI techniques and architectures are used in environments that need predictions to continually improve by the feedback of the user. So let's take some kind of recommendation system, but not for a number of $n$ products, but for some problem of higher space. It's initially trained, but should keep improving by the feedback and corrections applied by the user. The system should continue to improve its outcomes on-the-fly in production, with each interaction.
Obviously, (deep) RL seems to fit this problem, but can you really deploy this learning process to production? Is it really capable of improving results on-the-fly?
Are there any other techniques or architectures that can be used for that?
I'm looking for different approaches in general, in order to be able to compare them and find the right one for problems of that kind. Of course, there always is the option to retrain the whole network, but I was wondering whether there are some online, on-the-fly techniques that can be used to adjust the network?
",43542,,2444,,1/4/2021 15:19,1/4/2021 15:19,What are good techniques for continuous learning in production?,When I look up the generalised delta rule equation for back-propogation, I am seeing two conflicting equations.
For example, here (slide 20), given $o$ (the output, defined in slide 18), $z$ (the activated output) and a target $t$, defined in slide 17, then:
$\frac{\delta E}{\delta Z} = o(1-o)(o-t)$
When I look for the same equation else, e.g. here, slide 14, it says, given $o$ the output and $y$ the label, then (using slightly different notation $\beta_k$):
$\beta_k = o_k(1-o_k)(y_k-o_k)$
I can see here that these two equations are almost the same, but not quite. One subtracts the output from the target, and one subtracts the target from the output.
The reason why I'm asking this is I'm trying to do question 29 and 30 of this paper, and they are using the second equation ($\beta_k$) but my college notes (that I can't copy and paste due to copyright) define the equation according to the first equation $\frac{\delta E}{\delta Z}$. I'm wondering which way is correct, do you subtract the target from the obtained output, or vice versa?
",42926,,2444,,1/5/2021 0:31,1/5/2021 0:31,"For the generalised delta rule in back-propogation, do you subtract the target from the obtained output, or vice versa?",I have a time series with both continuous and categorical features, and I want to do a prediction task.
I will elaborate:
The data is composed of 100Hz sampling of some voltages, kind of like an ecg signal, and of some categorical features such as "green", "na" and so on.
In total, the number of features can reach 300, of which most are continuous.
The prediction should take in a chunk of frames and predict a categorical variable for this chunk of frames.
I want to create a deep learning model that can handle both categorical and continuous features.
Best I can think of is two separate losses, like MSE and cross entropy, and a hyperparameter to tune between them, kind of like regularization.
Best I could find on this subject was this, with an answer from 2015.
I wonder if something better was invented, since then, or maybe just someone here knows something better.
",21645,,21645,,1/5/2021 10:47,1/5/2021 10:47,Correct way to work with both categorical and continuous features together,This should not matter that much as long as you do enough preprocessing?
But one could use e.g. the Facenet architecture to extract the embedding vector for each face of the same subject.
Then you could compute the covariance matrix and perform PCA on it. This would give you the most significant features with which you can decide which face to take.
Alternatively, you could do something in the direction of the eigenfaces: https://en.wikipedia.org/wiki/Eigenface.
",42601,,,,,1/4/2021 14:33,,,,1,,,,CC BY-SA 4.0 25573,1,,,1/4/2021 16:10,,2,98,"Say I have an Machine/Deep learning algorithm I developed on a desktop pc to achieve a real-time classification of time series events from a sensor. Once the algorithm is trained and performs good, I want to implement it on an low power embbeded system, with the same sensor, to classify events in real-time:
Formally speaking $x_6$ is a function of $w_{16},\ w_{26}$ and $w_{36}$, that is $$x_6 =f(w_{16}, w_{26}, w_{36})=w_{16}y_1 + w_{26}y_2 + w_{36}y_3.$$ The derivative w.r.t. $w_{26}$ is $$\frac{\partial x_6}{\partial w_{26}}= \frac{\partial w_{16}y_1}{\partial w_{26}} +\frac{\partial w_{26}y_2}{\partial w_{26}} +\frac{\partial w_{36}y_3}{\partial w_{26}} = 0 +y_2 \frac{\partial w_{26}}{\partial w_{26}} + 0= y_2.$$ The first equality is obtained using the fact that the partial derivative is linear (so the derivative of the sum is the sum of the derivatives); the second equality comes from again from the linearity and from the fact that $w_{16}y_1$ and $w_{36}y_3$ are constants with respect to $w_{26}$, so their partial derivative w.r.t. this variable is $0$.
Bonus
Not really asked in the original question, but since I'm here let me have a bit of fun ;).
Let's say $x_6$ is the output of the sixth node after you apply an activation function, that is $$x_6 =\sigma(f(w_{16}, w_{26}, w_{36}))=\sigma(w_{16}y_1 + w_{26}y_2 + w_{36}y_3).$$ You can compute the partial derivative applying the properties illustrated above, with the additional help of the chain rule $$\frac{\partial x_6}{\partial w_{26}}=\frac{\partial \sigma(w_{16}y_1 + w_{26}y_2 + w_{36}y_3)}{\partial w_{26}}=\sigma'\frac{\partial w_{16}y_1}{\partial w_{26}} +\sigma'\frac{\partial w_{26}y_2}{\partial w_{26}} +\sigma'\frac{\partial w_{36}y_3}{\partial w_{26}}=y_2\sigma'$$ $\sigma'$ denotes the derivative of sigma with respect to its argument.
",42424,,2444,,1/4/2021 22:42,1/4/2021 22:42,,,,0,,,,CC BY-SA 4.0 25576,1,,,1/4/2021 19:49,,1,49,"I am re-implementing vpg and using Spinning Up as reference implementation. I noticed that the default epoch size is 4000. I also see cues in papers that big batch size is quite standard.
My implementation doesn't batch XP together just applies the update after every episode. It turns out my implementation is more sample efficient than the reference implementation on simple problems (like CartPole or LunarLander) even though I haven't added the critic yet! Of course this could be due to number of reasons, for example I've only done parameter search on my implementation.
But it would make sense anyway: bigger batch size is generally considered better only because the GPU is faster processing many samples parallel. Is this the reason here? It would make sense but it is surprising for me as I thought sample efficiency is considered more important than computing efficiency in RL.
",43565,,,,,1/4/2021 19:49,Why do we use big batch/epoch size in policy gradient methods (vpg specifically)?,I am afraid that the model will infer from the background information that it shouldn't use to predict the plant diseases, what makes the problem worse is that some plant diseases only exist in one dataset and not the other
I am afraid that when I use the model on real-life conditions (say someone capturing an image with their phone) the model will be heavily biased towards predicting labels in the second dataset (the one with actual background) since the background is similar and it somehow associated the background with the inference process.
You are right to be concerned about this. However, you are not alone in having a neural network model that relies heavily on the background or other confounding factors for your classification task.
A small paper that addresses a similar issue is Context Augmentation for Convolutional Neural Networks. Table 1 from this paper shows how the model trained and tested on only the background performs better than model trained and tested on only the foreground. They address this by making a training set where the foreground objects are given many backgrounds. Given your imbalanced datasets, with some diseases only present in one dataset and not the other, I think context augmentation might work for you. The main barrier here is that you do not have segmentation maps for these. Are your datasets too large to segment yourself? You might consider automatic segmentation networks - especially for the images with white backgrounds.
As for your interpretability methods, you should not expect too much from them though some groups are able to get good results with them using GradCAM.
Here is an alternative idea that may also solve your problem: If you are trying to make a system that can be used in the field, is it possible to have the technician or photographer put a white board under the diseased leaf?
",21471,,,,,1/4/2021 20:09,,,,0,,,,CC BY-SA 4.0 25578,2,,25516,1/4/2021 20:19,,0,,"It sounds like you may be looking for the A* search algorithm. It is a search algorithm, like DFS and BFS, but it will explore only the most promising branches based on a heuristic function you supply. The difficult part of implementing this is deciding on a low-cost, admissible heuristic.
Excited to reflect nbro's suggestion from comments.
",21471,,21471,,1/5/2021 16:16,1/5/2021 16:16,,,,6,,,,CC BY-SA 4.0 25584,1,,,1/5/2021 5:40,,1,35,"I am curious about the working of a Siamese network. So, let us suppose I am using a triplet loss for my network and I have instantiated single CNN 3 times and there are 3 inputs to the network. So, during a forward pass, each of the networks will give me an embedding for each image, and I can get the distance and calculate the loss and compare it with the output, so that my model is ready to propagate the gradients to update the weights.
The Question: How do these weights get updated during the back propagation? Just because we are using 3 inputs and 3 branches of the same network and we are passing the inputs one by one (I suppose), how do the gradients are updated? Are these series? Like the one branch will update, then the second and then the third. But won't it be a problem because each branch would try to update based on its output? If in parallel, then which branch is responsible for the gradients update? I mean to say that I am unable to get the idea how weights are updated in Siamese network. Can someone please explain in simpler terms?
",36062,,2444,,1/6/2021 12:13,1/6/2021 12:13,How do gradients are flown back into the Siamese network when branching is done?,In NEAT, the innovation of a node does not affect the evolution directly. Only the connection genes and their innovation will matter. So you can simply have whole numbers as IDs under each Genome / Network.
--EDIT-- (Complete reasoning)
In the original paper, it is clearly stated that the nodes from the better genome is taken during crossover and then only the connections are cross-over'ed (in some method) and hence the innovation numbers for the connections. NEAT is connection-centric and does not care much about evolving nodes.
Adding to that from basic neural networks theory, the nodes will never matter in a neural network because all the calculation and the learning happens in the connections. Think about a regular feed-forward network. You only care about the weight matrix which is a property of the connections instead of the nodes though both are present. Similarly in a NEAT generated network, the nodes will not matter, as all the learning takes place in the way these nodes are connected and the weights of the network.
Further, the nodes list can easily be derived from the connection list and hence marking the connections, are enough.
",43484,,43484,,1/8/2021 17:03,1/8/2021 17:03,,,,0,,,,CC BY-SA 4.0 25586,1,25590,,1/5/2021 13:11,,6,686,"I'm currently reading the paper Federated Learning with Matched Averaging (2020), where the authors claim:
A basic fully connected (FC) NN can be formulated as: $\hat{y} = \sigma(xW_1)W_2$ [...]
Expanding the preceding expression $\hat{y} = \sum_{i=1}^{L} W_{2, i \cdot } \sigma(\langle x, W_{1,\cdot i} \rangle))$, where $ i\cdot$ and $\cdot i$ denote the ith row and column correspondingly and $L$ is the number of hidden units.
I'm having a hard time wrapping my head around how it can be boiled down to this. Is this rigorous? Specifically, what is meant by the ith row and column? Is this formula for only one layer or does it work with multiple layers?
Any clarification would be helpful.
",43582,,43582,,1/6/2021 8:17,1/8/2021 17:56,How to express a fully connected neural network succintly using linear algebra?,Further to my last question, I am training a custom entity of FOODITEM to be recognized by Spacy's Name Entity Recognition engine. I am following tutorials online, following is the advise given in most of the tutorials;
Load the model or create an empty model
We can create an empty model and train it with our annotated dataset or we can use the existing spacy model and re-train with our annotated data.
But none of the tutorials tell how/why to choose between the two options. Also, I don't understand how will the choice affect my final output or the training of the model.
How do I make the choice between a pre-trained model or a blank model? What are the factors to consider?
",43434,,,,,1/6/2021 5:02,Should we use a pre-trained model or a blank model for custom entity training of NER in spacy?,I was going through the paper on U-Net. U-net consists of a contracting path followed by an expanding path. Both the paths use a regular convolutional layer. I understand the use of convolutional layers in the contracting path, but I can't figure out the use of convolutional layers in the expansive path. Note that I'm not asking about the transpose convolutions, but the regular convolutions in the expansive path.
",31749,,2444,,1/6/2021 17:17,1/6/2021 17:17,What is the use of the regular convolutional layer in expansion path of U-Net?,Many authors of research papers in AI (e.g. arXiv) write their neural networks from the ground-up, using low-level languages like C++ to implement their theories. Can existing open source frameworks also be used for this purpose, or are their implementations too limited?
Can, for example, TensorFlow be used to craft an original network architecture that shows improvements on existing benchmarks? Can original mathematical work be coded into a high-level framework like TensorFlow such that original research on network architectures/approaches be demonstrated in a paper?
A quick search reveals many papers using C++ in their implementation:
The equation $$\hat{y} = \sigma(xW_\color{green}{1})W_\color{blue}{2} \tag{1}\label{1}$$ is the equation of the forward pass of a single-hidden layer fully connected and feedforward neural network, i.e. a neural network with 3 layers, 1 input layer, 1 hidden layer, and 1 output layer, where
As an example, suppose that we have $N$ real-valued features, there are $L$ hidden units (or neurons) and $M$ output units, then the elements (feature vector and parameters) of equation \ref{1} would have the following shape
$\sigma$ is an activation function that is applicable to all elements of the matrix separately (i.e. component-wise). So, $\hat{y} \in \mathbb{R}^{1 \times M}$.
The equation
$$ \hat{y} = \sum_{\color{red}{i} = 1}^{L} W_{\color{blue}{2}, \color{red}{i} \cdot} \sigma(\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle )\label{2}\tag{2} $$
is another way of writing equation \ref{1}.
Before going to the explanation, let's try to understand equation \ref{2} and its components.
$W_{\color{green}{1}, \cdot \color{red}{i}} \in \mathbb{R}^N$ is the $\color{red}{i}$th column of the matrix that connects the inputs to the hidden neurons, so it is a vector of $N$ elements (note that we sum over the number of hidden neurons, $L$).
Similarly, $ W_{\color{blue}{2}, \color{red}{i} \cdot} \in \mathbb{R}^M$ is also a vector, but, in this case, it is a row of the matrix $ W_{\color{blue}{2}}$ (rather than a column: why? because we use $\color{red}{i} \cdot$ instead of $\cdot \color{red}{i}$, which refers to the column).
So, $\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle$ is the dot (or scalar) product between the feature vector $x$ and the $\color{red}{i}$th column of the matrix that connects the inputs to the hidden neurons, so it's a number (or scalar). Note that both $x$ and $W_{\color{green}{1}, \cdot \color{red}{i}}$ have $N$ elements, so the dot product is well-defined in this case.
$ W_{\color{blue}{2}, \color{red}{i}} \sigma(\langle x, W_{\color{green}{1}, \color{red}{i}} \rangle))$ is the product between a vector of shape $M$ and a number $\sigma(\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle )$. This is also well-defined. You can multiply a real-number with a vector, it's like multiplying the real-number with each element of the vector.
$\sum_{\color{red}{i} = 1}^{L} W_{\color{blue}{2}, \color{red}{i} \cdot} \sigma(\langle x, W_{\color{green}{1}, \cdot \color{red}{i}} \rangle )$ is thus the sum of $L$ vectors of size $M$, which makes $\hat{y}$ also have size $M$, as in equation \ref{1}.
Now, the question is: is equation \ref{2} really equivalent to equation \ref{1}? This is still not easy to see because $xW_\color{green}{1}$ is a vector of shape $L$, but, in equation \ref{2}, we do not have any vector of shape $L$, but we have vectors of shape $N$ and $M$ (and the vectors of shape $M$ are summed $L$ times). First, note that $\sigma(xW_\color{green}{1}) = h\in \mathbb{R}^L$, so $hW_\color{blue}{2}$ are $M$ dot products collected in a vector (i.e. $\hat{y} \in \mathbb{R}^M$), where the $j$th element of $\hat{y}$ was computed as a summation of $L$ elements (a dot product of two vectors is the element-wise multiplication of the elements of the vectors followed by the summation of these multiplications). Ha, still not clear!
The easiest way (for me) to see that they are equivalent is to think that $\sigma(xW_\color{green}{1})$ is a vector of $L$ elements $\sigma(xW_\color{green}{1}) = \ell = [l_1, l_2, \dots, l_L]$. Then you know that to multiply this vector with $\ell$ (from the left), you actually perform a dot product between $\ell$ and each column $W_\color{blue}{2}$. A dot product is essentially a sum, and that's why we sum in equation \ref{2}. So, essentially, in equation \ref{2}, we first multiple $l_1$ with the first row of $W_\color{blue}{2}$ (i.e. by all elements of the first row). Then we multiply $l_2$ by the second row of $W_\color{blue}{2}$. We do this for all $L$ rows, then we sum the rows (to conclude the dot product). So, you can think of equation 2 as first perform all multiplications, then summing, rather than dot product-by-dot product.
So, in my head, I have the following picture. To simplify the notation, let $A$ denote $W_\color{blue}{2}$, so $A_{ij}$ is the element at the $i$th row and $j$th column of matrix $W_\color{blue}{2}$. So, we have the following initial matrix
$$ A = \begin{bmatrix} A_{11} & A_{12} & A_{13} & \dots & A_{1M} \\ A_{21} & A_{22} & A_{23} & \dots & A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ A_{L1} & A_{L2} & A_{L3} & \dots & A_{LM} \\ \end{bmatrix} = \begin{bmatrix} W_{\color{blue}{2}, \color{red}{1} \cdot} \\ W_{\color{blue}{2}, \color{red}{2} \cdot} \\ \vdots \\ W_\color{blue}{2, \color{red}{L} \cdot} \\ \end{bmatrix} $$
Then, in the first iteration of equation \ref{2}, we do the following
$$ \begin{bmatrix} l_1 A_{11} & l_1 A_{12} & l_1 A_{13} & \dots & l_1 A_{1M} \\ A_{21} & A_{22} & A_{23} & \dots & A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ A_{L1} & A_{L2} & A_{L3} & \dots & A_{LM} \\ \end{bmatrix} $$
In the second, we do the following
$$ \begin{bmatrix} l_1 A_{11} & l_1 A_{12} & l_1 A_{13} & \dots & l_1 A_{1M} \\ l_2 A_{21} & l_2 A_{22} & l_2 A_{23} & \dots & l_2 A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ A_{L1} & A_{L2} & A_{L3} & \dots & A_{LM} \\ \end{bmatrix} $$ Until we have
$$ \begin{bmatrix} l_1 A_{11} & l_1 A_{12} & l_1 A_{13} & \dots & l_1 A_{1M} \\ l_2 A_{21} & l_2 A_{22} & l_2 A_{23} & \dots & l_2 A_{2M} \\ \vdots & \vdots & \vdots & \dots & \vdots \\ l_L A_{L1} & l_L A_{L2} & l_L A_{L3} & \dots & l_L A_{LM} \\ \end{bmatrix} $$ Then we do a reduce sum across the rows to end the dot product (i.e. for each column we sum the elements in the rows). This is exactly equivalent to first performing the dot product between $\ell$ and the first column of $W_\color{blue}{2}$, then the second column, and so on.
",2444,,2444,,1/8/2021 17:56,1/8/2021 17:56,,,,0,,,,CC BY-SA 4.0 25591,1,25593,,1/5/2021 18:51,,2,210,"I started reading up on SVM and very little is defined of what are support values. I reckon it's they are denoted as $\alpha$ in most formulations.
",43588,,2444,,1/6/2021 12:09,1/6/2021 12:09,What are support values in a support vector machine?,Your statement that researchers build their network from the ground-up using C++ or some other low level library couldn't be further from the truth.
You could take a look at this analysis showing the popularity of these two frameworks in the top ML conferences. The following Figure is taken from there.
In CVPR-2020, for example, TensorFlow and pytorch combined for over 500 papers! Furthermore, because the two most active research entities (Google and Facebook) are backing these two frameworks, they are used in some of the most impactful research studies.
I want to give some reasons that support the popularity of these frameworks, but first I'm going to rephrase your question a bit:
Why use TensorFlow/Pytorch in python rather than build your model on your own using C++?
Note: The reason I rephrased the question is because TensorFlow and PyTorch both have a C++ APIs.
Some reasons are the following
Rapid prototyping. Languages link C++, have bloated syntaxes, require low-level operations (e.g. memory management) and cannot be run interactively. This means it takes someone much less time to create and test a model in python than it does in C++.
No need to re-invent the wheel. Some operations are common in most networks (e.g. backpropagation), why re-implement them? Other functionalities are hard to implement on your own (e.g. parallel processing, GPU computation). Do data scientists need to have such a strong technical background to research neural networks?
Open-source. They benefit from being opensource and can offer a great deal of tools at your disposal for building neural networks. You want to add batchnorm to your network? No worries, just import it and add it in a single line! Also, they offer the perfect opportunity for sharing pretrained models.
They are optimized. These frameworks are optimized to run as fast as possible on GPUs (if available) or CPUs. It would be virtually impossible for someone to write code that runs as fast on his own.
In the least-squares SVM (LS-SVM) the non-zero Lagrange multipliers ($\alpha$) are the support values. The corresponding data points are the support vectors. Johan Suykens explains this in Least Squares Support Vector Machines.
",5763,,2444,,1/6/2021 12:09,1/6/2021 12:09,,,,0,,,,CC BY-SA 4.0 25594,2,,25587,1/6/2021 5:02,,1,,"The reason you would load a pre-existing model is that it offers something of value to your task (e.g. named entity recognition for food) and the cost of training it from scratch is not worth it. For example, to train GPT-3 from scratch would cost several million dollars. Typically someone will use a model like BERT and fine tune it. This is called transfer learning. With spaCy you will typically use en_core_web_sm which was trained on the OntoNotes corpus and includes named entities. Making a custom food NER using en_core_web_sm should be more accurate than making one from scratch. You should be able to build a good model with and without transfer learning fairly quickly if you have a GPU.
",5763,,,,,1/6/2021 5:02,,,,0,,,,CC BY-SA 4.0 25595,1,,,1/6/2021 5:38,,4,414,"The sklearn's documentation of the method roc_auc_score
states that the parameter multi_class can take the value 'OvR' (which stands for One-vs-Rest) or 'OvO' (which stands for One-vs-One). These values are only applicable to multi-class classification problems.
Does anyone know in what particular cases we would use OvR as opposed to OvO? In the general academic literature, is there a preference given to one?
",33734,,2444,,1/6/2021 22:00,1/6/2021 22:00,"When computing the ROC-AUC score for multi-class classification problems, when should we use One-vs-Rest and One-vs-One?",I'm using MATLAB 2019, Linux, and UNet (a CNN specifically designed for semantic segmentation). I'm training the network to classify all pixels in an image as either cell or background to get segmentations of cells in microscopic images. My problem is the network is classifying every single pixel as background, and seems to just be outputting all zeroes. The validation accuracy improves a little at the very start of the training but than plateaus at around 60% for the majority of the training time. The network doesn't seem to be training very well and I have no idea why.
Can anyone give me some hints about what I should look into more closely? I just don't even know where to start with debugging this.
Here's my code:
% Set datapath
datapath = '/scratch/qbi/uqhrile1/ethans_lab_data';
% Get training and testing datasets
images_dataset = imageDatastore(strcat(datapath,'/bounding_box_cropped_resized_rgb'));
load(strcat(datapath,'/gTruth.mat'));
labels = pixelLabelDatastore(gTruth);
[imdsTrain, imdsVal, imdsTest, pxdsTrain, pxdsVal, pxdsTest] = partitionCamVidData(images_dataset,labels);
% Weight segmentation class importance by the number of pixels in each class
pixel_count = countEachLabel(labels); % count number of each type of pixel
frequency = pixel_count.PixelCount ./ pixel_count.ImagePixelCount; % calculate pixel type frequencies
class_weights = mean(frequency) ./ frequency; % create class weights that balance the loss function so that more common pixel types won't be preferred
% Specify the input image size.
imageSize = [512 512 3];
% Specify the number of classes.
numClasses = 2;
% Create network
lgraph = unetLayers(imageSize,numClasses);
% Replace the network's classification layer with a pixel classification
% layer that uses class weights to balance the loss function
pxLayer = pixelClassificationLayer('Name','labels','Classes',pixel_count.Name,'ClassWeights',class_weights);
lgraph = replaceLayer(lgraph,"Segmentation-Layer",pxLayer);
%% TRAIN THE NEURAL NETWORK
% Define validation dataset-with-labels
validation_dataset_with_labels = pixelLabelImageDatastore(imdsVal,pxdsVal);
% Training hyper-parameters: edit these settings to fine-tune the network
options = trainingOptions('adam', 'LearnRateSchedule','piecewise', 'LearnRateDropPeriod',10, 'LearnRateDropFactor',0.3, 'InitialLearnRate',1e-3, 'L2Regularization',0.005, 'ValidationData',validation_dataset_with_labels, 'ValidationFrequency',10, 'MaxEpochs',3, 'MiniBatchSize',1, 'Shuffle','every-epoch');
% Set up data augmentation to enhance training dataset
aug_imgs = {};
numberOfImages = length(imdsTrain.Files);
for k = 1 : numberOfImages
    % Apply cutout augmentation
    img = readimage(imdsTrain,k);
    cutout_img = random_cutout(img);
    imwrite(cutout_img,strcat('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset/img_',int2str(k),'.tiff'));
end
aug_imdsTrain = imageDatastore('/scratch/qbi/uqhrile1/ethans_lab_data/augmented_dataset');
% Add other augmentations
augmenter = imageDataAugmenter('RandXReflection',true, 'RandXTranslation',[-10 10],'RandYTranslation',[-10 10]);
% Combine augmented data with training data
augmented_training_dataset = pixelLabelImageDatastore(aug_imdsTrain, pxdsTrain, 'DataAugmentation',augmenter);
% Train the network
[cell_segmentation_nn, info] = trainNetwork(augmented_training_dataset,lgraph,options);
save cell_segmentation_nn
",9983,,,,,2/9/2021 1:03,Semantic segmentation CNN outputs all zeroes,I am currently working on my master's thesis and going to apply Deep-SARSA as my DRL algorithm. The problem is that there is no datasets available and I guess that I should generate them somehow. Datasets generation seems a common feature in this specific subject as stated in [1]
When a dataset is not available, learning is performed through experience.
I am wondering how to generate datasets when the environment is not as simple as a tic-tac-toe or a maze problem and what the experience means.
PS: The environment consists of 15 mobile users and 3 edge servers, each of which covers a number of mobile users. Each mobile user might generate a computationally heavy-load task and at the beginning of each timestep and can process the task itself or requests its associated edge server to do the processing. If the associated edge server is not capable of processing, due to some reasons, it requests a nearby edge server to lend it a hand. The optimization problem (reward) is to reduce time and energy consumption (multi-objective optimization). Each server has a DRL agent that makes offloading decisions.
I'd really appreciate your suggestions and help.
",43578,,2444,,1/8/2021 0:32,1/8/2021 7:39,How should I generate datasets for a SARSA agent when the environment is not simple?,I am wondering how to generate datasets when the environment is not as simple as a tic-tac-toe or a maze problem
There is no difference in concept, which is why tic-tac-toe and maze problems are used to teach.
As you have noted, the main difference between reinforcement learning (RL) and supervised learning is that RL does not use labeled datasets. If you are using SARSA then you would not expect to use any record of previous experience either because SARSA is designed to work on-policy and online - which means that data needs to be generated during training. Training data for SARSA is typically stored only temporarily before being used, or is used immediately (you might keep a log of it for analysis or to help document your thesis, but that log will not be used for further training by the agent). This is different to Q-learning and DQN, which could in theory make use of longer-term stored experience.
You have two main choices for acquiring data:
Use a real environment. In your case, set up 15 mobile users and 3 edge servers. Instrument the environment to collect state and reward data for the agent. Implement the agent as the real decision maker in this environment.
Simulate the environment. Write a simulation that models user behaviour and server loading. Instrument that to provide state and reward data, and integrate your learning agent with it. Typically the agent will call the environment's step
function, passing the action choice as an argument and receiving reward and state data back.
If you can simulate the environment, this is likely to be preferable to you since you will likely use less compute resources (than 3 servers and 15 mobile phones) and can run the training faster than real time. Deep reinforcement learning can use a large amount of experience to converge on near-optimal policies, and fast simulations can help because they generate experience faster than reality.
You can also do both approaches. Train an initial agent in simulation, then implement the real version once it reaches a good level of performance in simulation. You can even have the agent continue to learn and refine behaviour in production. Given that you are working with SARSA, this may be an important part of the intent of your project, that the agent continues to adapt to changes in user behaviour and server load over time. In fact this is a key advantage of SARSA over Q-learning, that it should be more reliable and safe to use in such a continuous learning scenario deployed to production.
and what the experience means.
The experience in reinforcement learning is the record of states, actions and rewards that the agent encounters during training.
",1847,,1847,,1/8/2021 7:39,1/8/2021 7:39,,,,3,,,,CC BY-SA 4.0 25599,1,,,1/6/2021 9:05,,1,21,"I'm trying to replicate a paper from Google on view synthesis/lightfields from 2019: DeepView: View Synthesis with Learned Gradient Descent and this is the PDF.
Basically the input to the neural network comes from a set of cameras which number is variable, and the output is a stack of images which number is also variable. For that they use both a Fully Convolutional Network and Learned Gradient Descent.
I don't know if I am understanding this correctly: (in each LGD iteration) They use the same network for all depth slices AND all views. Is this correct?
This is the LGD network, not much important to the question but it helps you understand the setup. You can see at least 3 LGD iterations. Part b) is just the calculation they do in the "green gradient boxes" on part a).
This is the inside of the CNNs. On each LGD iteration they use basically the same architecture, but the weights are different per iteration.
For me the confusing part is that they represent each view as a different network, but they don't represent each depth slice as a different network. As you can see in the next image they do say that they use the same parameters for all depth slices, and that the order of the views doesn't matter so it must be that they're also reusing the parameters for all views, right? So if I understand correctly, this is a matter of reusing the same model for all depths and all views. BTW note that the maxpool kind of operation is over each view.
Also I have a question on the practicalities of the implementation. I'll be implementing this with normal 2D convolution layers, so if I want them to run independent of the views and depth slices, I guess I could concatenate views and depth slices in the "batch" dimension? I mean, before the maximum k operation, and then reuse the output.
This is what they say:
Thanks
",43595,,43595,,1/6/2021 15:11,1/6/2021 15:11,"In the DeepView paper, do they use the same FCN for all depth slices AND all views?",Consider an image that contains one can (or bottle, or any similar oval object), which has texts all over it. In the image below, I have many bottles, but you can assume that each image only contains one such object.
As we can see, in each can, the text can flow from left to right, and any OCR system may miss the text on the left and right sides of the can, as they are not aligned with the camera angle.
So, is there any solution/s for this, like preprocessing in a certain way, so that we can read the text or make this round object into a straight one? (If there is any Python program that can solve this problem, could you please share it with me?)
",37980,,2444,,1/8/2021 0:40,1/8/2021 0:40,"In OCR, how should I deal with the warped text on the sides of oval objects?",Starting from my own understanding, and scoped to the purpose of image generation, I'm well aware of the major architectural differences:
A GAN's generator samples from a relatively low dimensional random variable and produces an image. Then the discriminator takes that image and predicts whether the image belongs to a target distribution or not. Once trained, I can generate a variety of images just by sampling the initial random variable and forwarding through the generator.
A VAE's encoder takes an image from a target distribution and compresses it into a low dimensional latent space. Then the decoder's job is to take that latent space representation and reproduce the original image. Once the network is trained, I can generate latent space representations of various images, and interpolate between these before forwarding through the decoder which produces new images.
What I'm more interested in is the consequences of said architectural differences. Why would I choose one approach over the other? (For example, if GANs typically produce better-quality images, any ideas why that is so? Is it true in all cases or just some?)
",16871,,16871,,1/6/2021 12:30,1/6/2021 12:30,What are the fundamental differences between VAE and GAN for image generation?,The point is that in the expansive path you have two forms of information:
Intuitively you can think of it as this: high-level features help the network tell which areas to group together, while details help it tell where each group starts and ends at the pixel level.
The idea is to combine these two forms of information, i.e. the high-level features and the details, optimally. To do this you need trainable layers that learn this optimal combination. Here is where the convolution layers come to play.
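As a rough illustration of this idea (the channel counts and layer choices below are arbitrary, not the exact U-Net configuration), one expansive-path step could look like:

```python
import torch
import torch.nn as nn

# One expansive-path step: upsample the coarse, high-level features,
# concatenate the matching encoder (detail) features, and let trainable
# convolutions learn how to combine the two sources of information.
up = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
fuse = nn.Sequential(
    nn.Conv2d(64 + 64, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
)

high_level = torch.randn(1, 128, 16, 16)   # coarse semantic features
details = torch.randn(1, 64, 32, 32)       # skip connection from the contracting path

x = up(high_level)                          # (1, 64, 32, 32)
x = torch.cat([x, details], dim=1)          # (1, 128, 32, 32)
out = fuse(x)                               # learned combination of both
print(out.shape)
```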
",26652,,,,,1/6/2021 10:01,,,,3,,,,CC BY-SA 4.0 25603,2,,25506,1/6/2021 10:28,,0,,"There is no general method to detect a change in the fitness landscape, since changes can be very local and can occur in just a small area of the fitness landscape. For this reason nature inspired optimization algorithms usually maintain a diversified population to cope with environmental changes. a common mechanism is using several sub-populations and ensuring that these sub-populations do not overlap. Also, there are some heuristics proposed that can help the algorithms in detecting changes. for instance if you are using multiple sub-populations, you can double test one element of each sub-population at some generation to find out if an environmental change has occurred or not. albeit, there are some comprehensive heuristics have been proposed for change detection, like the one proposed in [R. Mukherjee, S. Debchoudhury, S. Das, Modified differential evolution with locality induced genetic operators for dynamic optimization]. In my opinion, Use simple methods. for instance, at the keep a small by diverse set of population members at some k generations, and reevaluate them after k generations to detect change.
",43208,,,,,1/6/2021 10:28,,,,1,,,,CC BY-SA 4.0 25604,2,,25505,1/6/2021 10:44,,0,,"using different population sizes at different stages of the optimization process can be beneficial. With large population sizes you can effectively explore the landscape to find proper areas. Large population sizes helps in finding global optima or high fitness local optima. However, using large populations require more fitness evaluations and waste of computational resources. With a small population you can effectively exploit a previously find appropriate area and get high accuracy solutions. For this reason, some works suggest to use a large population at the start of the algorithm and gradually, decrease its size, like [Improving the search performance of SHADE using linear population size reduction]. Also, some dynamic optimization methods, use dynamic population sizes. For instance, they create sub-populations when it is necessary to discover more optima or to cover the landscape, they decrease their sub-populations when they detect a change or new appropriate uncovered area in the landscape.
",43208,,,,,1/6/2021 10:44,,,,1,,,,CC BY-SA 4.0 25605,2,,25427,1/6/2021 11:07,,1,,"I'm not familiar with your game so I can't tell you what a good heuristic woul be in your specific case, but I can give you some advice on how to look for a good heuristic function.
As a rule of thumb, the heuristic function for a MiniMax algorithm is best kept simple and efficient, so you can get deeper into the tree. But it depends on how costly it is to compute the heuristic function compared to simulating moves in the game.
If the heuristic takes longer than simulating a game move, it might be worth simplifying it so it runs faster and you can look ahead further. This often leads to more emergent and advanced strategies that are hard to express mathematically. An extreme example of a simple heuristic would be the current player score minus the opponent's score. Since scores only change when someone lands on a bonus tile, many paths you take down the tree would have equal value, so you need to be able to look ahead many moves to find a non-zero heuristic and be able to prune parts of the tree. But because the heuristic is so fast to compute, you can do this and discover more non-obvious strategies, simply by brute force simulation. This leads to more emergent behavior, and would tell you more about different ways to play the game (if that is your goal).
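As a sketch of what such a minimal heuristic could look like (the `state.scores` field is a hypothetical placeholder for your own game representation):

```python
# A deliberately simple heuristic: current player's score minus the opponent's.
# 'state.scores' is a made-up attribute; adapt it to however you store the game state.
def heuristic(state, player):
    opponent = 1 - player
    return state.scores[player] - state.scores[opponent]
```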
If simulating a game move takes longer than your current heuristic, it's probably not worth making the heuristic faster since the game simulation is the dominating factor in how deep you can go down the tree. This case is much more difficult because it means you have to find optimal strategies (and how to express them as a heuristic function) yourself. This is more of an art than a science - ask any chess grandmaster. I'd look up existing literature on the game and see if any existing strategies can be translated into a heuristic function. If there is no literature (e.g. because the game is new or unpopular), you could spend some time playing it yourself to discover what works. Alternatively (and perhaps more interesting), you could use a simple heuristic function with more emergent behavior, increase the time the MiniMax algorithm has to make moves, and play against your NPC opponent a few times to see what strategies it discovers. Or even have two NPCs with different heuristic functions play against each other. Then try to incorporate those into your final heuristic function. This is also a way to determine which heuristic function is better, if you have multiple candidates.
There are ways to optimize the heuristic function automatically using machine learning (specifically reinforcement learning), but it's probably not worth opening that can of worms in your case.
",40060,,,,,1/6/2021 11:07,,,,0,,,,CC BY-SA 4.0 25609,2,,25601,1/6/2021 12:25,,5,,"GANs generally produce better photo-realistic images but can be difficult to work with. Conversely, VAEs are easier to train but don’t usually give the best results.
I recommend picking VAEs if you don’t have a lot of time to experiment with GANs and photorealism isn’t paramount.
There are exceptions such as Google’s VQ-VAE 2 which can compete with GANs for image quality and realism. There is also VAE-GAN and VQ-VAE-GAN.
As a note, GANs and VAEs are not specifically for images and can be used for other data types/structures.
",5763,,,,,1/6/2021 12:25,,,,1,,,,CC BY-SA 4.0 25610,2,,25486,1/6/2021 14:47,,1,,"I couldn't understand your question clearly, however, it think you are making a slim mistake. let's look at the flowing code from "Russell " and do pruning step by step:
Assume you are in D and you have traversed both of its children; Alpha at D becomes 20. We then return to B, and Beta becomes 20 (note that Alpha is still -Inf in B). We go to E, then L, and then back to E. 30 is greater than Beta, so the rest is pruned. Note that Alpha remains -Inf in E, though its value here isn't important. We go back to B; the value of Beta remains unchanged at 20. If we go to F and return to B, Beta remains unchanged and is still 20 (note that Alpha is still -Inf). We return to A, Alpha becomes 20, and Beta remains +Inf.
So, considering the code, the max node does not get the Alpha value from its children; rather, it decides on its own by setting Alpha = max(Alpha, value returned by its child) after visiting each child. The same is true for Beta at min nodes.
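For reference, here is a compact sketch of that update rule; `terminal`, `evaluate` and `children` are placeholders for your game logic, not functions from any particular library:

```python
import math

def alphabeta(node, depth, alpha, beta, maximizing):
    # 'terminal', 'evaluate' and 'children' are placeholders for your game logic.
    if depth == 0 or terminal(node):
        return evaluate(node)
    if maximizing:
        value = -math.inf
        for child in children(node):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)      # the max node updates its OWN alpha...
            if alpha >= beta:              # ...and prunes when it crosses beta
                break
        return value
    else:
        value = math.inf
        for child in children(node):
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)        # symmetric update for the min node
            if alpha >= beta:
                break
        return value
```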
How can I update the observation probability for a POMDP (or HMM), in order to have a more accurate prediction model?
The POMDP relies on observation probabilities that match an observation to a state. This poses an issue as the probabilities are not exactly known. However, the idea is to make them more accurate over time. The simplest idea would be to count the appeared observation as well as the states and use Naive Bayes estimators.
For example, $P(s' \mid a,s)$ is the probability that a subsequent state $s'$ is reached, given that the action $a$ and the previous state $s$ are known: In that simple case I can just count and then apply e.g. Naive Bayes estimators.
But, if I have an observation probability $P(z \mid s')$ (where $z$ is the observation) depending on a state, it's not as trivial to just count the observations and the states, as I cannot say that a state really was reached (maybe I made an observation, but I was in a different state than the intended one). I can just make an observation and hope I was in a certain state, but I cannot say whether I was, e.g., in $s_1$ or maybe in $s_2$. I think the update of the observation probability is only possible in the late aftermath.
So, what are good approaches to estimate my state?
",21157,,2444,,1/8/2021 0:09,1/8/2021 0:09,How to update the observation probabilities in a POMDP?,To my understanding, transfer learning helps to incorporate data from other related datasets and achieve the task with less labelled data (maybe in 100s of images per category).
Few-shot learning seems to do the same, with maybe 5-20 images per category. Is that the only difference?
In both cases, we initially train the neural network with a large dataset, then fine-tune it with our custom datasets.
So, how is few-shot learning different from transfer learning?
",43623,,2444,,1/7/2021 17:22,5/16/2021 9:09,How is few-shot learning different from transfer learning?,I have sentences with some grammatical errors , with no punctuations and digits written in words... something like below:
As you can observe, a proper noun, winston, isn't capitalised in the Sample column. 'People' is spelled wrong, and there is no punctuation in the Sample column. The date in the first row isn't in the right format. I have millions of rows like this and want to train a model to learn punctuation and corrections. Can a single BERT or T5 handle this task, or is the only option to train one model for each task? Thanks in advance.
",43626,,43626,,1/13/2021 9:28,4/28/2021 17:25,T5 or BERT for sentence correction/generation task?,I have 2 small images. They are basically the same, but differ in rotation and size. I should estimate the parameters for affine transform to get them similar. What network structure can be suitable for this task? For example, those based on convolutional networks did badly, because the pictures are too small.
In the original U-Net paper, it is written
The energy function is computed by a pixel-wise soft-max over the final feature map combined with the cross entropy loss function.
...
$$ E=\sum_{\mathbf{x} \in \Omega} w(\mathbf{x}) \log \left(p_{\ell(\mathbf{x})}(\mathbf{x})\right) \tag{1}\label{1} $$
where $w(\mathbf{x})$ is a weight map (I'm not interested in that part right now), and $p_{k}(\mathbf{x})$ is
$$ p_{k}(\mathbf{x})=\exp \left(a_{k}(\mathbf{x})\right) /\left(\sum_{k^{\prime}=1}^{K} \exp \left(a_{k^{\prime}}(\mathbf{x})\right)\right) $$
The pixel-wise softmax with $a_{k}(\mathbf{x})$ being the activation in feature channel $k$ at pixel position $\mathbf{x}$ and $K$ the number of classes. Then $\ell(\mathbf{x})$ from $p_{\ell(\mathbf{x})}$ is the true label of each pixel, i.e. if the pixel at position $\mathbf{x}$ is part of class $1$, then $p_{\ell(\mathbf{x})}$ is equal to $p_1(\mathbf{x})$.
As far as I understand, $-E$ should be the cross-entropy function. Right? I've already done the math for the binary case (ignoring $w(\mathbf{x})$) and it seemed to be equal.
",43632,,2444,,1/8/2021 17:19,1/8/2021 17:19,Have I understood the loss function from the original U-Net paper correctly?,Instead of NNs, you can use RANSAC algorithm to calculate homography matrix, but first you need to find feature points. However, if your images are blob-like, you may not get such a successful results. Here some presentations for better understanding: cs.umd notes and csail.mit notes
(Also, there might be better image processing tools.)
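As a rough sketch of this pipeline with OpenCV (the file names are placeholders; since the question asks for an affine transform, `cv2.estimateAffinePartial2D` is used here, while `cv2.findHomography` with `cv2.RANSAC` would be the homography analogue mentioned above):

```python
import cv2
import numpy as np

# Hypothetical file names; both images are assumed to be grayscale.
img1 = cv2.imread("img1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("img2.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe feature points.
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and collect the corresponding point coordinates.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating rotation, scale and translation.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
print(M)  # 2x3 affine matrix
```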
",41615,,,,,1/7/2021 15:50,,,,0,,,,CC BY-SA 4.0 25621,2,,25600,1/7/2021 18:25,,2,,"There are many papers on this but the following is a good start:
You mentioned you do not want to do a panoramic view but that has more than one meaning. If I assume you mean you do not want to rotate the can while taking multiple photos, or you don't want to take multiple photos from different angles, you could try a pericentric lens. This would require some image processing to do the unwrapping. More resolution is needed as the wrapping is much more severe. The advantage though is that you will have a single image of the full cylindrical surface and won't miss any features or text.
",5763,,5763,,1/7/2021 21:19,1/7/2021 21:19,,,,0,,,,CC BY-SA 4.0 25622,1,,,1/8/2021 0:04,,2,150,"I am using DDPG to solve a RL problem. The action space is given by the Cartesian product $[0,20]^4\times[0,6]^4$. The actor
is implemented as a deep neural network with an output dimension equal to $8$ and a tanh activation. So, given a state `s`, an action is given by `a = actor(s)`, where `a` contains real numbers in $[-1,1]$. Next, I map this action `a` into a valid action `valid_a` that belongs to the action space $[0,20]^4\times[0,6]^4$. Then, I use `valid_a` to calculate the reward.
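Concretely, the mapping I have in mind looks roughly like this (a minimal sketch; the helper name is just for illustration):

```python
import numpy as np

# Per-dimension action bounds: 4 dimensions in [0, 20] and 4 in [0, 6].
low = np.array([0, 0, 0, 0, 0, 0, 0, 0], dtype=np.float32)
high = np.array([20, 20, 20, 20, 6, 6, 6, 6], dtype=np.float32)

def to_valid_action(a):
    """Rescale a tanh output in [-1, 1] to the environment's action box."""
    return low + (a + 1.0) * 0.5 * (high - low)

a = np.array([-1, 0, 1, 0.5, -0.5, 0, 1, -1], dtype=np.float32)  # e.g. actor(s)
print(to_valid_action(a))
```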
My question is: how does the DDPG algorithm know about this mapping that I am doing? In what part of the DDPG algorithm should I specify this mapping? Should I provide a bijective mapping to guarantee that the DDPG algorithm can tell good actions from bad ones?
",37642,,,,,1/8/2021 10:50,How does DDPG algorithm know about my action mapping function?,I am new to reinforcement learning. May I ask a simple (and maybe a bit silly) question here? I am trying to use the "one-step actor-critic" method to train a robot on a gridworld. Let's focus on the actor as there is nothing puzzling me for the critic.
I used a feedforward ANN with one hidden layer to parameterize the action preference function (i.e. the $h$ function). The ANN has one bias node in the input layer to connect to all the hidden nodes. Therefore, there are three sets of weights associated with the $h$ function -- the weights connecting the inputs to the hidden nodes (let's call it $W1$ matrix), the weights connecting the hidden nodes to the outputs (let's call it $W2$ matrix), and the weights connecting the bias node to the hidden nodes (let's call it $c$ vector).
I used the exponential soft-max as the policy function (i.e. the $\pi$ function). That is,
$$\pi(a|s,W1,W2,c) = \displaystyle\frac{e^{h(a,s,W1,W2,c)}}{\sum_be^{h(b,s,W1,W2,c)}}.$$
The inputs to the ANN are the state/feature vector, and the outputs of the ANN are the action preference values (i.e. the $h$ values). With these action preference values, the $\pi$ function can compute the probabilities for each action to be chosen.
It is easy to derive that
$$\nabla_{W1/W2/c} \log \pi(a|s,W1,W2,c) = \nabla_{W1/W2/c} h(a,s,W1,W2,c)-\sum_b \big[\nabla_{W1/W2/c}h(b,s,W1,W2,c)\pi(b|s,W1,W2,c)\big]$$
In the above, $/$ means "or".
My puzzle is, I found that $\nabla_{W2} h(\cdot,s,W1,W2,c) \equiv \sigma$, where $\sigma$ is the vector of the sigmoid activation values. That implies, $\nabla_{W2}$ is independent of actions!? Consequently, $\nabla_{W2} \log \pi(\cdot|s,W1,W2,c) \equiv 0$, which implies that $W2$ will not be updated at all...
Where did I get wrong?
(Actually, the above puzzle of mine extends to any policy gradient method as long as $\nabla \log \pi$ is involved and a feedforward ANN is used to approximate the action preferences.)
",43644,,2444,,1/9/2021 1:13,1/12/2021 3:19,$\nabla \log \pi$ with respect to some parameters constantly being zero,Yes, $E$ is the cross-entropy function and a direct generalization of the binary case.
For the binary case, probability to belong to the class $1$ is given by a sigmoid function $\sigma(x)$ of the output $x$, and the probability to belong to the class $0$ is $1 - \sigma(x)$.
Therefore the binary cross-entropy will give: $$ -\sum_i \left(l_i \log \sigma(x_i) + (1 - l_i) \log (1 - \sigma(x_i))\right) $$ where the sum is over all samples in the dataset. Because $l_i$ is a binary variable, one of these two terms is zero for each sample. Gradient descent forces the model to predict the true label with more confidence.
For the multiclass case, the output is now a $K$-vector and the softmax function forces its elements to sum to one: $$ \sum_{k = 1}^{K} p_k = 1 $$ The true label is a one-hot encoded vector, with $1$ at the position of the true label and $0$ elsewhere. The generalization of the binary case is: $$ -\sum_i \sum_{j = 1}^{K}\left(l_{ij} \log \sigma_j(x_i) + (1 - l_{ij}) \log (1 - \sigma_j(x_i))\right) $$ In this case, there will be $K$ non-vanishing contributions per sample. The minimization of $E$ forces the classifier to predict the true label more confidently, and all other class probabilities (via the $\log(1 - \sigma_j(x_i))$ terms) as small as possible.
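As a small numerical sketch (using numpy, with integer class labels), the U-Net form of the loss just picks out $-\log$ of the probability assigned to the true class of each pixel/sample:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy activations for 3 pixels/samples and K = 4 classes, with true class indices.
a = np.array([[2.0, 0.5, -1.0, 0.1],
              [0.2, 1.5,  0.3, 0.0],
              [-0.5, 0.0, 2.5, 1.0]])
labels = np.array([0, 1, 2])
p = softmax(a)

# Cross-entropy picks out -log of the probability assigned to the true class.
ce = -np.log(p[np.arange(len(labels)), labels])
print(ce.mean())
```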
",38846,,,,,1/8/2021 7:02,,,,0,,,,CC BY-SA 4.0 25627,1,25636,,1/8/2021 9:23,,4,3451,"From the tid-bits, I understand of neural networks (NN), the Loss function is the difference between predicted output and expected output of the NN. I am following this tutorial, the losses are included at line #81 in the nlp.update()
function.
I am getting losses in the range 300-100. How to interpret them? What should be the ideal output of this losses variable? I went through Spacy's documentation, but nothing much is written there about losses. Also, please let me know the links to relevant theories to understand this in general.
",43434,,5763,,1/9/2021 7:19,1/9/2021 7:19,How to understand 'losses' in Spacy's custom NER training engine?,Regarding your first point, it depends on what neural network you would like to use, the sensor temporal resolution, and the capabilities of the embedded system. You can figure out the number of operations required for a forward pass of your network, then when combined with the internal clock of the embedded system, you can calculate the approximate time it would take for one classification event in real time.
A good explanation is given here What is the computational complexity of the forward pass of a convolutional neural network?
If you have the computational complexity of your network and the number of CPU cycles per second, you can roughly approximate the time.
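A back-of-the-envelope sketch of that estimate (all numbers below are placeholders you would replace with your own network and hardware figures):

```python
# Back-of-the-envelope latency estimate; every value here is a placeholder.
mac_count = 5e6          # multiply-accumulates for one forward pass of your network
macs_per_cycle = 1       # how many MACs the chip retires per clock cycle (hardware dependent)
clock_hz = 80e6          # embedded CPU clock frequency

seconds_per_inference = mac_count / (macs_per_cycle * clock_hz)
print(f"~{seconds_per_inference * 1e3:.1f} ms per classification")
```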
",43651,,43651,,1/8/2021 10:00,1/8/2021 10:00,,,,0,,,,CC BY-SA 4.0 25629,1,,,1/8/2021 10:00,,0,77,"I'm using Kalman Filter approaches and I've just implemented the extended Kalman filter (EKF) with my object 2D trajectory. However, I have a mess of alternative approaches that may fit better like Unscented Kalman Filter (UFK), particle filters, adaptive filtering, etc.
How can I choose the most suitable algorithm for my case? In addition, are there algorithms that can predict more than one step ahead?
",43650,,43650,,1/8/2021 11:48,1/8/2021 11:48,Which is the best algorithm to predict the trajectory of a vehicle using lat/lon data?,Consider our parametric model $p_\theta$ for an underlying probabilistic distribution $p_{data}$.
Now, the likelihood of an observation $x$ is generally defined as $L(\theta|x) = p_{\theta}(x)$.
The purpose of the likelihood is to quantify how good the parameters are. How can the probability density at a given observation, $p_{\theta}(x)$, measure how good the parameters $\theta$ are?
Is there any relation between the goodness of parameters and the probability density value of an observation?
",18758,,18758,,12/20/2021 22:09,12/20/2021 22:09,How can a probability density value be used for the likelihood calculation?,I would recommend doing is allowing your network to output any real number and then clipping the output. For instance, I was working with an agent that had to learn an angle between $[0, 2\pi]$ and $[0, 1]$. If the network outputted e.g. 10 in the first dimension then this would just be clipped to $2\pi$.
This way the agent only learns about actions within the action space and the weights of the network would eventually be adjusted to only output actions within this action space, provided that the boundaries aren't the optimal actions.
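A minimal sketch of this clipping (using numpy; the bounds correspond to the example above):

```python
import numpy as np

# Action bounds for a 2-D action: an angle in [0, 2*pi] and a value in [0, 1].
low = np.array([0.0, 0.0])
high = np.array([2 * np.pi, 1.0])

raw_action = np.array([10.0, -0.3])      # whatever the network outputs
action = np.clip(raw_action, low, high)  # elementwise clip -> [2*pi, 0.0]
print(action)
```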
",36821,,,,,1/8/2021 10:50,,,,0,,,,CC BY-SA 4.0 25632,1,,,1/8/2021 11:07,,1,225,"I'm studying machine learning and I came into a challenging question.
The answer is 2. But based on my ML notes, all of them are true. Where are the wrong points?
",43653,,43653,,1/8/2021 13:25,6/12/2021 12:02,"If the training data are linearly separable, which of the following $L(w)$ has less optimum answer for $w$, when $y = w^Tx$?",The probability density is used to 'measure how good' the parameters are because it is a natural way of quantifying if these parameters are good for the observed data.
Also, as the notation often causes some confusion, $L(\theta | x)$ denotes the probability of all of your observed data, not just one value. Also the "$|$" may cause confusion as it looks like we are conditioning on $x$ but this is not the case - it may be better practice to use $L(\theta; x)$ which is the notation used when I was learning likelihood. Further, as you have written $L(\theta | x) = p_\theta(x)$ I would like to clarify that if we are being precise in our definitions then this is only correct if you have one observed data point, assuming that you meant $p_\theta(x)$ is the density of $X$.
In my example below I use $p_\mu(x)$ to denote the density of the normal distribution with mean parameter $\mu$, but the density of the likelihood is the product of all the densities (because we assumed iid data). It is crucial that you understand that in general the likelihood is the probability of your observed data and not just the density or the product of densities as you may not always have iid and so it will not always boil down to taking the product of some densities.
The idea of maximum likelihood is to maximise the (log-)likelihood for a given set of data. This means that we need to choose a probability distribution that is parameterised by some parameters $\boldsymbol{\theta}$ and then optimise the parameters such that the likelihood is maximal.
Assuming we don't remove any constants then this makes intuitive sense as maximising the likelihood would be maximising the probability that the data came from this distribution -- i.e. the data we observed is most likely to have come from the given distribution with the given parameters. This is important as it means we then have the most likely model of our data which allows us to use this distribution to make inference about our data.
As an example, imagine if I had some iid data $x_1, x_2, ..., x_n \sim \mathcal{N}(0, 1)$. Now I could try to fit a $\mathcal{N}(\mu, 1)$ to this data and optimise for $\mu$. If I chose e.g. $\mu = -1000000$ then the likelihood (again assuming we don't remove any constants) would be $\approx 0$. If I chose a value of $\mu = 0.1$ then the likelihood would be much higher because this parameter is closer to the true parameter value.
To see why it is higher, recall that the likelihood for iid data is given by $$\prod_{i=1}^n p_\mu(x_i)$$ and if we evaluated our likelihood at $\mu=-10000000$ then you're going to be taking the product of lots of numbers that are $\approx 0$ - if you think about the bell curve shape of a Normal distribution that is centred at $-10000000$ with variance 1 then the density at the true $x_i$ values (recalling they are simulated from a unit Normal) would be approx $0$ - whereas if we evaluated the $x_i$ at the density of a Normal distribution centred at $0.1$ then the density will be non-zero and so your likelihood will have a higher value.
To summarise, the density value can be used to measure how good parameters are for a set of data as maximising wrt the parameters is analogous to maximising the probability that your data arose from said distribution.
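A quick numerical illustration of the Normal example above (a sketch with numpy; the log-density of $\mathcal{N}(\mu, 1)$ is written out by hand):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)   # iid data from N(0, 1)

def log_likelihood(mu, x):
    # log of the product of N(mu, 1) densities = sum of the log densities
    return np.sum(-0.5 * np.log(2 * np.pi) - 0.5 * (x - mu) ** 2)

for mu in (-1000000, 0.1, np.mean(x)):
    print(mu, log_likelihood(mu, x))   # the closer mu is to the truth, the larger the value
```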
As an aside, note that the definition of likelihood is the probability of observing your data under some assumed distribution. For discrete random variables this is fine, but for continuous distributions we have to be a bit more subtle. For any continuous random variable $X$ we have $\mathbb{P}(X=x) = 0$. However, for a very small $\delta$ we can say that
$$\mathbb{P}\left(x - \frac{\delta}{2} < X \leq x + \frac{\delta}{2}\right) = \int_{x - \frac{\delta}{2}}^{x + \frac{\delta}{2}} f_X(t)\, dt \approx \delta f_X(x) \; ;$$
you can think of this approximation by visualising an integral and recalling that an integral represents the area under the curve, and so for small $\delta$ this integral can be approximated by taking the area of a rectangle which is width $\times$ height, where the width is $\delta$ and the height is $f_X(x)$. This justifies our use of the density function for continuous random variables. Note that typically in maximum likelihood we omit any multiplicative constants as they don't depend on the parameters which is what happens with the $\delta$ from this justification.
",36821,,36821,,1/8/2021 12:53,1/8/2021 12:53,,,,1,,,,CC BY-SA 4.0 25634,1,,,1/8/2021 11:49,,2,106,"In Barton and Sutton's book, Reinforcement Learning: An Introduction (2nd edition), an expression, on page 289 (equation 12.2), introduced the form of the $\lambda$-return defined as follows
$$G_t^{\lambda} = (1-\lambda)\sum_{n=1}^{\infty} \lambda^{n-1}G_{t:t+n} \label{12.2}\tag{12.2}$$
with the truncated return defined as
$$ G_{t:t+n} \doteq R_{t+1} +\gamma R_{t+2} + \ldots + \gamma^{n-1} R_{t+n} + \gamma^{n}\hat{v}(S_{t+n}, \mathbf{w}_{t+n-1}) \label{12.1}\tag{12.1}$$
However, slightly later in the text, page 290 (equation 12.4), the update algorithm for the offline $\lambda$-return algorithm is defined as
$$ \mathbf{w}_{t+1} \doteq \mathbf{w}_{t}+\alpha\left[G_{t}^{\lambda}-\hat{v}\left(S_{t}, \mathbf{w}_{t}\right)\right] \nabla \hat{v}\left(S_{t}, \mathbf{w}_{t}\right), \quad t=0, \ldots, T-1 \label{12.4}\tag{12.4} $$
My question is: how do we bootstrap the truncated returns in the update algorithm?
The way the truncated return is currently defined can not plausibly be used, since we would not have access to $\mathbf{w}_{t+n-1}$, as we are in the process of finding $\mathbf{w}_{t+1}$. I suspect $\mathbf{w}_{t}$ is used for bootstrapping in all returns, but that would alter the definition of the truncated return which I just wanted to clarify.
And as a follow-up question: What weights are used for bootstrapping in the online $\lambda$-return algorithm described on page 298?
I assume it's either $\mathbf{w}_{t-1}^{h}$ or $\mathbf{w}_{h-1}^{h-1}$, it's briefly mentioned that the online $\lambda$-return algorithm performs slightly better than the offline one at the end of the episode which leads me to believe the latter is used otherwise the two algorithms would be identical.
Any insight into either question would be great.
",42514,,42514,,1/9/2021 12:43,1/9/2021 12:43,How does bootstrapping work with the offline $\lambda$-return algorithm?,Your bias formula is a bit incorrect. You should subtract a true value, not an estimator.
$\mathrm{b}(\hat{\theta}) \stackrel{\text { def }}{=} \mathbb{E}[\hat{\theta}]-\theta$
$\mathrm{b}\left(\max _{a} Q\right)=\mathbb{E}\left[\max _{a} Q\right]-\max _{a} q=\mathbb{E}\left[\max _{a} Q\right]-\max _{a} \mathbb{E}[Q]$
$\mathbb{E}\left[\max _{a} Q\right] \geq \max _{a} \mathbb{E}[Q] \Rightarrow \mathrm{b}\left(\max _{a} Q\right) \geq 0$
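A quick simulation of this inequality (a minimal sketch with numpy, using five actions whose true values are all equal, so $\max_a q = 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
true_q = np.zeros(5)            # all actions have the same true value: max_a q = 0

# Each run, every Q(a) is estimated with zero-mean noise; we then take max_a Q.
estimates = rng.normal(loc=true_q, scale=1.0, size=(100000, 5))
print(estimates.max(axis=1).mean())   # clearly > 0, so E[max_a Q] > max_a E[Q] here
```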
A critical goal of training a neural network is to minimize the loss. Loss is not explained for spaCy because it is a general concept for machine learning and deep learning. Loss is not specific to spaCy and although there are some finer details I don't believe that is your inquiry.
In general, to understand loss functions, I recommend the following resources:
If you like videos watch:
Maybe just use simple ConvNets (pre-trained, perhaps) and train them on the images of the teacher at the blackboard. You could use a GAN to remove the teacher and complete the rest of the image (http://stanford.edu/class/ee367/Winter2018/fu_guan_yang_ee367_win18_report.pdf), but that would be too troublesome.
The best way would be to take real-time video chunks, use a convolutional network to detect the shapes you want, and return bounding boxes for the appropriate shapes (for localization purposes) and their 'classification' - whether a straight line, curved line, or some other user-defined shape. You could also choose to use a YOLO (You Only Look Once) technique. You can check out: https://towardsdatascience.com/object-detection-using-deep-learning-approaches-an-end-to-end-theoretical-perspective-4ca27eee8a9a
You also won't have to deal with the pesky teacher with the above method (assuming he doesn't stand in one place and constantly block a part of the blackboard). These methods are near SOTA and would be far more effective than conventional algorithms. Not to mention that using Keras is a piece of cake, with a giant community and endless resources to help you in case you get stuck on some problem. Since it is easy to use, you can set up a prototype in almost no time.
Beginner's guide and introduction -> https://machinelearningmastery.com/object-recognition-with-deep-learning/
A research paper from arXiv -> https://arxiv.org/pdf/1807.05511
Training with YOLO: https://machinelearningmastery.com/how-to-perform-object-detection-with-yolov3-in-keras/
",36322,,36322,,1/8/2021 19:38,1/8/2021 19:38,,,,3,,,,CC BY-SA 4.0 25642,2,,25617,1/8/2021 19:42,,1,,"Just use IMGAug
library for applying the 'zoom' augmentation to the images, and a ConvNet (or even an MLP) would have no problem with this task. Zooming in on the image should be more than enough, as long as the zoom factor is large enough. So apply a 'zoom' of the same factor to all images to make them bigger, and convolutional neural networks should do fine. Also, if colour is not an important feature, then you would be better off rescaling the images between 0 and 1 and converting them to grayscale.
I would like some references of works that try to understand the functioning of any kind of RNN in natural language processing tasks. They can be any work that tries to explain the functioning of the model by studying the structure of the model itself. I have the feeling that it is very common for researchers to use models, but there is still little theory about how they work in solving natural language processing tasks.
",36175,,2444,,1/9/2021 1:37,1/9/2021 1:37,Is there a reference that describes Recurrent Neural Networks for NLP tasks?,I am trying to understand the solution to part 4 of problem 3 from the midterm exam 6.867 Machine learning: Mid-term exam (October 15, 2003).
For reproducibility, here is problem 3.
We consider here linear and non-linear support vector machines (SVM) of the form: $$ \begin{equation} \begin{aligned} \min w_{1}^{2} / 2 & \text { subject to } y_{i}\left(w_{1} x_{i}+w_{0}\right)-1 \geq 0, \quad i=1, \ldots, n, \text { or } \\ \min \mathbf{w}^{T} \mathbf{w} / 2 & \text { subject to } y_{i}\left(\mathbf{w}^{T} \Phi_{i}+w_{0}\right)-1 \geq 0, \quad i=1, \ldots, n \end{aligned} \end{equation} $$ where $\Phi_{i}$ is a feature vector constructed from the corresponding real-valued input $x_{i}$. We wish to compare the simple linear SVM classifier $\left(w_{1} x+w_{0}\right)$ and the non-linear classifier $\left(\mathbf{w}^{T} \Phi+w_{0}\right)$, where $\Phi=\left[x, x^{2}\right]^{T}$.
Here is part 4 of the same problem.
In general, is the margin we would attain using scaled feature vectors $\Phi=\left[2 x, 2 x^{2}\right]^{T}$
- greater
- equal
- smaller
- any of the above
The correct answer is the first (greater). Why is that the case?
",43665,,43665,,1/9/2021 19:30,1/9/2021 19:30,"Why is the margin attained with $\Phi=\left[2 x, 2 x^{2}\right]^{T}$ greater than the margin attained with $\Phi=\left[x, x^{2}\right]^{T}$?",First things first. What is an optimal $w$?. In this case it is supposed to be not only the one that minimizes the emprical /sample loss, but also non-trivial as we shall sonn see. Now inspect the loss functions, we see a term $-y_iw^Tx_i$ coming up. What exactly is this term? It can be anything. The correct term would have been $-y_i(w^Tx_i + b)$, or atleast I will assume this term, but the reasoning will exactly be the same without this term.
Now this term is well defined, since the signed Euclidean distance of a point $x_i$ from the hyperplane $w^Tx + b=0$ is proportional to $w^Tx_i + b$ (the proof for this can be found easily). So if the datapoint is supposed to have $y_i=-1$ but the classifier gives $w^Tx_i + b = l_i > 0$, then one can see that $-y_il_i > 0$, hence $\max (0, -y_i l_i) = -y_i l_i$, a positive loss; and the same applies to the case when $y_i = 1$ and $w^Tx_i + b = l_i <0$.
We see another assumption: the datapoints are linearly separable. This means there exists a classifier (or a hyperplane) which can completely separate the 2 classes. Now, if one inspects the loss functions, $1, 3, 4$ all have one goal: to make $\max (0, -y_i l_i) = 0, \forall i$. A point to note is that $1$ does this by directly reducing $\sum_{i=1}^n \max (0, -y_i l_i)$; $3$ does this using the $0-1$ loss, i.e. if the instance is wrongly classified (or $-y_il_i > 0$), then incur a $\frac{1}{n}$ loss, else $0$; while $4$ incurs a loss of $\frac{1}{n}(w^Tx_i + b)^2$ for each wrong classification. It is very important to note that the solutions of all these 3 problems are the same $w^*,b^*$ which classifies the dataset correctly, which makes the problems equivalent. (This may not hold under some very similar-looking loss functions, so it is a point to take care about: a problem is equivalent only if it results in the same solution.)
In $2$, first, $C \rightarrow \infty$ is an incorrect statement, since then not only does it become mathematically unsound (at least for me), it also means $2$ is the same as $1,3,4$. Why? $C$ is a weighting factor: it decides how much priority the algorithm minimizing the loss gives to the term $C \sum_{i=1}^n \max(0,-y_il_i)$. If, say, $C=0$, then you are basically optimizing $||w||_2^2$, whose optimal value is at $w=0$; hence we get nothing useful, as optimizing $||w||_2^2$ has no connection with our objective of reducing misclassifications. But if $0.5||w||_2^2 + C \sum_{i=1}^n \max(0,-y_il_i)$ is used, the algorithm now gives some weight to minimizing both $||w||_2^2$ and the misclassifications. If $C$ is very small, the algorithm will prefer to minimize $||w||_2^2$ rather than $C \sum_{i=1}^n \max(0,-y_il_i)$, thus giving a solution which may not be optimal for the dataset. But if $C$ is very large, the algorithm will mostly try to minimize $C \sum_{i=1}^n \max(0,-y_il_i)$, yet it will still give some weight to minimizing $||w||_2^2$, and hence may not find an optimal hyperplane. If $C \rightarrow \infty$, it is mathematically ill-defined (as per my knowledge), so I won't go into it.
But all of the aforementioned explanation (i.e. the tradeoff between optimizing two different objectives) is useful only when there are some additional constraints. For example, if instead of $\max(0,-y_il_i)$ we use $\max(0,1-y_il_i)$, then making $w$ bigger actually makes sense, as it also increases $l_i$ (i.e. the same hyperplane denoted by a larger $w$, say $kw$, makes $l_i$ large and hence $-l_iy_i$ small if correctly classified, which doesn't make sense if we are talking about the same hyperplane), and hence the $||w||_2^2$ term opposes such an increase. This is a very famous formulation of the loss used in SVMs.
But the challenging part in this question is the assumption 'linearly separable', and also the fact that the additional constraint (which is present in SVMs) is missing. So if $w^*,b^*$ incurs $0$ classification error, so does $kw^*, kb^*$, where $k$ is any positive scaling factor. And since $C$ is very large, the second term in $2$ must be $0$; thus, any such scaled solution makes the second term $0$. But the first term now becomes $0.5k^{2}||w^*||_2^2$, where $w^*$ denotes the optimal hyperplane and hence is fixed. Thus, to minimize the first term, the optimizer will choose $|k| \rightarrow 0$, or basically $k=0$, and the second term then simply becomes $C\sum_{i=1}^n \max(0,-y_ib)$, which can be made $0$ by choosing $b=0$. Thus, a $0$ loss when $kw^*=0,b=0$: a trivial solution and definitely not optimal, as this will be true for all linearly separable datasets.
I assumed an extra $b$ term to leverage linear separability; without it, the reasoning becomes easier and you can follow the exact same line of argument, although the problem might not be linearly separable anymore. The solution may not seem very elegant, but this is a standard line of reasoning in optimization problems, where the optimization variable is changed from $w \rightarrow k$, with $w \in \mathbb{R}^n$ and $k \in \mathbb{R}$.
In short, for $2$ the solution produced will be $w^*=0,b^*=0$, which minimizes the loss but is easily seen to be trivial and not optimal at all.
",,user9947,,user9947,1/13/2021 10:53,1/13/2021 10:53,,,,0,,,,CC BY-SA 4.0 25647,2,,25632,1/8/2021 21:51,,0,,"Since the data is linearly separable linear model $y = w^Tx$ will be able to perfectly classify all the examples. That means that loss functions $L_1(w), L_3(w)$ and $L_4(w)$ will have a value of 0 (since all examples are correctly classified). For the loss $L_2(w)$ second term will be 0 if all examples are correctly classified. The first term of $L_2(w)$ \begin{equation} \frac{1}{2}||w||^2_2 \end{equation} will be not be minimized to 0 (unless optimal $w = \mathbf{0}$) so $L_2(w) > 0$. Optimizer will try to minimize $L_2(w)$ to be 0, so because of the penalty to $w$ it will try to find the best tradeoff between minimizing $w$ and loss due to misclassification so the resulting $w$ may not be the best to achieve optimal classification.
",20339,,20339,,1/8/2021 21:57,1/8/2021 21:57,,,,0,,,,CC BY-SA 4.0 25653,1,,,1/9/2021 8:48,,1,284,"In a continuous action space (for instance, in PPO, TRPO, REINFORCE, etc.), during training, an action is sampled from the random distribution with $\mu$ and $\sigma$. This results in an inherent exploration. However, during testing, when we no longer need to explore but exploit, the action should be deterministic, i.e. just $\mu$, right?
",32517,,32517,,1/9/2021 22:14,1/9/2021 22:14,Are actions deterministic during testing in continuous action space PPO?,I have never used MATLAB for ml before, so it is difficult for me to understand all your code. My first association to your problem is class imabalance. Since you seem to have got a handle on that, the problem could be dying ReLU or bloated activations. To check if the ReLU is dying, you could look at the activations of the early layers of your network. If many values are zero, it should be dying ReLU.
",43632,,,,,1/9/2021 15:35,,,,0,,,,CC BY-SA 4.0 25657,1,25667,,1/9/2021 16:22,,3,513,"I would like to employ DQN to solve a constrained MDP problem. The problem has constraints on action space. At different time steps till the end, the available actions are different. It has different possibilities as below.
Does this mean I need to learn 4 different Q-networks for these possibilities? Also, correct me if I am wrong, but it looks like if I specify the action size as 3, then it is automatically assumed that the actions are `0, 1, 2`, whereas, in my case, they should be `0, 3, 4`. How shall I implement this?
In a Markov Decision Process, is it possible that there exists no "dominated action"?
I define a dominated action the following way: we say that $(s,a)$ is a dominated action, if $\forall \pi, a \notin \text{argmax}\ q^{\pi}(s,.)$, where $\pi$ are policies.
For now, I am only considering the cases where all q-values are distinct and therefore the max is always unique. I also only consider the case of deterministic policies (mappings from state space to action space).
We can consider MDP in which each state has at least 2 actions available to get rid of the corner cases where there is only one possible policy.
I am struggling to find a counter-example or a proof.
",43682,,43682,,1/9/2021 20:23,1/9/2021 20:23,"Does there necessarily exist ""dominated actions"" in a MDP?",There are two relevant neural network designs for DQN:
Model q function directly $Q(s,a): \mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$, so neural network has concatenated input of state and action, and outputs a single real value. This is arguably the more natural fit to Q learning, but can be inefficient.
Model all q values for given state $Q(s,\cdot): \mathcal{S} \rightarrow \mathbb{R}^{|\mathcal{A}|}$, so neural network takes input of current state and outputs all action values related to that state as a vector.
For the first architecture, you can decide which actions to evaluate by how you construct the minibatch. You pre-filter to the allowed actions for each state.
For the second architecture, you must post-filter the action values to those allowed by the state.
There are other possibilities for constructing variable-length inputs and outputs to neural networks - e.g. using RNNs. However, these are normally not worth the extra effort. A pre- or post- filter on the actions for a NN that can process the whole action space (including impossible actions) is all you usually need. Don't worry that the neural network may calculate some non-needed or nonsense values.
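As a minimal sketch of the post-filter for the second architecture (PyTorch here; the numbers are made up):

```python
import torch

q_values = torch.tensor([0.7, -0.2, 1.3, 0.1, 0.4])      # network output for 5 actions
allowed = torch.tensor([True, False, False, True, True])  # e.g. only actions 0, 3, 4 are legal

# Post-filter: mask out illegal actions before the max/argmax used by DQN.
masked_q = q_values.masked_fill(~allowed, float("-inf"))
greedy_action = masked_q.argmax().item()
print(greedy_action)   # 0 here; without the mask the (illegal) action 2 would have won
```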
",1847,,1847,,1/9/2021 20:33,1/9/2021 20:33,,,,4,,,,CC BY-SA 4.0 25669,2,,25596,1/10/2021 0:13,,1,,"Similar to other answers, I don't know Matlab that well but you could try the following steps to debug your problem.
From your dataset, pull out a single image with a good amount of true positives in it. Duplicate that image B times (where B = batch size) and then try to train your network with only that small dataset. If you can't overfit to a single instance, then something is really wrong, and you should validate all functional aspects of your network. If you can overfit, then it's probably more of an algorithmic or data-imbalance issue. (A minimal sketch of this single-image overfit check appears after this list.)
Validate that your images are being correctly inputted into the network. You can do this by printing out the images right before they go into the training function. For the labels, make sure to manually inspect a few labels to be sure that they correctly match up with the images.
Add a unit test or two to your loss function to validate that it is doing what it should be doing. Create a simple example that you can easily verify.
Anything that isn't a design choice should be validated. Make sure that the weights are the correct sizes, each layer has the correct number of weights, the intermediate features have the correct shape, etc.
If you were able to overfit to a single image and validated the functional aspects of your network, then you may be facing a data imbalance issue. Look at your dataset and see what % of your instances are true positives vs. true negatives. If you have an extreme imbalance (like 10%/90%) or something like that, then build a dataset that is more balanced and see if you can fit your data. If you can fit the data with that more balanced dataset, then there are plenty of ways to fix your data imbalance issue. Google around for data imbalance and you should get a few good ideas. Some include focal loss, upsampling, etc.
The receptive field of a network is basically the area of the input that the network can look at when producing a specific output region. This is controlled by the size of the convolution filters, stride, etc. If the structures that you are segmenting don't fit within the receptive field of the model, then you might be saturating the loss function. tl;dr: try playing around with kernel sizes.
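Here is a minimal sketch of the single-image overfit check from the first step (everything below is a hypothetical stand-in for your own model, image and label):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: swap in your own model, image and segmentation label.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1), nn.Sigmoid(),
)
image = torch.rand(1, 3, 64, 64)                 # one image with plenty of positives
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()  # its segmentation label

batch = image.repeat(8, 1, 1, 1)                 # duplicate it B times (B = 8 here)
labels = mask.repeat(8, 1, 1, 1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()
for step in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(batch), labels)
    loss.backward()
    optimizer.step()
print(loss.item())  # with a healthy pipeline and your real network, this should fall towards 0
```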
",17408,,,,,1/10/2021 0:13,,,,0,,,,CC BY-SA 4.0 25670,1,,,1/10/2021 0:54,,1,201,"I am currently trying to write a CNN from scratch, but I don't understand how to feed the information from a max-pooling layer to the next convolutional layer. Specifically, I don't know what to do with the 6 filtered and pooled images from the first convolutional and max-pooling layers. How do I feed those images into the next convolutional layer?
",43686,,2444,,1/11/2021 0:44,1/11/2021 0:44,How do you pass the image from one convolutional layer to another in a CNN?,I am wondering what the parameter $y$ in the function $g(y,\mu,\sigma)=\frac{1}{(2\pi)^{1/2}\sigma}e^{-(y-\mu)^{2/2\sigma^2}}$ stands for in Section 6 (page 14) of the paper introducing the REINFORCE family of algorithms.
Drawing an analogy to Equation 4 of the same paper, I would guess that it refers to the outcome (i.e. sample) of sampling from a probability distribution parameterized by the parameters $\mu$ and $\sigma$. However, I am not sure whether that is correct or not.
",37982,,2444,,1/10/2021 15:42,1/10/2021 15:42,"What does the parameter $y$ stand for in function $g(y,\mu,\sigma)$ related to REINFORCE algorithm?",It states that "To simplify notation, we focus on one single unit and omit the usual unit index subscript throughout"
They are simply removing the $i$-th index from the equation for simplicity. So $g$ is a function of a given instance $y$ and the parameters $\mu$ and $\sigma$.
",43651,,,,,1/10/2021 11:06,,,,2,,,,CC BY-SA 4.0 25674,1,,,1/10/2021 11:44,,2,74,"I have an image and a mask. I want the image to be the same, but rotated, scaled and positioned like mask. What can I use?
I found a similar post about this issue, but unfortunately I did not find a proper answer. Are there any references where DQN is better than Double DQN, that is, where Double DQN does not improve on DQN?
",36055,,,,,1/10/2021 12:33,Can DQN outperform DoubleDQN?,