Take the sentence that was generated by your LSTM and feed it back into the LSTM as input. Then the LSTM will generate the next sentence. So the LSTM is using its previous output as its input. That's what makes it recursive. The initial word is just your base case. Also, you should consider using GPT-2 by OpenAI to do this. It's pretty impressive. https://openai.com/blog/better-language-models/
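For concreteness, here is a minimal sketch of that generation loop in Python/Keras. It assumes you already have a trained model that predicts the next word from a fixed-length sequence of token ids and a fitted tokenizer; both names are placeholders for whatever your own setup uses:
import numpy as np
from tensorflow.keras.preprocessing.sequence import pad_sequences

def generate(model, tokenizer, seed_text, num_words, max_len):
    text = seed_text
    for _ in range(num_words):
        encoded = tokenizer.texts_to_sequences([text])[0]
        encoded = pad_sequences([encoded], maxlen=max_len, padding='pre')
        probs = model.predict(encoded, verbose=0)[0]
        next_id = int(np.argmax(probs))                       # or sample from probs for more variety
        text += ' ' + tokenizer.index_word.get(next_id, '')   # feed the output back in as input
    return text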
",32467,,,,,1/2/2020 21:07,,,,0,,,,CC BY-SA 4.0
17337,1,,,1/2/2020 21:12,,3,64,"I've been thinking about what ""mathematical model"" can be used to model every possible thing (including itself).
Examples: a simple neural network models a function but doesn't model an algorithm. A list of instructions models an algorithm but doesn't model relations between elements...
You might be thinking ""maybe there is nothing that can model everything"", but in reality ""language"" does model everything, including itself. The issue is that it's not an organized model, and it's not clear how to create it from scratch (e.g. if you were to send it to aliens that don't have any common knowledge to start with).
The structure formalization I'm looking for has to have a few necessary properties:
,0,11,,,,CC BY-SA 4.0
Let $n=C*K_w*K_h$. Then you should only need $n$ filters, not $2^n$, to keep all the information. If you just used the rows of the identity matrix as your filters, then your convolution would just be making an exact copy, so it definitely wouldn't be throwing away information. On the other hand, there will be a max pooling operation. To simplify the question, let's suppose we have 3 channels and a 1 by 1 kernel, and that it is just one convolution followed by global max pooling. Also, let's use your assumption that it's all binary. If you have $m$ filters, then the final output will be $m$-dimensional no matter how many input points you have. So clearly information is being thrown away there. But that's not such a bad thing. Throwing away irrelevant information gets us closer to the features we need for the problem at hand. The parts that get thrown away by max pooling correspond to features not being found in a particular part of the image.
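To make both halves of that argument concrete, here is a tiny NumPy illustration (shapes and values are arbitrary): with the 3 rows of the identity matrix as 1x1 filters the convolution is an exact copy, while global max pooling collapses everything down to one value per filter:
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=(8, 8, 3))   # binary image, H x W x C
filters = np.eye(3)                      # 3 filters = rows of the identity matrix
conv_out = x @ filters.T                 # 1x1 convolution: a per-pixel linear map
print(np.array_equal(conv_out, x))       # True -> nothing thrown away by the conv itself
pooled = conv_out.max(axis=(0, 1))       # global max pooling
print(pooled.shape)                      # (3,) regardless of image size -> information discarded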
",32467,,,,,1/2/2020 21:27,,,,4,,,,CC BY-SA 4.0
A neural network is composed of continuous functions. Neural networks are regularized by adding an l2 penalty on the weights to the loss function. This means the neural network will try to make the weights as small as possible. The weights are also initialized with a $N(0, 1)$ distribution, so the initial weights will also tend to be small. All of this means that neural networks compute a continuous function that is as smooth as possible while still fitting the data. By smooth I mean that similar inputs will tend to have similar outputs when run through the neural network. More formally, $||x-y||$ small implies $||f(x)-f(y)||$ small, where $f$ represents the output of the neural network. This means that if a neural network sees a novel input $x$ that is close to an input $y$ from the training data, then $f(x)$ will tend to be close to $f(y)$. So the end result is that the neural network will classify $x$ based on the labels of the nearby training examples. In that way, the neural network is actually a little like k-nearest neighbors.
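For reference, this is roughly what that l2 penalty looks like in practice; the sketch below uses Keras, and the layer sizes and penalty strength are arbitrary placeholders:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.regularizers import l2

model = Sequential([
    Dense(64, activation='relu', kernel_regularizer=l2(1e-4), input_shape=(20,)),
    Dense(10, activation='softmax', kernel_regularizer=l2(1e-4)),
])
# The l2 terms are added to the loss, pushing the weights towards small values
# and hence towards a smoother fitted function.
model.compile(optimizer='adam', loss='categorical_crossentropy')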
Another way for neural networks to generalize is by using invariance. For example, convolutional neural networks are approximately translation invariant. So, if a network sees an image where the object in question has been translated, it will still recognize the object.
But it's not giving us the exact function we want. The loss function is a combination of classification accuracy and making the weights small, so that you fit the data with a function that is as smooth as possible. This tends to generalize well for the reasons given above, but it's just an approximation. You can solve the problem more exactly, under minimal assumptions, with a Gaussian process, but Gaussian processes are too slow to handle large amounts of data.
",32467,,32467,,1/4/2020 21:42,1/4/2020 21:42,,,,0,,,,CC BY-SA 4.0
17341,2,,11542,1/2/2020 21:45,,1,,"If the AI is static (heuristic and fixed), it will always pursue the stated goal. However, such a system would be ""brittle"", and either break or produce bad output if confronted with input not previously defined, or outside its model.
If the AI evolves via learning, even where the goal is specific, its interpretation of that goal might change, and produce unexpected results. (The ""I, Robot"" scenario.)
If the AI is emergent, by which I mean it evolves in a way that cannot be predicted, it might evolve new goals.
To answer the question directly:
Hypothetically, if there was an AGI or artificial superintelligence, or ultraintelligent machine tasked with protecting humans, and that AI perceived humans to be destroying themselves, that AI would, if able, take control of human society. (I don't see this as contradicting its goal.)
However, it must be stated that, in a condition of imperfect & incomplete information, where the problem is intractable, the AI is just guessing like we humans do, even if it makes better guesses, as in the case of narrowly intelligent AIs like AlphaGo.
",1671,,,,,1/2/2020 21:45,,,,9,,,,CC BY-SA 4.0
I think if you got the dataset, then a standard 1D convolutional neural network would work to some extent. It's not that there is some property of nearby sounds that it would pick up on; it would just memorize all the sounds that tend to come from your desk. I think the coding part would be pretty standard stuff, but collecting the data will be hard. You have to get a really big labeled dataset of sounds coming from your desk and sounds coming from beyond a 3-foot radius. This dataset has to be realistic and representative of the real world. Getting that dataset would be pretty tricky, but it is doable if you put multiple microphones in your house in order to triangulate the exact positions of all sounds. It would be like GPS, but using sound waves instead of radio waves.
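Just to give an idea of what I mean by a standard 1D CNN, here is a rough Keras sketch. It assumes each clip has already been converted to a fixed-length 1D array (16000 raw samples here is an arbitrary choice), and all layer sizes are illustrative only:
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, GlobalMaxPooling1D, Dense

model = Sequential([
    Conv1D(16, kernel_size=9, activation='relu', input_shape=(16000, 1)),
    MaxPooling1D(4),
    Conv1D(32, kernel_size=9, activation='relu'),
    GlobalMaxPooling1D(),
    Dense(1, activation='sigmoid'),   # 1 = sound from the desk, 0 = sound from elsewhere
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])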
",32467,,,,,1/2/2020 21:59,,,,0,,,,CC BY-SA 4.0
17343,2,,17306,1/2/2020 22:06,,1,,"GPT2 predicts the next word that people will say. https://openai.com/blog/better-language-models/
Facebook predicts what will make you keep using their site.
Youtube predicts what videos you will click on.
",32467,,,,,1/2/2020 22:06,,,,0,,,,CC BY-SA 4.0
17344,1,17367,,1/3/2020 1:14,,4,103,"I've been using several resources to implement my own artificial neural network package in C++.
Among some of the resources I've been using are
https://www.anotsorandomwalk.com/backpropagation-example-with-numbers-step-by-step/
https://mattmazur.com/2015/03/17/a-step-by-step-backpropagation-example/
https://cs.stanford.edu/people/karpathy/convnetjs/intro.html,
as well as several others.
My code manages to replicate the results in the first two resources exactly. However, these are fairly simple networks in terms of depth. Hence the following (detailed) question:
For my implementation, I've been working with the MNIST Database of handwritten digits (http://yann.lecun.com/exdb/mnist/).
Using the ANN package I wrote, I have created a simple ANN with 784 input neurons, one hidden layer with 16 neurons, and an output layer with ten neurons. I have implemented ReLU on the hidden layer and the output layer, as well as a softmax on the output layer to get probabilities. The weights and biases are each individually initialized to random values in the range [-1,1].
So the network is 784x16x10.
My backpropagation incorporates weight gradient and bias gradient logic.
With this configuration, I repeatedly get about a 90% hit rate with a total average cost of ~0.07 on the MNIST training set comprising 60,000 digits, and a slightly higher hit rate of ~92.5% on the test set comprising 10,000 digits.
For my first implementation of an ANN, I am pretty happy with that. However, my next thought was:
""If I add another hidden layer, I should get even better results...?"".
So I created another artificial network with the same configuration, except for the addition of another hidden layer of 16 neurons, which I also run through a ReLU. So this network is 784x16x16x10.
On this ANN, I get significantly worse results. The hit rate on the training set repeatedly comes out at ~45% with a total average error of ~0.35, and on the test set I also only get about 45%.
This leads me to either one or both of the following conclusions:
A) My implementation of the ANN in C++ is somehow faulty. If so, my bet would be it is somewhere in the backpropagation, as I am not 100% certain my weight gradient and bias gradient calculation is correct for any layers before the last hidden layer.
B) This is an expected effect. Something about adding another layer makes the ANN not suitable for this (digit classification) kind of problem.
Of course, A, B, or A and B could be true.
Could someone with more experience than me give me some input, especially on whether B) is true or not?
If B) is not true, then I know I have to look at my code again.
",32471,,,,,1/4/2020 11:48,Is it expected that adding an additional hidden layer to my 3-layer ANN reduces accuracy significantly?,,1,4,,,,CC BY-SA 4.0
17345,2,,17317,1/3/2020 3:06,,0,,"A fairly recent paper posits an answer to this:
Reconciling modern machine learning practice and the bias-variance trade-off.
Mikhail Belkin, Daniel Hsu, Siyuan Ma, Soumik Mandal
https://arxiv.org/abs/1812.11118
https://www.pnas.org/content/116/32/15849
I'm probably not qualified to summarize, but it sounds like their conjectured mechanism is: by having far more parameters than are needed even to perfectly interpolate the training data, the space of possible resulting functions expands to include ""simpler"" functions (simpler here obviously not meaning fewer parameters, but instead something like ""less wiggly"") that generalize better even while perfectly interpolating the training set.
That seems completely orthogonal to the more traditional ML approach of reducing capacity via dropout, regularization, etc.
",21542,,,,,1/3/2020 3:06,,,,0,,,,CC BY-SA 4.0
17346,1,,,1/3/2020 8:28,,2,21,"So I am trying to use a majority vote classifier combining different models and I was wondering if it is acceptable to use different training sets for the individual models (including different features) if these sets all come from one larger dataset?
Thanks
",32477,,,,,1/3/2020 8:28,Is it acceptable to use various training sets for the individual models when using a majority vote classifier?,,0,0,,,,CC BY-SA 4.0
I'm trying to determine the frequency of a signal with a NN. I'm using the Adeline model for my project, and I'm taking a few samples at each 0.1-volt step of a true signal and a noisy one.
First question: am I approaching this the wrong way?
Second question: my network works fine as long as the frequency of the test samples equals the frequency of the training samples; otherwise, the network doesn't work and gives me the wrong answer.
What do I need to do for this model?
Edit: I understand now that my problem is not overfitting! I found that my sampling steps are linear while my samples are nonlinear, so this is wrong.
To solve this problem, I must use nonlinear steps, like logarithmic steps. But how do I use logarithmic steps in MATLAB?
",32481,,32481,,1/4/2020 14:15,1/4/2020 14:15,Determine Frequency from Noisy Signal With Neural Networks (With Adeline Model),,0,1,,,,CC BY-SA 4.0
As you know, an LSTM language model takes in the past words, tries to predict the next one, and continues in a loop. A sentence is divided into tokens, and depending on the method, the tokens are divided differently. Some models are character-based, and simply use each character as input and output. In this case, you can treat punctuation as one more character and just run the model as normal. For word-based models, which are commonly used in many systems, we treat punctuation as its own token; the period is commonly called an end-of-sentence token. There is also a specific token for the end of the output. This lets the system know when to finish and stop predicting.
Also, just so you know, for language models trying to generate original text, the output is fed back in as the input for the next step, but the output chosen is not necessarily the single most likely word. They set a threshold and sample among the candidates above it. This introduces diversity into the language model, so that even when the starting word is the same, the generated sentence/paragraph will be different and not the same one again and again.
For some state-of-the-art models, you can try GPT-2, as mentioned by @jdleoj23. This is a byte-level model (roughly character-based, so it can represent any Unicode text) that uses attention and transformers. The advantage of a character-level system is that even inputs with spelling errors can be fed into the model, and new words not in the dictionary can be introduced.
However, if you want to learn more about how language models work, and you are not just striving for the best performance, you should try implementing a simple one yourself. You can try following this article, which uses Keras to build a language model.
https://machinelearningmastery.com/develop-word-based-neural-language-models-python-keras/
The advantage of making a simple one is that you can actually understand the encoding process, the tokenization process, the underlying model and more, instead of relying on other people's code. The article uses the Keras Tokenizer, but you could try writing your own using regex and simple string processing.
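To give a feel for what such a simple model looks like, here is a stripped-down word-level language model in Keras, in the spirit of the linked article (the vocabulary size, sequence length and layer sizes are placeholders you would set from your own data):
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size = 5000   # number of distinct tokens produced by your tokenizer
seq_len = 10        # how many previous words are used to predict the next one

model = Sequential([
    Embedding(vocab_size, 64, input_length=seq_len),
    LSTM(128),
    Dense(vocab_size, activation='softmax'),   # distribution over the next word
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
# model.fit(X, y, ...) where each row of X holds seq_len word indices and y is the
# index of the word that follows them.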
Hope this is useful for you.
",23713,,23713,,1/3/2020 11:06,1/3/2020 11:06,,,,0,,,,CC BY-SA 4.0
17349,1,,,1/3/2020 15:27,,3,338,"I am working on a project that requires time-series prediction (regression) and I use LSTM network with first 1D conv layer in Keras/TF-gpu as follows:
model = Sequential()
model.add(Conv1D(filters=60, activation='relu', input_shape=(x_train.shape[1], len(features_used)), kernel_size=5, padding='causal', strides=1))
model.add(CuDNNLSTM(units=128, return_sequences=True))
model.add(CuDNNLSTM(units=128))
model.add(Dense(units=1))
As a result, my model is clearly overfitting:
![]()
So I decided to add dropout layers, first I added layers with 0.1, 0.3 and finally 0.5 rate:
model = Sequential()
model.add(Dropout(0.5))
model.add(Conv1D(filters=60, activation='relu', input_shape=(x_train.shape[1], len(features_used)), kernel_size=5, padding='causal', strides=1))
model.add(Dropout(0.5))
model.add(CuDNNLSTM(units=128, return_sequences=True))
model.add(Dropout(0.5))
model.add(CuDNNLSTM(units=128))
model.add(Dense(units=1))
However, I think that it has no effect on the network's learning process, even though 0.5 is quite a large dropout rate:
![]()
Is it possible that dropout has little/no effect on the training process of an LSTM, or am I doing something wrong here?
[EDIT] Adding plots of my TS, general and zoomed in view.
![]()
I also want to add that the time of training increases just a bit (i.e. from 1540 to 1620 seconds) when I add the dropout layers.
",22659,,22659,,1/7/2020 14:00,3/2/2020 16:54,Can dropout layers not influence LSTM training?,,1,0,,,,CC BY-SA 4.0
17350,1,,,1/3/2020 15:43,,2,57,"What are some common approaches to estimate the transition or observation probabilities, when the probabilities are not exactly known?
When realizing a POMDP model, the state model needs additional information in terms of transition and observation probabilities. Often these probabilities are not known and an equal distribution is also not given. How can we proceed?
",27777,,2444,,1/3/2020 20:09,1/3/2020 20:09,What are some approaches to estimate the transition and observation probabilities in POMDP?,,0,1,,,,CC BY-SA 4.0
Is there some established object detection algorithm that is able to detect the four corners of an arbitrary quadrilateral (x0,y0,x1,y1,x2,y2,x3,y3), as opposed to the more typical axis-aligned rectangle (x,y,w,h)?
",21583,,,,,9/25/2021 10:03,"Object Detection Algorithm that detects four corners of arbitrary quadrilateral, not just perpendicular rectangular",,1,0,,,,CC BY-SA 4.0
I've written a program to analyse a given piece of text from a website and make conclusive classifications as to its validity. The code basically vectorizes the description (taken from the HTML of a given webpage in real time) and takes in a few inputs from that as features to make its decisions. There are some more features, like the domain of the website and some keywords I've explicitly counted.
The highest accuracy I've been able to achieve is with a RandomForestClassifier (>90%). I'm not sure what I can do to make this accuracy better, except incorporate a more sophisticated model. I tried using an MLP, but for no set of hyperparameters does it seem to exceed the previous accuracy. I have around 2000 datapoints available for training.
Is there any classifier that works best for such projects? Does anyone have any suggestions as to how I can bring about improvements? (If anything needs to be elaborated, I'll do so.)
Any suggestions on how I can improve this project in general? Should I include the text on a webpage as well? How should I do so? I tried going through a few sites, but the text doesn't seem to be contained in any specific element, whereas the description is easy to obtain from the HTML. Any help?
What else can I take as features? If anyone could suggest any creative ideas, I'd really appreciate it.
",32490,,32490,,1/4/2020 15:52,1/4/2020 15:52,Is there any classifier that works best in general for NLP based projects?,,2,0,,,,CC BY-SA 4.0
17353,2,,17349,1/3/2020 17:34,,2,,"A couple of points:
Have you firstly scaled your data, e.g. using MinMaxScaler? This could be one reason why your loss readings remain high.
Additionally, consider that while Dropout can be useful for reducing overfitting, it is not necessarily a panacea.
Let's take an example of using LSTM to forecast fluctuations in weekly hotel cancellations.
Model without Dropout
# Generate LSTM network
model = tf.keras.Sequential()
model.add(LSTM(4, input_shape=(1, previous)))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history=model.fit(X_train, Y_train, validation_split=0.2, epochs=20, batch_size=1, verbose=2)
Over 20 epochs, the model achieves a validation loss of 0.0267 without Dropout.
![]()
Model with Dropout
# Generate LSTM network
model = tf.keras.Sequential()
model.add(LSTM(4, input_shape=(1, previous)))
model.add(Dropout(0.5))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
history=model.fit(X_train, Y_train, validation_split=0.2, epochs=20, batch_size=1, verbose=2)
However, validation loss is slightly higher with Dropout at 0.0428.
![]()
- Make sure you have specified the loss function correctly. If you are forecasting a time series, then you are most likely working with interval data. Therefore, mean_squared_error is an appropriate loss function as one is trying to estimate the deviation between the predicted and actual values.
As a counterexample, binary_crossentropy would not be suitable as the time series is not a classification set. However, misspecifying the loss function is a common error. Therefore, you also want to make sure you are using the appropriate loss function and then work from there.
",22692,,22692,,3/2/2020 16:54,3/2/2020 16:54,,,,4,,,,CC BY-SA 4.0
The whole idea behind those distributed optimization methods is that data should stay local to every node/worker. Thus, if you only send the loss value to the central node, this node can't compute the gradients of that loss, and so can't do any training. However, if you don't want to send gradients, a family of distributed optimization algorithms called consensus-based optimization can be used, in which each node only sends its local model weights to neighbouring nodes, and those nodes use their local gradients and the models from their neighbours to update their local models.
",32493,,2444,,1/3/2020 21:17,1/3/2020 21:17,,,,0,,,,CC BY-SA 4.0
In reality, any continuous function on a compact set can be approximated by a neural network with one hidden layer containing a finite number of neurons (this is the Universal Approximation Theorem). Thus you only need one hidden layer to approximate multiplication on a compact set; note that you need to apply a non-linear activation on the hidden layer to do this.
",32493,,21583,,1/4/2020 20:15,1/4/2020 20:15,,,,2,,,,CC BY-SA 4.0
17356,2,,17304,1/3/2020 19:08,,5,,"$\ell_{2,1}$ is a matrix norm, as stated in this paper.
For a certain matrix $A \in \mathbb{R}^{r\times c}$,
we have
$$\|A\|_{2,1} = \sum_{i=1}^r \sqrt{\sum_{j=1}^c A_{ij}^2}$$
You first apply the $\ell_2$ norm to each row to obtain a vector with $r$ dimensions. Then, you apply the $\ell_1$ norm to that vector to obtain a real number. You can generalize this notation to every norm $\ell_{p,q}$.
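A quick numerical check of the definition, using a small arbitrary matrix:
import numpy as np

A = np.array([[3.0, 4.0],
              [0.0, 5.0]])
l21 = np.sqrt((A ** 2).sum(axis=1)).sum()   # l2 norm of each row, then l1 norm of the results
print(l21)                                  # 5.0 + 5.0 = 10.0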
",32485,,53322,,3/10/2022 17:57,3/10/2022 17:57,,,,0,,,,CC BY-SA 4.0
Will CNNs, LSTMs, GRUs and transformers be better classified as Computational Intelligence (CI) tools or Artificial General Intelligence (AGI) tools? The term CI arose back when methods like neural networks, genetic algorithms and particle swarm optimization were considered to be doing magical stuff. These days CI tools do not appear very magical. Researchers want programs to exhibit AGI. Do the current state-of-the-art deep learning models fall into the AGI category?
",12850,,2444,,1/3/2020 20:55,1/4/2020 23:41,"Are CNN, LSTM, GRU and transformer AGI or computational intelligence tools?",,1,0,,,,CC BY-SA 4.0
The accuracy depends on various factors, and it might not always be the algorithm. For example, cleaner data with a poor algorithm might still give better results, and vice versa.
What preprocessing techniques are you using? This preprocessing techniques article is a good starting point for HTML data. And by vectorising I assume you mean word2vec; if so, use a pre-trained word2vec model, like Google's, which is trained on a lot of data (about 100 billion words).
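For example, loading Google's pre-trained vectors with gensim looks roughly like this (the file name is the commonly distributed archive and stands in for wherever you saved it):
from gensim.models import KeyedVectors

wv = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True)
print(wv.most_similar('information', topn=3))   # nearest words in the embedding space
print(wv['website'].shape)                      # a (300,) vector usable as a feature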
LSTMs perform well whenever the intent of the sentence is important. Check out this.
""Ram hit Vijay"" and ""Vijay hit Ram"" might mean the same thing to most algorithms, for example Naive Bayes.
",32408,,32408,,1/3/2020 21:24,1/3/2020 21:24,,,,0,,,,CC BY-SA 4.0
17360,1,,,1/3/2020 22:50,,3,891,"I am currently using a loss averaged over the last 100 iterations, but this leads to artifacts like the loss going down even when the current iteration has an average loss, because the loss 100 iterations ago was a large outlier.
I thought about using different interval lengths, but I wonder if an average over the last few iterations really is the right way to plot the loss.
Are there common alternatives? Maybe using decaying weights in the average? What are the best practices for visualizing the loss?
",25798,,2444,,12/29/2021 23:03,12/29/2021 23:03,What is the best way to smoothen out a loss curve plot?,,1,3,,,,CC BY-SA 4.0
First of all, there are multiple factors that determine how well a model will work: amount of data, source of data, hyperparameters, model type, training time, etc. All of these will affect the accuracy. However, no classifier works best in general. It all depends on these factors, and no single classifier can satisfy all of them, at least for now.
To improve the accuracy, we first need to make those factors as favourable as possible so that the classification achieves a higher accuracy.
First of all, how much data do you have? If you are using HTML webpages, you probably need at least 10000 data samples; if you have at least that amount of data, overfitting should be manageable. You also need to clean the data. One way to do this is to tokenize it: tokenization basically means splitting the text into words and building a dictionary out of them, after which each word is encoded as a specific number, with the same word always getting the same encoding. You are using the raw HTML as input, which has a lot of unnecessary information, tags and so on; you can try removing those, or completely remove all HTML tags if they are not required. The key to cleaning the data is to extract the pieces of information that are important and necessary for the model to work.
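As an illustration of that tokenization step, here is a minimal sketch using the Keras Tokenizer (the example texts are placeholders for your cleaned pages):
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ['cheap watches on sale now', 'independent reviews of cheap watches']

tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)                 # builds the word -> integer dictionary
print(tokenizer.word_index)                   # the same word always maps to the same number
print(tokenizer.texts_to_sequences(texts))    # each text as a list of integer codes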
Then, you should explore the model. For an NLP (Natural Language Processing) task, your best bet is an RNN (Recurrent Neural Network). This type of network has memory cells that help with text data, since text often has long-range links within a paragraph; for example, one sentence may use a ""she"" that refers to a person mentioned two sentences before, and if you just fed every word encoding into an MLP, it would not have the memory needed to learn such long-term connections in the text. An RNN is also time dependent, meaning it processes each token one by one in the direction of time. This makes the text more intuitive to the network, as text is designed to be read forward, not all at once.
Your current method is to first vectorize the HTML code, then feed it into a random forest classifier. A random forest classifier works great, but it does not scale well with more data: its accuracy stays mostly the same as the data grows, while in deep neural networks the accuracy increases with the amount of data. However, a deep neural network requires a large amount of data to start with. If you don't have too much data (< 10000 samples), the random forest should be your method of choice. However, if you plan to add more data, or if the data grows, you should try a deep learning based method.
For a deep learning based method, ULMFiT is a great model to try. It uses an LSTM (Long Short-Term Memory) network (which is a type of RNN) with language-model pretraining and several other techniques to increase the accuracy. You can try it with the fast.ai implementation. https://nlp.fast.ai/
If you wish to try a method that you can practically implement yourself, you could use a plain LSTM with one-hot encoding as input. However, don't use word2vec for preprocessing, as your input data is HTML code: the word2vec model is for normal English text, not HTML tags and the like. Moreover, a custom encoding will work better, since it can be trained along with the rest of the network.
Hope I can help you
",23713,,,,,1/4/2020 1:36,,,,8,,,,CC BY-SA 4.0
You can use OpenCV's cv2.minAreaRect() to detect oriented/rotated rectangular bounding boxes. Below is an example result from the OpenCV-Python tutorials:
![]()
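A minimal sketch of that route (assuming a reasonably clean binary mask of the object and the OpenCV 4.x contour API; the file name is a placeholder):
import cv2
import numpy as np

img = cv2.imread('object.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

cnt = max(contours, key=cv2.contourArea)      # take the largest blob
rect = cv2.minAreaRect(cnt)                   # ((cx, cy), (w, h), angle)
corners = np.int32(cv2.boxPoints(rect))       # the 4 (x, y) corner points
cv2.drawContours(img, [corners], 0, (0, 255, 0), 2)
Note that cv2.minAreaRect() gives an oriented rectangle, so for a general (non-rectangular) quadrilateral you would need the learned approach below.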
Alternatively, you could train a supervised object detection model to output the 8 coordinate values (x0,y0,x1,y1,x2,y2,x3,y3) of the quadrilateral by training on a dataset labeled with oriented bounding boxes. You could also create such bounding-box labels yourself using tools such as the VGG Annotator Tool, among others.
",30644,,,,,1/4/2020 7:58,,,,0,,,,CC BY-SA 4.0
Imitation learning uses the experiences of an (expert) agent to train another agent, in my understanding. If I want to use an on-policy algorithm, for example Proximal Policy Optimization, then because of its on-policy nature we cannot use the experiences generated by another policy directly. Importance sampling can be used to overcome this limitation; however, it is known to be highly unstable. How can imitation learning be used with such on-policy algorithms while avoiding the stability issues?
",29879,,2444,,11/5/2020 22:33,11/5/2020 22:33,Can we use imitation learning for on-policy algorithms?,,0,0,,,,CC BY-SA 4.0
17365,1,,,1/4/2020 9:21,,2,56,"I have a hard time formulating this question(I'm not knowledgeable enough I think), so I'll give an example first and then the question:
You have a table of data, let's say the occupancy of a building during the course of the day; each row has columns like ""people_inside_currently"", ""apartment_id"", ""hour_of_day"", ""month"", ""year"", ""name_of_day""(monday-sunday), ""amount_of_kids"", ""average_income"" etc.
You might preprocess two columns into a column ""percent_occupied_during_whole_day"" or something like that, and you want to group the data points in accordance with this as the main focus.
What I'm wondering is: why use machine learning (particularly unsupervised clustering) for this? Why not just put it into an SQL database table (for example), compute the new column from the two existing ones, sort in descending order, and then split it into ""top 25%, next 25%, next 25%, last 25%"" and output this as ""categories of data""? This is simpler, isn't it? I don't see the value of, for instance, running a Principal Component Analysis on it, reducing the columns to some ""unifying columns"" which you don't know what to call anymore, and looking at that output, when you can get much clearer results by simply sorting and dividing the rows like this. I don't see the use of unsupervised clustering. I've googled a bunch of terms, but only found tutorials, definitions and applications (which seemed unnecessarily complex for such simple work), but no explanation of this.
",20747,,,,,1/4/2020 9:21,Why machine learning instead of simple sorting and grouping?,,0,2,,,,CC BY-SA 4.0
I have a dataset in which class A has 99.8%, class B 0.1% and class C 0.1%. If I train my model on this dataset, it always predicts class A. If I do oversampling, it predicts the classes evenly. I want my model to predict class A around 98% of the time, class B 1% and class C 1%. How can I do that?
",32499,,,,,1/4/2020 9:38,Rarely predict minority class imbalanced datasets,,0,6,,,,CC BY-SA 4.0
You probably got the backpropagation wrong. I ran a test on the effect of adding an extra layer on accuracy, and the accuracy went up from 94% to 96% for me. See this for details:
https://colab.research.google.com/drive/17kAJ2KJ36grG9sz-KW10fZCQW9i2Tf2c
To run the notebook, click ""Open in playground"" and run the code. There is a commented line which adds one extra layer. The syntax should be easy to understand even though it is in Python.
For backpropagation, you can look at this Python implementation of multilayer perceptron backpropagation.
https://github.com/enggen/Deep-Learning-Coursera/blob/master/Neural%20Networks%20and%20Deep%20Learning/Building%20your%20Deep%20Neural%20Network%20-%20Step%20by%20Step.ipynb
A network will not usually lose almost half of its accuracy when you add an extra layer in a normal scenario, though it is possible for the accuracy to decrease when you add an extra layer due to overfitting. Even if this happens, though, the performance drop won't be that dramatic.
Hope I can help you.
",23713,,,,,1/4/2020 11:48,,,,3,,,,CC BY-SA 4.0
17369,1,17781,,1/4/2020 13:19,,4,296,"
- I have items called 'Resources' from 1 to 7.
- I have to use them in different actions identified from 1 to 10.
- I can do a maximum of 4 actions each time. This is called 'Operation'.
- The use of a resource has a cost of 1 per each 'Operation' even if it is used 4 times.
- The following table indicates the resources needed to do the related actions:
| Action | Resource 1 | Resource 2 | Resource 3 | Resource 4 | Resource 5 | Resource 6 | Resource 7 |
|--------|------------|------------|------------|------------|------------|------------|------------|
| 1      | 1 | 0 | 1 | 1 | 0 | 0 | 0 |
| 2      | 1 | 1 | 0 | 0 | 1 | 0 | 0 |
| 3      | 1 | 0 | 1 | 0 | 0 | 1 | 0 |
| 4      | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 5      | 1 | 0 | 1 | 1 | 0 | 1 | 0 |
| 6      | 1 | 1 | 1 | 0 | 0 | 0 | 0 |
| 7      | 0 | 1 | 0 | 0 | 0 | 0 | 0 |
| 8      | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
| 9      | 0 | 1 | 0 | 1 | 0 | 0 | 0 |
| 10     | 1 | 1 | 1 | 0 | 0 | 0 | 1 |
The objective is to group all the 'Actions' into 'Operations' in a way that minimizes the total cost. For example, a group composed of actions {3, 7, 9} needs the resources {1, 2, 3, 4, 6} and therefore has a cost of 5, but a group composed of actions {4, 7, 9} needs the resources {2, 4} and therefore has a cost of 2.
All the actions need to be completed as economically as possible.
Which algorithm can solve this problem?
",6207,,6207,,1/4/2020 20:41,1/31/2020 17:50,Which algorithm to use to solve this optimization problem?,,2,5,,,,CC BY-SA 4.0
I am working on a speaker identification problem using a GMM (Gaussian Mixture Model). I only have to identify whether one particular user is present in the given audio, so for the second class I could use noise or silent audio (or not), just like in image classification, where a non-object class is created alongside the object class.
I have tried using a silent class, but then the model always predicts that the user is present (even when they are not).
Is there another model that can give better accuracy under the constraint that only 30 seconds of audio of the particular user is available, while the test audio may be much longer?
",15368,,,,,1/10/2020 7:14,Speaker Identification / Recognition for less size audio files,,0,2,,,,CC BY-SA 4.0
17371,1,17411,,1/4/2020 15:50,,3,3016,"How to calculate mean speed in FPS for an object detection model like YOLOv3 or YOLOv3-Tiny? Different object detection models are often presented on charts like this:
I am using the DarkNet framework in my project and I want to create similar charts for my own models based on YOLOv3. Is there some easy way to get mean FPS speed for my model with the ""test video""?
",30992,,23713,,1/6/2020 2:48,1/8/2020 10:30,Calculation of FPS on object detection task,,2,0,,,,CC BY-SA 4.0
17373,1,,,1/4/2020 18:24,,2,42,"How can a system recognize if two strings have the same or similar meaning?
For example, consider the following two strings
Wikipedia provides good information.
Wikipedia is a good source of information.
What methods are available to do this?
",32506,,2444,,1/4/2020 21:16,1/4/2020 21:16,How can a system recognize if two strings have the same or similar meaning?,,1,0,,,,CC BY-SA 4.0
Getting the intent of a sentence is not an easy task. To get you started, have a look at word vectors. You can also download pre-trained word2vec models. They help with measuring the similarity of words and reasoning with words. To get the intent of a sentence, you can use an LSTM.
Fun fact: most NLP algorithms strip away punctuation, which is sufficient for most cases, but here is a counterexample.
The defendant, who looked apologetic, was found guilty.
The defendant who looked apologetic was found guilty.
They mean different things, and it is difficult to catch the intent even with the best algorithms.
PS: For those wondering about the difference, in the second sentence it seems like there were two defendants, and it was the one who looked apologetic who was found guilty while the other walked away free.
",32408,,,,,1/4/2020 18:51,,,,0,,,,CC BY-SA 4.0
17375,2,,17357,1/4/2020 23:41,,0,,"CNNs, LSTMs, GRUs and transformers are or use artificial neural networks. The expression computational intelligence (CI) is often used interchangeably with artificial intelligence (AI). CI can also refer to a subfield or superfield of AI where biology is often an inspiration. See What is Computational Intelligence and what could it become? by Włodzisław Duch.
RNNs are Turing complete and CNNs have been shown to be universal function approximators (they can approximate any continuous function to an arbitrary accuracy given a sufficiently deep architecture), but that doesn't mean we will be able to create AGI with them, unless you believe that AGI is just a bunch of algorithms, but, IMHO, that alone doesn't produce AGI. See also the computational theory of mind.
To conclude, CNNs, LSTMs, GRUs and transformers are deep learning tools (so they could also be considered CI tools, given some definitions of CI), which might be useful for the development of AGI.
",2444,,,,,1/4/2020 23:41,,,,2,,,,CC BY-SA 4.0
I have two convex, smooth loss functions to minimise. During training (of a very simple model) using batch SGD (with the learning rate tuned separately for each loss function), I observe that the (log) loss curve of loss 2 converges much faster and is much smoother than that of loss 1, as shown in the figure.
What more can I say about the properties of the two loss functions, for example in terms of smoothness, convexity, etc.?
![]()
",22474,,22474,,1/5/2020 0:57,1/5/2020 0:57,Deduce properties of the loss functions from the training loss curves,,0,3,,,,CC BY-SA 4.0
I am proposing a modified version of the Sequence-to-Sequence model with dual decoders. The problem that I am trying to solve is Neural Machine Translation into two languages at once. This is a simplified illustration of the model.
/--> Decoder 1 -> Language Output 1
Language Input -> Encoder -|
\--> Decoder 2 -> Language Output 2
What I understand about back propagation is that we are adjusting the weights of the network to enhance the signal of the targeted output. However, it is not clear to me on how to back propagate in this network because I am not able to find similar implementations online yet. I am thinking of doing the back propagation twice after each training batch, like this:
$$ Decoder\ 1 \rightarrow Encoder $$
$$ Decoder\ 2 \rightarrow Encoder $$
But I am not sure whether backpropagating from Decoder 2 will affect the accuracy of the predictions made by Decoder 1. Is this the case?
In addition, is this structure feasible? If so, how do I properly back propagate in the network?
",32511,,,,,1/5/2020 9:01,How to back propagate for implementation of Sequence-to-Sequence with Multi Decoders,,0,0,,,,CC BY-SA 4.0
I am trying to predict pseudo-random numbers from past numbers using a multilayer perceptron. The error during training is very low. However, as soon as I test it on a test set, the model overfits and returns very bad results. The correlation coefficient and error metrics are both poor.
What would be some of the ways to solve this issue?
For example, if I train it with 5000 rows of data and test it with 1000, I get:
Correlation coefficient 0.0742
Mean absolute error 0.742
Root mean squared error 0.9407
Relative absolute error 146.2462 %
Root relative squared error 160.1116 %
Total Number of Instances 1000
As mentioned, I can train it with as many training samples as I want and the model still overfits. If anyone is interested, I can provide/generate some data and post it online.
",31766,,2444,,1/6/2020 14:08,1/6/2020 19:26,Why does my model overfit on pseudo-random numbers training data?,,1,9,,,,CC BY-SA 4.0
17379,1,17391,,1/5/2020 21:10,,1,96,"I'm evaluating the accuracy in detecting objects for my image data set using three deep learning algorithms. I have selected a sample of 30 images. To measure the accuracy, I manually count the number of objects in each image and then calculate recall and precision values for three algorithms. Following is a sample:
![]()
Finally, to select the best model for my data set, can I calculate the mean recall and mean precision? For example:
![]()
",32343,,,,,1/7/2020 2:45,Can we calculate mean recall and precision,,1,5,,,,CC BY-SA 4.0
Simply said, predicting pseudo-random numbers is just not possible for now. Pseudo-random numbers generated nowadays have high enough ""randomness"" that they cannot be predicted. Pseudo-random numbers are the basis of modern cryptography, which is widely used on the world wide web and elsewhere. Prediction may become possible in the future through faster computers and stronger AI, but for now it is not. If you train a model to fit pseudo-random numbers, the model will just overfit, creating a scenario like the one shown in the question: the training loss will be very low while the test loss will be extremely high. The model will just ""remember"" the training data instead of generalising to all pseudo-random numbers, hence the high test loss.
Also, as a side note, the loss is not represented as a percentage; it is just a raw numeric value.
See this stack exchange answer for details.
",23713,,32408,,1/6/2020 19:26,1/6/2020 19:26,,,,1,,,,CC BY-SA 4.0
You can use the test set of the dataset as the ""frames"" of a video. Run the images through your model and calculate the number of images processed per second; that is the same as frames per second. However, you should set the batch size to 1, as in the real-world scenario. You should also display each image with the corresponding boxes after inference, and remove the accuracy calculation, to imitate the real-world situation.
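A minimal way to do that timing in Python (detect stands in for whatever single-image inference call your framework exposes):
import time

def mean_fps(detect, image_paths):
    start = time.time()
    for path in image_paths:
        detect(path)          # batch size 1, as in the real-time scenario
    elapsed = time.time() - start
    return len(image_paths) / elapsed

# print(mean_fps(my_darknet_detect, test_image_paths))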
",23713,,,,,1/6/2020 2:46,,,,0,,,,CC BY-SA 4.0
17382,1,,,1/6/2020 7:54,,3,65,"![]()
This post refers to Fig. 1 of a paper by Microsoft on their Deep Convolutional Inverse Graphics Network:
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/11/kwkt_nips2015.pdf
Having read the paper, I understand in general terms how the network functions. However, one detail has been bothering me: how does the network decoder (or ""Renderer"") generate small scale features in the correct location as defined by the graphics code? For example, when training the dataset on faces, one might train a single parameter in the graphics code to control the (x,y) location of a small freckle. Since this feature is small, it will be ""rendered"" by the last convolutional layer where the associated kernels are small. What I don't understand is how the information of the location of the freckle (in the graphics code) propagates through to the last layer, when there are many larger-scale unpooling + convolutional layers in-between.
Thanks for the help!
",32505,,23713,,1/6/2020 8:41,1/6/2020 8:41,How are small scale features represented in an Inverse Graphics Network (autoencoder)?,,1,0,,,,CC BY-SA 4.0
You can use the Exponential Moving Average method. This method is used in TensorBoard as a way to smooth a loss curve plot.
The algorithm is as follows:
However, there is a small problem with doing it this way. As you can see, $S_t$ is initialized with the starting value, which makes the start of the curve inaccurate.
The green curve is the ideal curve for the algorithm, but the purple curve is the predicted curve. The curve is not correct at the start. To solve this, a correction factor is added, making the algorithm this:
This introduces WeightedCount which decreases over time to 0.
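In plain Python, the smoothing with a bias correction looks roughly like this (the smoothing factor 0.9 is an arbitrary choice, and the exact constants TensorBoard uses may differ):
def smooth(losses, beta=0.9):
    smoothed, s = [], 0.0
    for t, loss in enumerate(losses, start=1):
        s = beta * s + (1 - beta) * loss
        smoothed.append(s / (1 - beta ** t))   # bias-corrected estimate
    return smoothed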
The Exponential Moving Average is also used in other areas of deep learning, most notably in some optimization algorithms. It is used in Adam, RMSProp and other similar optimizers to smooth out the gradients, making the path to the minimum loss more direct and straightforward.
",23713,,,,,1/6/2020 8:00,,,,0,,,,CC BY-SA 4.0
Simply said, there is no specific ""meaning"" to the features generated. They are simply features fitted through math and calculus, and nobody knows exactly what they represent, and perhaps never will. However, we can run PCA (Principal Component Analysis) to see which feature is the most ""important"" of all, i.e. which feature affects the output image the most. Then, you can try adjusting the value to manually see and guess what it does, but you will never know exactly what it does, as it is an arbitrary feature, not explicitly defined by the network. One value may mean multiple things, or things we humans don't understand. See this amazing video about this for details:
https://youtu.be/4VAkrUNLKSo
This video explains what PCA does and also an example of the features generated by the network.
Small-scale features may simply be ignored, as they don't contribute much to the loss or accuracy, or they may be represented only coarsely (e.g. by a big dot or something else) until the last few layers. With just 80 features, one cannot fully represent a face in such detail, and at the resolution networks like these are trained on, small features like these probably won't show up in the image.
",23713,,,,,1/6/2020 8:37,,,,2,,,,CC BY-SA 4.0
17385,1,17389,,1/6/2020 9:34,,3,386,"I trying to understand the Bellman equation for updating the Q table values. The concept of initially updating the value is clear to me. What is unclear is the subsequent updates to the value. Is the value replaced with each episode? It doesn't seem like this would learn from the past. Maybe average the value from the previous episode with the existing value?
Not specifically from the book. I'm using the equation
$$V(s) = \max_a(R(s, a) + \gamma V(s')),$$
where $\gamma$ is the learning rate. $0.9$ will encourage exploration, and $0.99$ will encourage exploitation. I'm working with a simple $3 \times 4$ matrix from YouTube
",32525,,2444,,1/6/2020 14:03,1/6/2020 16:05,Is the Q value updated at every episode?,,1,0,,,,CC BY-SA 4.0
17386,1,,,1/6/2020 10:40,,3,115,"Consider the following game on a MNIST dataset:
- There are 60000 images.
- You can pick any 1000 images and train your neural network without access to the rest of the images.
- Your final result is the prediction accuracy on the whole dataset.
How can this process be formalized in terms of information theory? I know that information theory works with distributions, but maybe you can provide some hints on how to think in terms of datasets instead of distributions.
- What is the information size of the whole dataset? My first idea was that each image is i.i.d. from a uniform distribution, so its information content is $-\log_2(1/60000)$. But common sense and empirical results (from training neural networks) show that there are similar images and very different images holding a lot more information. For example, if you train the NN only on good-looking images of 1, you will get bad results on unusual 1s.
- How can one formalize the idea that the right strategy is to choose 1000 images that are as different as possible? I was thinking of picking images one by one, each time taking the image with the highest entropy relative to the images already chosen. How should such a distance function be defined?
- How can one show that the whole dataset contains N bits of information, the training dataset contains M bits of information, and that there is a way to choose K < 60000 images that hold >99.9% of the information?
",32526,,,,,1/6/2020 15:11,How to formalize learning in terms of information theory?,,1,0,,,,CC BY-SA 4.0
17388,2,,17386,1/6/2020 15:03,,2,,"In short: It is easy to quantify information, but it is not easy to quantify its usefulness
![]()
I'm not sure how exactly you are looking to formalise your experiment, but it might be helpful to consider these points:
There is no such thing as an absolute measure of information. The amount of information contained in some dataset is dependent on the underlying assumptions that are made when interpreting it, and therefore, the quantity of information conveyed is also dependent on the encoder/decoder (for example, a neural network). See the Wikipedia article on Kolmogorov Complexity.
Entropy is a useful measure of information content when you assume each sample is iid, but this would be a very bad assumption to make for natural images, since they are highly structured. For example, imagine an image with 50% black pixels and 50% white pixels that can be arranged in any configuration: no matter how you arrange them, whether it looks like random noise, a text paragraph, or a chequer board, the entropy value will be identical for each, even though our intuition tells us otherwise (see attached image). The discrepancy between our intuition and the entropy value arises because our intuition does not interpret the image through the ""lens"" of iid pixels, but rather through hierarchical receptive fields in the visual cortex (somewhat analogous to convolutional neural networks).
Calculating the entropy of pixel values in one image is somewhat useful, but calculating the ""entropy"" of a set of images would not be useful, because each image as a whole is treated as if it were a unique arbitrary symbol. I assume this is what you meant by ""the information size of the whole dataset"".
KL-divergence is a distance function that is often used to compare two distributions. Intuitively, it represents the redundant bits generated by a non-ideal compression program that assumes an incorrect data distribution. However, KL-divergence between two natural images will not give you a particularly meaningful result.
If I am not mistaken, you want to find some information metric that will enable you to pick the smallest number of the most optimal images for training and get a good test score with the network. Is that correct? It is an interesting idea. However, in order to define such a metric, we might have to know in advance what features of an image are the most significant for classification, which in some ways defeats the point of using machine learning in the first place, where non-obvious and ""hidden"" features are exploited.
",32505,,32505,,1/6/2020 15:11,1/6/2020 15:11,,,,4,,,,CC BY-SA 4.0
17389,2,,17385,1/6/2020 16:05,,3,,"I think you are a bit confused about what is the update function and the target.
The equation you have there, and what is done in the video is the estimation of the true value of a certain state. In Temporal-Difference algorithms this is called the TD-Target.
The reason for your confusion might be that in the video he starts from the end state and goes backwards using that formula to get the final value of each state. But that is not how you update the values; it is where you want to end up after iterating through the states.
The update formula may have several forms depending on the algorithm. For TD(0), a simple 1-step lookahead in which the state is what is being evaluated (as in your case), the update function is:
$$
V(s) = (1 - \alpha) * V(s) + \alpha * (R(s,a) + \gamma V(s')),
$$
where $\alpha$ is the learning rate. What $\alpha$ does is balance how much of your current estimate you want to change. You keep $1 - \alpha$ of the original value and add $\alpha$ times the TD-target, which uses the reward for the current state plus the discounted estimate of the value of the next state. Typical values for $\alpha$ are 0.1 to 0.3, for example.
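In code form, that update could look like this (a sketch, where V is a dict or array of state-value estimates and r is the observed reward):
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.9):
    td_target = r + gamma * V[s_next]
    V[s] = (1 - alpha) * V[s] + alpha * td_target
    return V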
The estimate will slowly converge into the real value of the state which is given by your equation:
$$
V(s) = \max_a(R(s, a) + \gamma V(s')).
$$
Also, $\gamma$ is actually the discount associated with future states, as is said in the video you referenced. It basically says how much importance you give to future states' rewards. If $\gamma = 0$, then you only care about the reward in your current state to evaluate it (this is not what is used). On the other extreme, if $\gamma = 1$, you will give as much value to a reward received in a state 5 steps ahead as you will to the current state. If you use some intermediate value, you will give some importance to future rewards, but not as much as to the present one. The decay of a reward received in a state $n$ steps in the future is given by $\gamma^n$.
Another thing that I would correct is that the exploration-exploitation balance is not in any way related to $\gamma$. It is normally handled by some policy, for example $\epsilon$-greedy, which says that a certain percentage of the actions you take are random, which in turn makes you explore less-valued states.
",24054,,,,,1/6/2020 16:05,,,,0,,,,CC BY-SA 4.0
17390,1,,,1/6/2020 16:15,,1,108,"I am using the shapenet dataset. From this dataset, I have 3d models in .obj format. I rendered the images of these 3d models using pyrender library which gives me an image like this :
![]()
Now I am using raycasting to voxelize this image. The voxel model I get is something like below :
![]()
I am not able to understand why I am getting the white or light brown colored artifacts in the boundary of the object.
The reason I could come up with is that the pixels at the boundary of the object contain two colors, so when I traverse the image as a NumPy array, I get an average of these two colors, which gives me these artifacts. But I am not sure if this is the correct reason.
If anyone has any idea about what could be the reason, please let me know
",32534,,,,,1/6/2020 16:15,Rendering images and voxelizing the images,,0,6,,,,CC BY-SA 4.0
17391,2,,17379,1/6/2020 16:26,,3,,"For the precision metric for example you have:
$$
Precision = \frac{TP}{TP+FP},
$$
with TP = True Positive and FP = False Positive.
Imagine you have the following values:
Image 1: $TP = 2, FP = 3$
Image 2: $TP = 1, FP = 4$
Image 3: $TP = 3, FP = 0$
The precision scores as you calculated will be:
Image 1: $2/5$
Image 2: $1/5$
Image 3: $1$
Your average will be: $0.533$
On the other hand if you sum them all up and then calculate the precision value you get:
$P = \frac{6}{6+7} = 0.462$
This proves that averaging the precision scores is not the same as calculating the total precision in one go.
Since what you want is to know how precise your algorithm is, independently of the precision on each individual image, you should sum all the TP and FP and only then calculate the precision for each model. This way you will not have a biased average: the per-image average gives the same weight to an image with a large number of objects as to an image with only a few objects.
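The difference between the two aggregations, using the numbers above:
tp = [2, 1, 3]
fp = [3, 4, 0]

macro = sum(t / (t + f) for t, f in zip(tp, fp)) / len(tp)   # average of per-image precisions
micro = sum(tp) / (sum(tp) + sum(fp))                        # pool the counts, then one precision
print(round(macro, 3), round(micro, 3))                      # 0.533 0.462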
",24054,,24054,,1/7/2020 2:45,1/7/2020 2:45,,,,2,,,,CC BY-SA 4.0
17393,1,17397,,1/6/2020 18:56,,3,107,"I'm trying to understand distributional RL, based on this article. In one of the equations, there is a symbol $\operatorname{sup dist}$.
\begin{align}
\operatorname{sup dist}_{s, a} (R(s, a) + \gamma Z(s', a^*), Z(s, a)) \\
s' \sim p(\cdot \mid s, a)
\end{align}
What does $\operatorname{sup dist}$ mean?
",32540,,2444,,1/7/2020 11:45,1/7/2020 11:45,What does the notation sup dist mean in distributional RL?,