diff --git "a/stack_exchange/AI/AIQ&A 2019.csv" "b/stack_exchange/AI/AIQ&A 2019.csv" new file mode 100644--- /dev/null +++ "b/stack_exchange/AI/AIQ&A 2019.csv" @@ -0,0 +1,60736 @@ +Id,PostTypeId,AcceptedAnswerId,ParentId,CreationDate,DeletionDate,Score,ViewCount,Body,OwnerUserId,OwnerDisplayName,LastEditorUserId,LastEditorDisplayName,LastEditDate,LastActivityDate,Title,Tags,AnswerCount,CommentCount,FavoriteCount,ClosedDate,CommunityOwnedDate,ContentLicense +9775,2,,9751,1/1/2019 7:01,,1,,"
If you have a grayscale image, that means you are getting data from one sensor. +If you have an RGB image, that means you are getting data from three sensors. +If you have a CMYK image, that means you are getting data from four sensors.
+ +So, the channels can be considered as the same information seen from different perspectives (here, color).
+ +If you see how the kernel (for example 5*5*3) moves, it moves only in XY direction and not in the channel direction. So, you are trying to learn features in XY direction from all the channels together.
+ +But, if you exchange the dimensions as you mentioned, your XY dimensions become 200*3 or 3*200 and your number of channels becomes 200. In this case, the kernel no longer moves in the actual XY spatial space of the image, so, in my opinion, it doesn't make sense: you would be contradicting the basic idea of a CNN by doing so.
+ +The concept of a CNN is precisely that you want to learn features from the spatial domain of the image, i.e., the XY dimensions. So, you should not swap the dimensions in the way you described.
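+ +To make the shape bookkeeping concrete, here is a minimal sketch (assuming PyTorch; the sizes are only illustrative) showing that a 5x5 kernel is defined over all 3 channels but slides only along the XY directions:
+
+    import torch
+    import torch.nn as nn
+
+    x = torch.randn(1, 3, 200, 200)   # one RGB image: 3 channels, 200x200 in XY
+    conv = nn.Conv2d(in_channels=3, out_channels=8, kernel_size=5)
+    print(conv.weight.shape)          # torch.Size([8, 3, 5, 5]): each filter spans all 3 channels
+    print(conv(x).shape)              # torch.Size([1, 8, 196, 196]): sliding happened only in XY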
+",20760,,,,,1/1/2019 7:01,,,,0,,,,CC BY-SA 4.0 +9778,2,,9442,1/1/2019 12:01,,0,,"To generate high level programming language code in the context of genetic programming the Grammatical Evolution technique could be a good start. It allows to generate syntactically correct samples according to a grammar, so there will be no (syntactic) garbage in a population.
+ +In the original implementation it has (very simple and) quite destructive mutation and crossover operators. This could be changed by making the operators more sophisticated, so they respect the actual tree-like structure of the samples and the grammar constraints, but effectively it will result in implementing the classical tree-based Genetic Programming system (which isn't bad).
+ +The evaluation of such samples should be done by executing them in an appropriate environment (the actual VM and the desired map or whatever).
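+ +As a rough illustration of the genotype-to-phenotype mapping used in Grammatical Evolution, here is a minimal sketch with a toy grammar (the grammar and program primitives are made up for the example, not taken from the original papers):
+
+    import itertools, random
+
+    # A tiny toy grammar: every derivation is a syntactically valid program.
+    GRAMMAR = {'<expr>': [['<expr>', ';', '<expr>'], ['move()'], ['turn()']]}
+
+    def decode(genome, symbol='<expr>', depth=0, codons=None):
+        codons = itertools.cycle(genome) if codons is None else codons
+        if symbol not in GRAMMAR:
+            return symbol                      # terminal symbol: emit as-is
+        options = GRAMMAR[symbol]
+        if depth > 4:                          # depth limit: force a terminal production
+            options = options[1:]
+        choice = options[next(codons) % len(options)]
+        return ' '.join(decode(genome, s, depth + 1, codons) for s in choice)
+
+    genome = [random.randrange(256) for _ in range(16)]
+    print(decode(genome))                      # e.g. move() ; turn() ; move()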
+",15881,,,,,1/1/2019 12:01,,,,0,,,,CC BY-SA 4.0 +9779,2,,9354,1/1/2019 18:42,,0,,"In the Large-Scale Study of Curiosity-Driven Learning paper (the prequel to the Random Network Distillation work), in their discussion of Random Features, they reference 3 papers that discuss this:
+ +I just briefly glanced over these. For now, one interesting idea from [2] is to use randomly initialized networks for architecture search. To evaluate the architecture for the task, you don't have to train it; you can just randomly initialize it and measure its performance.
+",10985,,,,,1/1/2019 18:42,,,,0,,,,CC BY-SA 4.0 +9781,1,,,1/1/2019 22:07,,0,100,"I want to create a NHL game predictor and have already trained one neural network on game data.
+ +What I would like to do is train another model on player seasonal/game data and combine the two models to archive better accuracy.
+ +Is this approach feasible? If it is, how do I go about doing it?
+ +EDIT:
+ +I have currently trained a neural network to classify the probability of the home team winning a game on a dataset that looks like this:
+ +h_Won/Lost h_metric2 h_metric3 h_metric4 a_metric2 a_metric3 a_metric4 h_team1 h_team2 h_team3 h_team4 a_team1 a_team2 a_team3 a_team4
+ 1 10 10 10 10 10 10 1 0 0 0 0 1 0 0
+ 1 10 10 10 10 10 10 1 0 0 0 0 1 0 0
+ 1 10 10 10 10 10 10 1 0 0 0 0 1 0 0
+
+
+and so on.
+ +I am preparing a dataset of player-data for each game that will have the shape of this:
+ +Player PlayerID Won/Lost team opponent metric1 metric2
+ Henke 1 1 NY CAP 10 10
+
+
+Hopefully, this new dataset will have some accuracy on if team is going to have some predictive features that are good and recognised.
+ +Now, say I have these two trained Nural Networks and they both have an accuracy of 70% by them self. But I want to combine them both in the hopes to achieve better predictability. How is this archived? How will the test-dataset be structured?
+",21077,,21077,,1/1/2019 23:22,1/1/2019 23:22,is it possible to train several Neural Networks on different types of data and combine them?,The term you need is “model ensembles”, that’s the way models are combined. Pretty hard to be more specific since you don’t give a language or any other details.
+",21079,,,,,1/1/2019 23:08,,,,1,,,,CC BY-SA 4.0 +9783,2,,9725,1/2/2019 0:45,,3,,"The Project Summarized
+ +The project goal appears to be a common one: Routing correspondence in an efficient manner to maintain good but low cost customer and public relations. A few features of the project were mentioned.
+ +The requirements for current development were indicated. The current work is to develop an artificial network that places incoming messages into one of two categories accurately and reliably.
+ +Research and development is beginning along reasonable lines.
+ +First Obstacle and Feasibility
+ +The first obstacle encountered is that in QA using production environment data, 90% of the messages where left unclassified, 5% of the classifications were accurate, and the remaining 5% were inaccurately classified.
+ +It is correct that the even split of 5% accuracy and 5% inaccuracy indicates that information learned is not yet transferable to the quality assurance test phase using real production environment messages. In information theory phraseology, no bits of usable information were transferred and entropy remained unchanged on this first experiment.
+ +These kinds of disappointments are not uncommon when first approaching the use of AI in an existing business environment, so this initial outcome should not be taken as a sign that the idea won't work. The approach will likely work, especially with foul language, which is not dependent on cultural references, analogies, or other semantic complexity.
+ +Recognizing notices that are for audit purposes only, from a social network accounts or purchase confirmations, can be handled through rules. The rule creation and maintenance can theoretically be automated too, and some proprietary systems exist that do exactly that. Such automation can be learned using the appropriate training data, but real time feedback is usually employed, and those systems are usually model based. That is an option for further down the R&D road.
+ +The scope of the project is probably too small, but that's not a big surprise either. Most projects suffer from early overoptimism. A pertinent quote from Redford's The Melagro Beanfield War illuminates the practical purpose of optimism.
+ +++ +APPARITION
+ +I don't know if your friend knows what he's in for.
+ +AMARANTE
+ +Nobody would do anything if they knew what they were in for.
+
Initial Comments
+ +It is not necessary to reduce the number of message categories to two, but there is nothing wrong with starting R&D by refining approach and high level design with the simplest case.
+ +The last layer may be more training efficient if a binary threshold is used for the activation function instead of softmax, since there is only one bit of output needed when there are only two categories. This also forces the network training objective to be the definitive selection of a category, which may benefit the overall rate of R&D progress.
+ +There may be ways of improving outcomes by adding more metrics in the code to beyond just 'accuracy'. Others who work with such details every day may have more domain specific knowledge in this regard.
+ +Culture and Pattern Detection
+ +Insults and curse words are entirely different kinds of things. Foul language is a linguistic symbol or phrase that fits into a broadcasting or publishing category of prohibition. The rules of prohibition are well established in most languages and could be held in a configuration file along with the permutations of each symbol or phrase. In the case of sh*t, related forms include sh*tty, sh*thead, and so on.
+ +It is also useful to distinguish the sub-sets of foul language.
+ +The term foul language is a super-set of these.
+ +Distribution Alignment
+ +Learning algorithms and theory are based on probabilistic alignment of feature distributions between training and use. The distribution of training data must closely resembles the distribution found when the trained AI component is later used. If not, the convergence of learning processes on some optimal behavior defined by gain or loss functions may succeed but the execution of that behavior in the business or industry may fail.
+ +Internationalization
+ +Multilingual AI should usually be fully internationalized. Training and use of training with two distinct dialects will almost always perform poorly. That creates a data acquisition challenge.
+ +As stated above, classification and learning depend on the alignment of statistical distributions between data used in training and data processing relying on the use of what was learned. This is also true of human learning, so this requirement will not likely be overcome any time soon.
+ +All these forms of foul language must be programmed flexibly across these cultural dimensions.
+ +Once one of these is included in the model (which will be imperative) then there is no reason why the others cannot be included at little cost, so it is wise to begin with standard dimensions of flexibility. The alternative will likely lead to costly branching complexity to represent specific rules, which could have been made more maintainable by generalizing for international use up front.
+ +Insult Recognition
+ +Insults require comprehension beyond the current state of technology. Cognitive science may change that in the future, but projections are mere conjecture.
+ +Use of a regular expression engine with a fuzzy logic comparator is achievable and may appease the stakeholders of the project, but identifying insults may be infeasible at this time, and the expectations should be set with stakeholders to avoid later surprises. Consider these examples.
+ +The word combinations in these are not likely to be in some data set you can use for training, so Word2Vec will not help in these types of cases. Additional layers may assist with proper handling of the at least some of the semantic and referential complexity of insults, but only some.
+ +Explicit Answers to Explicit Questions
+ +++ +Is it possible to accomplish this task with a neural network?
+
Yes, in combination with excellence in higher level system design and best practices for internationalization.
+ +++ +Is the structure of this neural network correct for this task?
+
The initial experiments look like a reasonable beginning toward what would later be correct enough. Do not be discouraged, but don't expect the first pass at something like this to look much like what passes user acceptance testing a year from now. Experts can't pull that rate of R&D progress off, unless they hack and cobble something together from previous work.
+ +++ +Are 300k messages enough to train the neural network?
+
Probably not. In fact, 300m messages will not catch all combinations of cultural references, analogies, colloquialisms, variations in dialect, plays on words, and games that spammers play to avoid detection.
+ +What would really help is a feedback mechanism so that production outcomes are driving the training rather than a necessarily limited data set. Canned data sets are usually restricted in the accuracy of their probabilistic representation of social phenomena. None will likely infer dialect and other locale features to better detect insults. A Parisian insult may have nothing in common with a Creole insult.
+ +The feedback mechanism must be based on impressions in some way to become and remain accurate. The impressions must be labelled with all the locale data that is reasonably easy to collect and possibly correlated to the impression.
+ +This implies the use of rules acquisition, fuzzy logic control, reinforcement learning, or the application of naive Bayesian approaches somewhere appropriate within the system architecture.
+ +++ +Do I need to clean up the data from uppercase, special characters, numbers etc?
+
Numbers can be relevant. Because of historical events and religious texts, 13 and 666 might be indications of something offensive, respectively. One can also use numbers and punctuation to convey word content. Here are some examples of spam detection resistant click bait.
+ +The meaning of the term special character is vague and ambiguous. Any character in UTF-8 is legitimate for almost all Internet communications today. HTML5 provides additional entities beginning with an ampersand and ending with a semicolon. (See https://dev.w3.org/html5/html-author/charref.)
+ +Filtering these out is a mistake. Spammers leverage these standards to penetrate spam detection. In this example, the stroke similarities of a capital ell (L) and those of the British pound symbol can be exploited to produce spam detection resistant click bait.
+ +Removing special characters that fit within the Internet standards of UTF-8 and HTML entities will likely lead to disaster. It is recommended not to follow that part of the predecessor's design.
+ +Regarding emoticons and other ideograms, these are linguistic elements that may represent in text encoding the volume, pitch, or tone modulation of phonetics, or they may represent face or body language. In many languages ideograms are used in place of words. For a global system running in parallel with the blogsphere, emoticons are part of linguistic expression.
+ +For that reason, they are not significantly different than word roots, prefixes, suffixes, conjugations, or word pairs as linguistic elements which can also express emotion as well as logical reasoning. For the learning algorithm to learn categorization behavior in the presence of ideograms, the ideograms must remain in training features and later in real time processing of those features using the results of training.
+ +Additional Information
+ +Some additional information is covered in this existing post: Spam Detection using Recurrent Neural Networks.
+ +Since spam detection is closely related to fraud detection, the spammer fraudulently acting like a relationship already exists with their recipients, this existing post may be of assistance too: Can we implement GAN (Generative adversarial neural networks) for classication problem like Fraud detecion?
+ +Another resource that may help is this: https://www.tensorflow.org/tutorials/representation/word2vec
+",4302,,4302,,1/4/2019 0:04,1/4/2019 0:04,,,,0,,,,CC BY-SA 4.0 +9784,2,,9677,1/2/2019 5:28,,1,,"I've discovered Doc2Vec which does something similar to what I am trying to accomplish. This doesn't exactly answer my question of why the network I was trying to build doesn't work, but at least it shows how indexed outputs can be pulled from a network, with open source to show how it is built.
+ +https://datascience.stackexchange.com/questions/23969/sentence-similarity-prediction
+",20930,,,,,1/2/2019 5:28,,,,0,,,,CC BY-SA 4.0 +9786,1,9788,,1/2/2019 8:58,,2,475,"I'm currently working on a regression problem and I have 10 inputs/attributes.
+ +What should I do if there are correlations between different features of the input data? Does the correlation between inputs affect the performance (e.g. accuracy) of the model?
+",21084,,2444,,8/19/2019 22:43,8/19/2019 22:43,Does the correlation between inputs affect the model performance?,Non-correlation does not imply independence, that is, if two features are not correlated (i.e. zero correlation), it does not mean that they are independent. But (non-zero) correlation implies dependence (see https://stats.stackexchange.com/q/113417/82135 for more details). So, if you have non-zero correlation between two features, it means they are dependent. If they are dependent, then one feature gives you information about the other and vice-versa: in a certain way, one of the two is, at least partially, redundant.
+ +Unnecessary features might not affect the performance (e.g. the accuracy) of a model. However, if you reduce the number of features, the learning process might actually be faster.
+ +You may want to try some dimensionality reduction technique, in order to reduce the number of features.
+",2444,,2444,,6/4/2019 0:46,6/4/2019 0:46,,,,0,,,,CC BY-SA 4.0 +9790,2,,5308,1/2/2019 17:46,,0,,"Algorithms can learn to cheat:
+++",1671,,-1,,6/17/2020 9:57,1/2/2019 17:46,,,,0,,,,CC BY-SA 4.0 +9794,1,9801,,1/2/2019 20:05,,0,94,""A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.”
+"...a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new." +
+
Source: This clever AI hid data from its creators to cheat at its appointed task (TechCrunch)
I’m a researcher and I’m currently conducting a research project. I will conduct a study where I would like to trigger different emotions using chatbots on a smartphone (e.g. on Facebook Messenger).
+ +Are there any existing chatbots which are able to trigger different emotions intentionally (also negative ones)?
+",21103,,,,,1/2/2019 23:30,Chatbots triggering emotions,Check out source code to DeepImagePrior it does a remarkable job guessing what's missing to repair images with a variety of damage.
+",3370,,,,,1/2/2019 20:09,,,,0,,,,CC BY-SA 4.0 +9797,2,,7222,1/2/2019 20:33,,0,,"This is a huge growth area in the impact of AI on HR -- see all the companies we've found that do candidate matching for instance (disclaimer I work for CognitionX). Under the hood, there are techniques that don't rely on vocabulary such as Facebook's FastText but need more training data.
+ +Here are some other resources
+Job matching using unsupervised learning (k-nearest neighbour)
+ see paper
-Artificial chemistry and the origins of life +-Self-assembly, growth, and development +-Self-replication and self-repair +-Systems and synthetic biology +-Perception, cognition, and behavior +-Embodiment and enactivism +-Collective behaviors of swarms +-Evolutionary and ecological dynamics +-Open-endedness and creativity +-Social organization and cultural evolution +-Societal and technological implications +-Philosophy and aesthetics +-Applications to biology, medicine, business, education, or entertainment
+ +See: Artificial Life Forum (MIT Press) | International Society for Artificial Life
+",1671,,1671,,1/2/2019 21:44,1/2/2019 21:44,,,,0,,,,CC BY-SA 4.0 +9800,4,,,1/2/2019 21:44,,0,,For question about artificial systems that exhibit the behavioral characteristics of natural living systems.,1671,,1671,,1/2/2019 21:44,1/2/2019 21:44,,,,0,,,,CC BY-SA 4.0 +9801,2,,9794,1/2/2019 23:30,,1,,"Emotions can of course be triggered by lots of different things. I think the most rich source could well be socialbots like Mitsuku.com and Zo.ai -- Steve Worswick is the owner of Mitsuku and may be interested in helping you by doing (appropriately filtered) chat log queries. You can get him on Twitter at @Mitsuku.
+",3370,,,,,1/2/2019 23:30,,,,5,,,,CC BY-SA 4.0 +9802,2,,9766,1/3/2019 0:17,,0,,"Simple answer is tweaking an image in unnoticeable ways that completely fool software. Eg a cat that is identified as 99% likely ""to be guacamole"" https://mashable.com/2017/11/02/mit-researchers-fool-google-ai-program/#CU7dSAfQ5sqY
+",3370,,,,,1/3/2019 0:17,,,,0,,,,CC BY-SA 4.0 +9808,1,,,1/3/2019 14:20,,7,2542,"I understand the minimax algorithm, but I am unable to understand deeply the minimax algorithm with alpha-beta pruning, even after having looked up several sources (on the web) and having tried to read the algorithm and understand how it works.
+ +Do you have a good source that explains alpha-beta pruning clearly, or can you help me to understand the alpha-beta pruning (with a simple explanation)?
+",21125,,2444,,2/26/2019 9:40,2/26/2019 9:40,Can someone help me to understand the alpha-beta pruning algorithm?,In the DQN paper, it is written that the state-space is high dimensional. I am a little bit confused about this terminology.
+Suppose my state is a high dimensional vector of length $N$, where $N$ is a huge number. Let's say I solve this task using $Q$-learning and I fix the state space to $10$ vectors, each of $N$ dimensions. $Q$-learning can easily work with these settings as we need only a table of dimensions $10$ x number of actions.
+Let's say my state space can have an infinite number of vectors each of $N$ dimensions. In these settings, Q-learning would fail as we cannot store Q-values in a table for each of these infinite vectors. On the other hand, DQN would easily work, as neural networks can generalize for other vectors in the state-space.
+Let's also say I have a state space of infinite vectors, but each vector is now of length $2$, i.e., small dimensional vectors. Would it make sense to use DQN in these settings? Should this state-space be called high dimensional or low dimensional?
+",21131,,2444,,2/3/2021 19:09,2/4/2021 11:00,What is a high dimensional state in reinforcement learning?,This is a question related to Neural network to detect "spam"?. +I'm wondering how it would be possible to handle the emotion conveyed in text. In informal writing, especially among a juvenile audience, it's usual to find emotion expressed as repetition of characters. For example, ""Hi"" doesn't mean the same as ""Hiiiiiiiiiiiiiii"" but ""hiiiiii"", ""hiiiiiiiii"", and ""hiiiiiiiiii"" do.
+ +A naive solution would be to preprocess the input and remove the repeating characters after a certain threshold, say, 4. This would probably reduce most long ""hiiiii"" to 4 ""hiiii"", giving a separate meaning (weight in a context?) to ""hi"" vs ""long hi"".
+ +The naivete of this solution appears when there are combinations. For example, +haha vs hahahahaha or lol vs lololololol. Again, we could write a regex to reduce lolol[ol]+ to lolol. But then we run into the issue of hahahaahhaaha where a typo broke the sequence.
+ +There is also the whole issue of Emoji. Emoji may seem daunting at first since they are special characters. But once understood, emoji may actually become helpful in this situation. For example, 😂 may mean a very different thing than 😂😂😂😂😂, but 😂😂😂😂😂 may mean the same as 😂😂😂😂 and 😂😂😂😂😂😂.
+ +The trick with emojis, to me, is that they might actually be easier to parse. Simply add spaces between 😂 to convert 😂😂😂😂 to 😂 😂 😂 😂 in the text analysis. I would guess that repetition would play a role in training, but unlike ""hi"", and ""hiiii"", Word2Vec won't try to categorize 😂 and 😂😂 as different words (as I've now forced to be separate words, relying in frequency to detect the emotion of the phrase).
+ +Even more, this would help the detection of ""playful"" language such as 😠😂😂😂, where the 😠 emoji might imply there is anger, but alongside 😂 and especially when repeating 😂 multiple times, it would be easier for a neural network to understand that the person isn't really angry.
+ +Does any of this make sense or I'm going in the wrong direction?
+",17272,,,,,2/28/2020 1:02,Handling emotion in informal text (Hi vs HIIIIII!!!!)?,Yes, it makes sense to use DQN in state space with small number of dimensions as well. It doesn't really matter how big your state dimension is, but if you have state with 2 dimensions for instance you wouldn't use convolutional layers in your neural net like its used in the paper you mentioned, you can use ordinary fully connected layers, it depends on the problem.
+",20339,,,,,1/3/2019 18:01,,,,0,,,,CC BY-SA 4.0 +9816,1,,,1/3/2019 20:09,,0,869,"I am trying to implement a Deep Q Network to play Asteroids. Unfortunately, I am not sure how to calculate the Q value exactly, if I am exploring. For example, the agent is exploring for 1 second (otherwise makes no sense; I cannot let it just explore one step). Unfortunately, it makes a mistake at 0.99s, and the reward collapses.
+ +At the moment, I am using the following formula to evaluate or update the Q value:
+ +$$Q_{new,t} = reward + \gamma Q_{max,t+1}$$
+ +But how do I know the max Q value of the next step? I could consider the best Q value the network says, but this is not necessarily true.
+ +You can see the current implementation at the following URL: +https://github.com/SuchtyTV/RLearningBird/blob/master/src/main/java/rlgame/Brain.java.
+",19062,,2444,,2/16/2019 0:14,2/16/2019 0:14,How do I update the Q values of a Deep Q Network when exploring?,My understanding is the input neurons seem to seem to compute a weighted sum moving from one layer to another.
+ +
+$$ \sum_i a_i w_i = a'_{k} $$
But to compute this weighted sum the sum must be discrete. Is there any known method to compute the sum when the activation is a continuous function? Is the below formula of any consequence problems in artificial intelligence? Can anyone give a specific problem where it might be useful?
+ +Let $b_r = \sum_{d \mid r} a_d\mu(\frac{m}{d})$. We prove that if the $b_r$'s are small enough, the result is true (where $\mu$ is the mobius function).
+ +++ +Claim: If $\lim_{n \to \infty} \frac{\log^2(n)}{n}\sum_{r=1}^n |b_r| = 0$ and $f$ is smooth, then $$\lim_{k \to \infty} \lim_{n \to \infty} \sum_{r=1}^n a_rf\left(\frac{kr}{n}\right)\frac{k}{n} = \left(\lim_{s \to 1} \frac{1}{\zeta(s)}\sum_{r=1}^\infty \frac{a_r}{r^s}\right)\int_0^\infty f(x)dx.$$
+
I will not go into the proof of this over but for those who are interested: https://math.stackexchange.com/questions/2888976/a-rough-proof-for-infinitesimals I will merely state what the formula means:
+ +Consider we have a curve $f(x)$ now if one wishes to perform a weighted sum in the limiting case of this function.
+ + + +Consider the curve $f(x)$. Then splitting it to $k/n= h$ intervals then adding the first strip ($d_1$ times): $ f(h) \cdot d_1$. Then the second strip ($d_2$ times) $ f(2h) \cdot d_2$ times ... And so on . Hence. $d_r$ can be thought of as the weight at $f(rh)$.
+",21136,,21136,,3/25/2020 16:52,3/25/2020 16:52,Method to compute the sum when the activation is a continuous function?,These kinds of repetitions in text can place recurrence demands on learning algorithms that may or may not be handled without special encoding.
+ +These have the same meaning on one level, but different emotional content and therefore different correlations to categories when detecting the value of an email, which in the simplest case is the placement of a message in one of two categories.
+ +This is colloquially called spam detection, although not all useless emails are spam and some messages sent by organizations that broadcast spam may be useful, so technically the term spam is not particularly useful. The determinant should usually be the return on investment to the recipient or the organization receiving and categorizing the message.
+ +++ +Is reading the message and potentially responding likely of greater value than the cost of reading it?
+
That is a high level paraphrase of what the value or cost function must represent when AI components are employed to learn about or track close to (in continuous learning) some business or personal optimality.
+ +The question proposes a normalization scheme that truncates long repetitions of short patterns in characters, but truncation is necessarily destructive. Compression of some type that will both preserve nuance and work with the author's use of Word2Vec is a more flexible and comprehensive approach.
+ +In the case of playful sequences of characters it is anthropomorphic to imagine that an artificial network will understand playfulness or anger, however existing learning devices can certainly learn to use character sequences that humans would call playful or angry in the function that emerges to categorize the message containing them. Just remember that model free learning is not at all like cognition, so the term understanding is placing an expectation on the mental capacities of the AI component that the AI component may not possess.
+ +Since no indication that a recurrent or recursive network will be used but rather the entire message is represented in a fixed width vector, so the question becomes which of these two approaches will produce the best outcomes after learning.
+ +This second approach produces reasonable behavior with other cases mentioned, such as ""😠😂😂😂"" pre-processed into ""😠😂 [2x😂]"". What the algorithm in Word2Vec will do with each of these two choices and how its handling of them will affect outcomes is difficult to predict. Experiments must be run. Three things are advisable courses of action.
+ +For tabular Q-learning, the q-values for state s and action a are updated according to
+ +$$ +Q(s, a) \gets Q(s, a) + \alpha [(r + max_{a'} Q(s', a')) - Q(s,a)] +$$
+ +where $\alpha$ is the learning rate and $(r + max_{a'} Q(s', a')) - Q(s,a)$ is the difference between the current estimate of the q-value, $Q(s,a)$, and the target, $r + max_{a'} Q(s', a')$.
+ +The target q-value is based on the greedy policy, not the exploratory policy. Q-learning is theoretically guaranteed to converge to the optimal policy for any behavior policy (like $\epsilon$-greedy) that is guaranteed to visit every state and action pair an infinite number of times. See Section 6.5 of the Sutton and Barto book for more details.
+ +In contrast to Q-learning, the target q-value for SARSA is $r + Q(s', a')$, where $a'$ is chosen from an exploratory behavior policy like $\epsilon$-greedy. For SARSA the learned q-values are dependent on the behavior policy and therefore not guaranteed to converge to the optimal policy. A behavior policy that intentionally acted randomly for multiple consecutive actions, as in your example Asteroids exploratory policy, would likely lead to learning different q-values than would be learned for an $\epsilon$-greedy behavior policy.
+ +Unfortunately Q-learning's theoretical guarantees of convergence to an optimal policy go out the window when nonlinear function approximation is introduced, as is the case for deep neural networks. Nevertheless, in the Deep Q-Networks paper, the q-value function is updated using a target value based on the maximum q-value for the next state. Specifically, if $Q(s, a, w)$ is a q-value function parameterized by weights $w$, then the weights are updated by
+ +$$ +w \gets w + \alpha [(r + max_{a'} Q(s', a', w^-)) - Q(s, a, w)] \nabla_w Q(s,a,w) +$$
+ +where $w^-$ are the parameters of the target network used to stabilize training. (See the paper for more details). This update rule is chosen to minimizes the loss function
+ +$$ +L(w) = E[(r + max_{a'} Q(s', a', w^-)) - Q(s, a, w)]^2 +$$
+ +For your own implementation, it may be helpful to see a code example of the Deep Q-Networks parameter updates. A tensorflow implementation is available in the function build_train
in the OpenAI Baselines DeepQ code.
I am looking for an non-ML method for two chat bots to communicate to each other about a specific topic. I am looking for an ""explainable AI"" method, as opposed to a ""black-box"" one (like a neural network).
+",20378,,2444,,5/1/2019 17:03,5/1/2019 17:03,How do I create chatbots without machine learning?,The easiest non-ML way would be to use a finite state machine. You could model various states of your conversation topics, and certain utterances of your bots could advance the bot's internal model along different paths. The complexity depends on the complexity of the topic.
+ +You can then enhance the transitions with probabilities, and later move on towards ML by transforming it into an HMM.
+ +However, even simple topics will probably lead to fairly complex state machines. But you should be able to keep track of what is going on in your conversation nevertheless.
+ +Update: just to make it a bit clearer, I was thinking along the lines of having states for particular stages in the conversation. You could either have one model for the whole conversation, or one per participant.
+ +Initially, there would be a state 'greeting'. Possible transitions would be to a further state 'greeting' (the response of the person who has been greeted), or that could be skipped to states such as 'statement', 'question', etc. 'Question' would have transitions to 'answer', 'ignore question', 'counter/clarification question' etc. The level of detail depends on your application.
+",2193,,2193,,1/7/2019 9:26,1/7/2019 9:26,,,,2,,,,CC BY-SA 4.0 +9826,2,,9808,1/4/2019 12:43,,2,,"Suppose that you have already search a part of the complete search tree, for example the complete left half. This may not yet give you the true game-theoretic value for the root node, but it can already give you some bounds on the game-theoretic value that the player to play in the root node (let's say, the max player) can guarantee by moving into that part of the search tree. Those bounds / guarantees are:
+ +The intuitive idea behind alpha-beta pruning is to prune chunks of the search tree that become uninteresting for either player because they already know they can guarantee better based on the $\alpha$ or $\beta$ bounds.
+ +For a simple example, suppose $\alpha = 1$, which means that the maximizing player already has explored a part of the search tree such that it can guarantee at least a value of $1$ by playing inside that part (the minimizing player has no options inside that entire tree to reduce the value below $1$, if the maximizing player plays optimally in that part).
+ +Suppose that, in the current search process, we have arrived at a node where the minimizing player is to play, and it has a long list of child nodes. We evaluate the first of those children, and find a value of $0$. This means that, under the assumption that we reach this node, the minimizing player can already guarantee a value of $0$ (and possibly get even lower, we didn't evaluate the other children yet). But this is worse (for the maximizing player) than the $\alpha = 1$ bound we already had. Without evaluating any of the other children, we can already tell that this part of the search tree is uninteresting, that the maximizing player would make sure that we never end up here, so we can prune the remaining children (which could each have large subtrees below them).
+",1641,,,,,1/4/2019 12:43,,,,2,,,,CC BY-SA 4.0 +9828,1,9942,,1/4/2019 13:39,,9,3467,"There are several activation functions, such as ReLU, sigmoid or $\tanh$. What happens when I mix activation functions?
+ +I recently found that Google has developed Swish activation function which is (x*sigmoid). By altering activation function can it increase accuracy on small neural network problem such as XOR problem?
+",21143,,2444,,5/15/2019 15:16,12/15/2022 20:40,What happens when I mix activation functions?,I am not sure if I understood the q learning algorithms correctly. +Therefore I would give a concrete example and ask if someone can tell me how to update the q value correctly.
+ +First I initialized a Neural Network with random weights. It shall henceforth evaluate the Q Value for all possible actions(4) given a State S.
+ +Then the following happens. The agent is playing and is exploring. +For 3 steps the Q Values evaluated were: +(0,-1,-5,0), (0,-1,0,0), (0,-.6,0,0)
+ +The reward given was: 0,0,1 +The action took were: (1.,1.,1.) +In the random walk example (same reward given), it was: (1.,2.,3.)
+ +So what are the new Q - Values, assuming a discount factor of 0.99 and the learning rate 0.1?
+ +The States for Simplicity are only one number: 1,1.3,2.4 Where 2.4 is the state who ends the game...
+ +The same example holds for exploiting. Is the algorithm the same here?
+ +Here you see my last implementation:
+ + public void rlearn(ArrayList<Tuple> tupels, double learningrate, double discountfactor) {
+
+ //newQ = sum of all rewards you have got through
+ for(int i = tupels.size()-1; i > 0; i--) {
+ MLData in = new BasicMLData(45);
+ MLData out = new BasicMLData(5);
+
+ //Add State as in
+ int index = 0;
+ for(double w : tupels.get(i).statefirst.elements) {
+ in.add(index++, w);
+ }
+
+ //Now start updating Q - Values
+ double qnew = 0;
+ if(i <= tupels.size()-2){
+ qnew = tupels.get(i).rewardafter + discountfactor*qMax(tupels.get(i+1));
+ } else {
+ qnew = tupels.get(i).rewardafter;
+ }
+
+ tupels.get(i).qactions.elements[tupels.get(i).actionTaken] = qnew;
+ //Add Q Values as out
+ index = 0;
+ for(double w : tupels.get(i).qactions.elements) {
+ out.add(index++, w);
+ }
+ bigset.add(in, out);
+ }
+}
+
+
+Edit: This is the qMax - function:
+ + private double qMax(Tuple tuple) {
+ double max = Double.MIN_VALUE;
+ for(double w : tuple.qactions.elements) {
+ if(w > max) {
+ max = w;
+ }
+ }
+ return max;
+}
+
+",19062,,19062,,1/5/2019 10:58,1/7/2019 21:44,Concrete Example for Q Learning,Usually when people write about having a high-dimensional state space, they are referring to the state space actually used by the algorithm.
+ +++ +Suppose my state is a high dimensional vector of $N$ length where $N$ is a huge number. Let's say I solve this task using $Q$-learning and I fix my state space to $10$ vectors each of $N$ dimensions. $Q$-learning can easily work with these settings as we need only a table of dimensions $10$ x number of actions.
+
In this case, I'd argue that the ""feature vectors"" of length $N$ are quite useless. If there are effectively only $10$ unique states (which may each have a very long feature vector of length $N$)... well, it seems like a bad idea to make use of those long feature vectors, just using the states as identity (i.e. a tabular RL algorithm) is much more efficient. If you end up using a tabular approach, I wouldn't call that a high-dimensional space. If you end up using function approximation with the feature vectors instead, that would be a high-dimensional space (for large $N$).
+ +++ +Let's also say I have a state space of infinite vectors but each vector is now of length $2$ i.e. very small dimensional vectors. Would it make sense to use DQN in these settings ? Should this state-space be called high dimensional or low dimensional ?
+
This would typically be referred to as having a low-dimensional state space. Note that I'm saying low-dimensional. The dimensionality of your state space / input space is low, because it's $2$ and that's typically considered to be a low value when talking about dimensionality of input spaces. The state space may still have a large size (that's a different word from dimensionality).
+ +As for whether DQN would make sense in such a setting.. maybe. With such low dimensionality, I'd guess that a linear function approximator would often work just as well (and be much less of a pain to train). But yes, you can use DQN with just 2 input nodes.
+",1641,,,,,1/4/2019 16:20,,,,2,,,,CC BY-SA 4.0 +9832,2,,9819,1/4/2019 16:40,,2,,"In terms of the normal use cases for machine learning, the equation does not have much utility, because:
+ +++ +Consider we have a curve $f(x)$ now if one wishes to . . .
+
In most AI problems, we don't usually have such a curve as input that can be treated analytically. For instance, there is no such input curve to describe a natural image received by a sensor.
+ +In the vast majority of cases in AI problems, the form of inputs - whether it is images, text, mapping data, robotic telemetry, is going to be multi-dimensional discrete samples from highly complex functions where we don't know the analytical form and can only construct approximations from a set of basis functions. The resulting combination of basis functions could be treated as continuous, integrated etc, but as it would have been constructed from discrete data, the end result would be a lot of computation to end up with something probably less accurate than working direct with the discrete samples. In a lot of cases, the raw data is discrete by definition (e.g. whether someone clicked on a link or replied to a message), so the form of $f(x)$ would be discrete by definition of the problem, and calculus not really applicable.
+ +There might be some interesting use cases in analog signal processing. Using your formula or a variation of it for instance it should be possible to create an analog neural network learning system - a robotic brain that worked with continuous signals and only contained analog components. Such a system would have some interesting advantages - speed of processing, probably low power consumption compared to digital approach, but the accuracy and precision might be lower. E.g. imagine a bot that could steer towards/away from light sources (compared to a digital one that might recognise the faces of people and steer towards/away from them). I would not be at all surprised to find that someone had done just that already, although I am not sure how to search for it.
+",1847,,1847,,1/4/2019 16:45,1/4/2019 16:45,,,,0,,,,CC BY-SA 4.0 +9833,2,,3879,1/4/2019 17:02,,2,,"The error in the code is simply having a $+$ rather than a $-$ sign. Line 4 of the algorithm says:
+ +$$E\left[ g^2 \right]_t = \rho E\left[ g^2 \right]_{t - 1} + (1 - \rho) g_t^2,$$
+ +but your code implements (note the $+$ inside the brackets at the end):
+ +$$E\left[ g^2 \right]_t = \rho E\left[ g^2 \right]_{t - 1} + (1 + \rho) g_t^2.$$
+ +A correct implementation, with only that minor change, would be:
+ +import math
+
+Eg = Ex = 0
+p = 0.95
+e = 1e-6
+x = 1
+history = [x]
+
+for t in range(100):
+ g = 2*x
+ Eg = p*Eg + (1-p)*g*g
+ Dx = -(math.sqrt(Ex + e) / math.sqrt(Eg + e)) * g
+ Ex = p*Ex + (1-p)*Dx*Dx
+ x = x + Dx
+ history.append(x)
+
+print(history)
+
+
+On my end, that code leads to a value of approximately $0.597$. It looks like, with these hyperparameters, you'll need more like 400 or 500 iterations to get really close to $0$, but it steadily gets there.
+ +++ +For example, the paper claims that the update step Δx will have the same unit as x, if x has some hypothetical unit. While this is probably a desireable property, it is as far as I'm concerned not true, since the premise that RMS[Δx] has the same unit as x is incorrect to begin with, since RMS[Δx]_0 = sqrt(E[Δx]_0 + ϵ) = sqrt(0 + ϵ) which is a unitless constant, so all Δx become unitless rather than having the same unit as x. (Correct me if I'm wrong.)
+
Suppose that we use the symbol $u$ for this hypothetical unit that $x$ has. Line 5 of the algorithm says:
+ +\begin{aligned} +\Delta x_t &= - \frac{\text{RMS}\left[ \Delta x \right]_{t-1}}{\text{RMS} \left[ g \right]_t} g_t\\ +&= - \frac{\sqrt{E\left[ \Delta x^2 \right]_{t-1} + \epsilon}}{\sqrt{E\left[ g^2 \right]_{t} + \epsilon}} g_t. +\end{aligned}
+ +We can get rid of the $\epsilon$ terms, their addition does not change the unit of whatever they are added to:
+ +$$- \frac{\sqrt{E\left[ \Delta x^2 \right]_{t-1}}}{\sqrt{E\left[ g^2 \right]_{t}}} g_t.$$
+ +As you stated correctly, for the very first iteration, we have $E\left[ \Delta x^2 \right]_{t-1} = 0$. Technically in the very first iteration it could have any unit we like (or no unit at all), based on whatever unit we choose to assign to the $0$ constant it is initially set to. Let's just say we assign it the unit $u^2$ (by saying that that is the unit of the $0^2$ constant we initialize it to). This is convenient because it allows us to immediately figure out the unit in all cases rather than just the $t = 0$ case, this is the unit that it has to have if we also still want things to work out for $t > 0$.
+ +The gradient $g_t$ has a unit $\frac{1}{u}$, which means that $E[g^2]_t$ has a unit $\frac{1}{u^2}$, and $\sqrt{E[g^2]_t}$ then again has the unit $\frac{1}{u}$. If we replace all the quantities by their units, we then get:
+ +$$\frac{u}{\frac{1}{u}} \times \frac{1}{u} = u.$$
+",1641,,1641,,1/6/2019 19:06,1/6/2019 19:06,,,,4,,,,CC BY-SA 4.0 +9834,1,,,1/4/2019 18:46,,3,302,"What are the strengths of the Hierarchical Temporal Memory model compared to competing models such as 'traditional' Neural Networks as used in deep learning? And for those strengths are there other available models that aren't as bogged down by patents?
+",21155,,2444,,7/28/2019 10:05,7/28/2019 14:28,What are the strengths of the Hierarchical Temporal Memory model compared to competing models?,After reading an excellent BLOG post Deep Reinforcement Learning: Pong from Pixels and playing with the code a little, I've tried to do something simple: use the same code to train a logical XOR gate.
+ +But no matter how I've tuned hyperparameters, the reinforced version does not converge (gets stuck around -10). What am I doing wrong? Isn't it possible to use Policy Gradients, in this case, for some reason?
+ +The setup is simple:
+ +The code (forked from original and with minimal modifications) is here: https://gist.github.com/Dimagog/de9d2b2489f377eba6aa8da141f09bc2
+ +P.S. Almost the same code trains XOR gate with supervised learning in no time (2 sec).
+",20941,,20941,,12/3/2019 21:43,12/3/2019 21:43,How to train a logical XOR with reinforcement learning?,I'm learning machine learning by looking through other people's kernel on Kaggle, specifically this Mushroom Classification kernel.
+The author first applied PCA to the transformed indicator matrix. He only used 2 principal components for visualization later. Then I checked how much variance it has maintained, and found out that only 16% variance is maintained.
+in [18]: pca.explained_variance_ratio_.cumsum()
+out[18]: array([0.09412961, 0.16600686])
+
+But the test result with 90% accuracy suggests it works well.
+If variance stands for information, then how can the ML model work well when so much information is lost?
+",21166,,2444,,4/1/2021 11:15,8/19/2023 19:05,Why does PCA work well while the total variance retained is small?,Most Deep Q-learning implementations I have read are based on Deep Q-Networks (DQN). In DQN, the q-value network maps an input state to a vector of q-values, one for each action:
+ +$$ +Q(s, \mathbf{w}) \to \mathbf{v} +$$
+ +where $s$ is the input state from the environment, $\mathbf{w}$ are the parameters of the neural network, and $\mathbf{v}$ is a vector of q-values, where $v_i$ is the estimated q-value of the ith action. In the Sutton and Barto book, the q-value function is written as $Q(s, a, \mathbf{w})$, which corresponds to the network output for action $a$.
+ +Unlike tabular Q-learning, Deep Q-learning updates the parameters of the the neural network according to the gradients of the loss function with respect to the parameters. DQN uses the loss function
+ +$$ +L(\mathbf{w}) = [(r + \gamma max_{a'} Q(s', a', \mathbf{w^-})) - Q(s, a, \mathbf{w})]^2 +$$
+ +where $\gamma$ is the discount rate, $a$ is the selected action (either greedily or randomly for an $epsilon$-greedy behavior policy), $s'$ is the next state, $a'$ is the argmax action for the next state, and $\mathbf{w^-}$ is an older version of the network weights $\mathbf{w}$ that is used to help stabilize training.
+ +In deep Q-learning, training directly updates parameters, not q-values. Parameters are updated by taking a small step in the direction of the gradient of the loss function
+ +$$ +\mathbf{w} \gets \mathbf{w} + \alpha [(r + \gamma max_{a'} Q(s', a', \mathbf{w^-})) - Q(s, a, \mathbf{w})] \nabla_w Q(s, a, \mathbf{w}) +$$
+ +where $\alpha$ is the learning rate.
+ +In frameworks like tensorflow or pytorch the derivative is calculated automatically by giving the loss function and model parameters directly to an optimizer class which uses some variation of mini-batch gradient descent. In eagerly executed tensorflow updating the parameters for a mini-batch might look something like
+ +batch = buffer.sample(batch_size)
+observations, actions, rewards, next_obervations = batch
+
+with tf.GradientTape() as tape:
+ qvalues = model(observations, training=True)
+ next_qvalues = target_model(next_obervations)
+ # r + max_{a'} Q(s', a') for the batch
+ target_qvalues = rewards + gamma * tf.reduce_max(next_qvalues, axis=-1)
+ # Q(s, a) for the batch
+ selected_qvalues = tf.reduce_sum(tf.one_hot(actions, depth=qvalues.shape[-1]) * qvalues, axis=-1)
+ loss = tf.reduce_mean((target_qvalues - selected_qvalues)**2)
+
+grads = tape.gradient(loss, model.variables)
+optimizer.apply_gradients(zip(grads, model.variables))
+
+
+Though I am not familiar with the Encog neural network framework you are using, based on the example Brain.java
file from your Github repo and Chapter 5 of the Encog User Manual and the Encog neural network examples on Github it looks like weights are updated as follows:
Propagation
instance, train
, is constructed with a network and training set. Different subclasses of Propagation
use different loss functions to update the network parameters.train.iterate()
is called to run the network on the inputs, calculate the loss between the network outputs and target outputs, and update the weights according to the loss.For DQN, a training set is constructed from a random sample from the experience replay buffer to help stabilize training. A training set could also be the trajectory of an episode, which is what the tupels
argument in the example code of the question appears to be.
The input would be the statefirst
member of each element of tupels
. Since the network produces a vector of q-values, the target output must also be a vector of q-values.
The target output element for the selected action is $r + \gamma max_{a'} Q(s', a', \mathbf{w^-})$, In the example code of the question, this is
+ +double qnew = 0;
+if(i <= tupels.size()-2){
+ qnew = tupels.get(i).rewardafter + discountfactor*qMax(tupels.get(i+1));
+} else {
+ qnew = tupels.get(i).rewardafter;
+}
+tupels.get(i).qactions.elements[tupels.get(i).actionTaken] = qnew
+
+
+The target output elements for actions that were not selected should be $Q(s, b, \mathbf{w})$, where $b$ is one of the non-selected actions. This should have the effect of ignoring the q-values of non-selected actions by making the network output equal to the target output.
+ +So what are the new Q - Values, assuming a discount factor of 0.99 and the learning rate 0.1?
+ +Assuming you mean target outputs by the new Q - Values, and given the trajectory of actions, (1, 1, 1)
, and q-value vectors from the question, the concrete target outputs are (0, 0 + 0.99 * 0, -5, 0)
, (0, 0 + 0.99 * 0, 0, 0)
, and (0, 1 + 0, 0, 0)
.
Because it selects both Xtrain
and Xtest
from the space of two selected principal components. Hence, the 90% accuracy is in that 2-D selected space.
This fact that the ratio in PCA stands the information, depends on the distribution of the data and it's not true at all.
+",4446,,4446,,1/5/2019 10:43,1/5/2019 10:43,,,,0,,,,CC BY-SA 4.0 +9848,1,9851,,1/5/2019 10:56,,1,266,"Let's say I have a $2 \times 2$ pixel of grayscale picture, where there is one edge such that the left pixel contains a value, 30, and the right pixels contain a value 0 (in red below). And for edge detection I have zero-padded the input image and then used the Sobel vertical filter to find out the vertical edges and apply ReLU to the output. The output is a $2 \times 2$ matrix with all pixel values $0$. So that should mean there is no edge in the picture whereas in actual case it has one. Where am I going wrong?
+ + +",21172,,2444,,6/5/2019 21:40,6/5/2019 21:42,Understanding the application of Sobel kernel followed by ReLU to a zero-padded image,Consider the following loss function
+ +$$ +L(\mathbf{w}) = [(r + \gamma max_{a'} Q(s', a', \mathbf{w^-})) - Q(s, a, \mathbf{w})]^2 +$$
+ +where $Q(s, a, \mathbf{w^-})$ and $Q(s, a, \mathbf{w})$ are represented as neural networks, where $w^-$ and $w$ are the corresponding weights.
+ +But how do you calculate $max_{a'} Q(s', a', \mathbf{w^-})$? Do you really need to hold always an older version of the network? If yes, why and how old should it be?
+",19062,,2444,,2/15/2019 20:21,2/15/2019 20:21,"How do I calculate $max_{a′}Q(s′,a′,w−)$ when it is represented as a neural network?",I assume that you would like to use convolution with padding with the same size of output matrix as this picture. You messed up calculations with convolution with ""full padding"". If we imagine these two matrices as windows that slide on each other, you can see that the filter is symmetrically inverted. I used a little bit different filter to show you in a better way how it works (I changed the last row to [-3, 0 3]).
+ +Assuming these matrices:
+ + + +You should add to your picture matrix two rows and columns of zeros:
+ + + +Then you can start matrix multiplication, but notice that the filter is symmetrically inverted. The result shown as matrix $4 \times 4$ is the convolution with ""full padding"". The $2 \times 2$ matrix in the middle is a result of convolution with the ""same padding"", that you requested.
+ + + +Next step:
+ + + +Some iterations later:
+ + + +And later:
+ + + +And finally:
+ + + +For convolution with the same size, the result will be the small $2 \times 2$ matrix in the middle.
+ + + +After using the ReLU function, the result will be exactly the same.
+ +So with using of your filter, the result would look like [[0,90], [0,90]].
+",21171,,2444,,6/5/2019 21:42,6/5/2019 21:42,,,,2,,,,CC BY-SA 4.0 +9852,2,,9849,1/5/2019 18:32,,1,,"You calculate the max by calculating your estimates for all possible actions for the next state, and taking the highest value.
+ +The details depend a little on your neural network architecture:
+ +If you have a network that takes the state vector as input and outputs all possible action values $(\hat{q}(s,a_0), \hat{q}(s,a_1), \hat{q}(s,a_2) ...)$, then you can run it once forward with the next_state
as input to getthe array $(\hat{q}(s',a_0), \hat{q}(s',a_1), \hat{q}(s',a_2) ...)$, and take the maximum element (you don't need to care which action $a'$ caused it). You will then have the problem that you now have a loss for $Q(s, a, \mathbf{w})$ for the single action $a$ just taken, but no data any of the alternative actions. The loss for these alternative actions needs to be set to zero - if you are training the NN using a normal supervised learning approach, that means you need to keep the full output of the network that you ran forward to calculate $Q(s, a, \mathbf{w})$ then substitute in this new estimated value against the action $a$ and train using this modified vector.
If you have a network that takes the state vector and action combined as input and outputs a single estimate $\hat{q}(s,a)$, then you have to run that network once for each possible action from the next state and take the maximum value. You would typically do this as a small batch prediction for better performance. In this case your training data is simple to construct as you only have the loss for and train the network against one state/action combination.
Overall the first option (all action values at once) is usually a lot more efficient, but slightly more complex to code the training routine.
+ +++ +do you really need to hold always an older version of the network. If yes why and how old should it be?
+
You don't have to, but it is highly advisable to have this target network (so called because it helps generate your TD Target values), because Q learning using neural networks is often unstable. This is due to the bootstrap calculations where estimates are based on other estimates plus a little bit of observed data at each step. There is a strong possibility for runaway feedback due to training a neural network on something that includes its own output.
+ +How old should it be? That's sadly a hyper-parameter of the architecture that you will need to establish through experiment on each new problem. I have worked with maximum age values from 100 to 10000 in my own simple experiments. Note this is not usually a rolling age - you don't keep 1000 copies of the network weights. Just keep one frozen copy, and after N steps replace it with a copy of the most recent one.
+ +One alternative to this copy/freeze/copy approach is to update the target network towards the learning network on every step by a small factor. E.g. $\mathbf{w}^{\_} = (1 - \beta)\mathbf{w}^{\_} + \beta \mathbf{w}$ where $\beta$ might be $0.001$
+ +In addition, you should be using experience replay for training data, and not training directly online. The combination of experience replay and using a frozen or slowly adapting target network makes a large difference to the stability of deep Q learning in practice.
+",1847,,1847,,1/5/2019 19:24,1/5/2019 19:24,,,,5,,,,CC BY-SA 4.0 +9854,1,9865,,1/5/2019 21:24,,1,1116," +d5 captures c6
+Quiescence search returns about 8.0 as evaluation because after dxc6 and bxc6 Qxd6 would be played (then Qxd6 by black). A normal player would not play this move but quiescence search includes it in the evaluation and it would result in this end state:
+which would result in a huge advantage for black.
Is my interpretation of quiescence search wrong?
+",19783,,-1,,6/17/2020 9:57,5/13/2020 20:49,Does quiescence search even improve the minimax algorithm?,Why is the actor-critic algorithm limited to using on-policy data? Or can we use the actor-critic algorithm with off-policy data?
+",21180,,2444,,2/15/2019 19:38,2/15/2019 19:43,Why is the actor-critic algorithm limited to using on-policy data?,Is there a ReLU-like activation function that concatenates positive and negative values? What is its name? Apparently, it doubles the output dimension.
+",21203,,2444,,6/4/2020 15:24,6/4/2020 15:24,Is there a ReLU-like activation function that concatenates positive and negative values?,Your logic is flawed because you negated ""stand-pat"" (i.e. do nothing) and alpha-beta. Let's take a look at the pseudocode (https://www.chessprogramming.org/Quiescence_Search#Pseudo_Code):
+ +int Quiesce( int alpha, int beta ) {
+ int stand_pat = Evaluate();
+ if( stand_pat >= beta )
+ return beta;
+ if( alpha < stand_pat )
+ alpha = stand_pat;
+
+ until( every_capture_has_been_examined ) {
+ MakeCapture();
+ score = -Quiesce( -beta, -alpha );
+ TakeBackMove();
+
+ if( score >= beta )
+ return beta;
+ if( score > alpha )
+ alpha = score;
+ }
+ return alpha;
+}
+
+
+Your Qxd6 capture will make return a score far below the alpha. The line:
+ +++ +if( score > alpha )
+
will prevent your blunder being reported. Instead the engine would report either stand_pat
(do nothing), or something like Nf3, Nc3 etc.
I'm trying to understand how the dimensions of the feature maps produced by the convolution are determined in a ConvNet.
+Let's take, for instance, the VGG-16 architecture. How do I get from 224x224x3 to 112x112x64? (The 112 is understandable, it's the last part I don't get)
+I thought the CNN was to apply filters/convolutions to layers (for instance, 10 different filters to channel red, 10 to green: are they the same filters between channels ?), but, obviously, 64 is not divisible by 3.
+And then, how do we get from 64 to 128? Do we apply new filters to the outputs of the previous filters? (in this case, we only have 2 filters applied to previous outputs) Or is it something different?
+ +",19094,,2444,,6/19/2021 12:19,6/19/2021 12:19,How are the dimensions of the feature maps produced by the convolutional layer determined in VGG-16?,I have time series data where I use a sliding window to detect anomalies in those windows. A sliding window is an interval of the dataset that steps one datapoint for each iteration. Datapoints are seen multiple times in this way equal to the size of the window.
+ +In short, the algorithm works like this:
+ +I want to keep the sliding window method since it is necessary for the performance of the algorithm.
+ +However, one anomaly occurs multiple times in the sliding window. When the anomaly appears in the sliding window for the first time it's on the 'right' side of the window.
+ +How do we measure accuracy of anomaly detection in this case?
+ +We could say that detecting the anomaly once in the window is enough or detect it wl times. What's best practice?
+",21223,,16229,,6/5/2019 19:34,6/5/2019 19:34,Performance measure on windowed time series data,It seems I have found it. It is called concatenated ReLU (CReLU).
+ +++ +Concatenated ReLU has two outputs, one ReLU and one negative ReLU, concatenated together. In other words, for positive x it produces [x, 0], and for negative x it produces [0, x]. Because it has two outputs, CReLU doubles the output dimension.
+
There is also Negative CReLU. It seems that the difference is only the sign.
+ +$$\text{NCReLU}(x) = (\rho(x) , −\rho(−x) )$$
+",21203,,2444,,6/4/2020 15:11,6/4/2020 15:11,,,,0,,,,CC BY-SA 4.0 +9874,2,,9870,1/7/2019 16:34,,1,,"The 64 here is the number of filters that are used.
+The picture is kind of misleading in that it leaves out the transition of the maxpool.
+
Below is a text description of the size of the features as they go through the network with the number of filters in bold.
+ +For learning image features with CNNs, we use 2D Convolutions. Here 2D does not refer to the input of the operation, but the output.
+ +Consider you have an input tensor of size 224 x 224 x 3. Say for example you have 64 different convolution kernels. Theses kernels are also 3 dimensional. Each kernel will produce a 2D matrix as output. Since you have 64 different kernels/filters, you will have 64 different 2D matrices. In other words, you got a tensor with depth 64 as output.
+ + + +I would suggest you to go through this question:
+ +Understanding 1D, 2D and 3D convolutions
+",21229,,,,,1/7/2019 16:45,,,,0,,,,CC BY-SA 4.0 +9876,2,,9838,1/7/2019 17:30,,3,,"Reinforcement learning is used when we know the outcome we want, but not how to get there which is why you won't see a lot of people using it for classification (because we already know the optimal policy, which is just to output the class label). You knew that already, just getting it out of the way for future readers!
+ +As you say, your policy model is fine - a fully connected model that is just deep enough to learn XOR. I think the reward gradient is a little shallow - when I give a reward of +1 for ""3 out 4"" correct and +2 for ""4 out of 4"", then convergence happens (but very slowly).
+",17770,,17770,,1/8/2019 18:03,1/8/2019 18:03,,,,0,,,,CC BY-SA 4.0 +9883,2,,9860,1/8/2019 2:33,,1,,"It's because, in the actor-critic algorithm, the objective function is an expectation under the $\tau$ of the policy. If we want to use off-policy data, we have to resort to importance sampling relative to the other policy.
+",21180,,2444,,2/15/2019 19:43,2/15/2019 19:43,,,,1,,,,CC BY-SA 4.0 +9887,1,9888,,1/8/2019 10:43,,2,185,"Which algorithms, between ant colony or classical routing algorithms, have a better time complexity for the shortest path problem?
+ +In general, can we compare efficiency of these two types of algorithm for the shortest path problem in a graph?
+",19910,,2444,,5/27/2019 22:06,5/27/2019 22:06,"Which algorithms, between ant colony or classical routing algorithms, have a better time complexity for the shortest path problem?",No. In general, you can't find a tight bound for evolutionary algorithms, and it is one of the main difference of these algorithms with the classical algorithms.
+ +You should notice that it does not mean you can't find when the evolutionary algorithms are finished! But, you can't find a tight bound for the algorithms time complexity to reach to the optimal solution or how much that solution is near to the optimal solution (in contrast to the approximation algorithms).
+",4446,,4446,,1/8/2019 11:56,1/8/2019 11:56,,,,4,,,,CC BY-SA 4.0 +9890,1,14031,,1/8/2019 11:57,,3,1443,"On recommendation of Kanak on stackoverflow I am posting this question here:
+ +Currently I am experimenting with various loss functions and optimizers for my binary image segmentation problem. The loss functions that I use in my Unet however give different output segmentation maps.
+ +I have a highly imbalanced dataset, thus I am trying dice loss for which the customized function is given below.
+ + def dice_coef(y_true, y_pred, smooth=1):
+ """"""
+ Dice = (2*|X & Y|)/ (|X|+ |Y|)
+ = 2*sum(|A*B|)/(sum(A^2)+sum(B^2))
+ ref: https://arxiv.org/pdf/1606.04797v1.pdf
+ """"""
+ intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
+ return (2. * intersection + smooth) / (K.sum(K.square(y_true), -1) + K.sum(K.square(y_pred), -1) + smooth)
+
+ def dice_coef_loss(y_true, y_pred):
+ return 1 - dice_coef(y_true, y_pred)
+
+
+Binary cross entropy results in a probability output map, where each pixel has a color intensity that represents the chance of that pixel being the positive or negative class. However, when I use the dice loss function, the output is not a probability map but the pixels are classed as either 0 or 1.
+ +My questions are:
+ +1.How is it possible that these different loss functions have these vastly different results?
+ +Both responses I got are correct but do not answer exactly what I was looking for.
+ +The answer to my question is : each filter is a 2D convolution. It is applied to every channel from previous node (so we get N 2D matrices). Then all of these matrices are added up to make a final matrix (1 matrix for 1 filter). Finally, the output is all filters' matrices in parallel (like channels).
+ +The hard part was to find the ""sum up"", since many websites speak of it as a 3D convolution (which is not !).
+",19094,,,,,1/8/2019 12:15,,,,0,,,,CC BY-SA 4.0 +9894,2,,4889,1/8/2019 16:18,,3,,"If you are using a softmax distribution for your classification, then you could determine what your baseline max probability is for correctly classified samples, and then infer if a new sample doesn't belong to any of your known classes if its max probability is below some kind of threshold.
+ +This idea comes from a research paper that does a much better job of explaining the process than what I just said: A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
+",21265,,,,,1/8/2019 16:18,,,,0,,,,CC BY-SA 4.0 +9895,1,,,1/8/2019 16:48,,1,116,"I have this question in my head: does the current level of AI development allow us to spot faked or photoshoped images? (i.e forged ID card or personal documents).
+ +If it is possible, what is such a process to follow in order to build an AI that achieves this task?
+",19059,,19059,,1/24/2019 10:25,4/8/2019 7:46,Is it possible to spot photoshoped or edited photos using AI?,Is ""emotion"" ever used in AI?
+ +Psychologists have a lot to say about emotion and it's functional utility for survival - but I've never seen any AI research that uses something resembling ""emotion"" inside an algorithm. (Yes, there's some work done on trying to classify human emotions, called ""emotional intelligence"", but that's extrememly different from /using/ emotions within an algorithm) For example, you could imagine that a robot might need fuel and be ""very thirsty"" - causing it to prioritize different tasks (seeking fuel). Emotions also sometimes don't just focus on objectives/priorities - but categorize how much certain classifications are ""projected"" into a particular emotions.
+For example, maybe a robot that needs fuel might be very ""afraid"" of going towards cars because it's been hit in the past - while it might be ""frustrated"" at a container that doesn't open properly.
+It seems very natural that these things are helpful for survival - and they are likely ""hardcoded"" in our genes (since some emotions - like sexual attraction - seem to be mostly unchangeable by ""nurture"") - so I would think they would have a lot of general utility in AI.
Not a bad question but we can solve this with a little thought experiment. Consider what it means to be ""afraid"", or to even ""feel"". It's a DESIRE for something. That something is what pushes us towards general survival. It forces us to focus on what is important right now. And it's relative to our immediate environment & generalized to our abstract conceptualization.
+ +The difference with modern ai paradigms is that they are very structured/rigid in their objectives. There's no general sense of ""okayness"" or generalized sense of guidance on what it should do. This would require a radically different approach to AI design & infrastructure.
+ +Being that most companies are trying to make money, there's not a lot to be gained by experimenting with ""feeling"" machines.
+",1720,,,,,1/8/2019 23:35,,,,0,,,,CC BY-SA 4.0 +9900,2,,9897,1/9/2019 1:20,,2,,"Current Simulation of Emotional Behavior
+ +Emotion is used in AI in very limited ways in leading edge natural language systems. For instance, advanced natural language systems can calculate the probability that a particular segment of speech originates from an angry human. This recognition can be trained using labels from bio-monitors. However, the mental features of a human with soft skills tuned from years of experience with people is not nearly simulated in computers as of this writing.
+ +We will not see computers becoming counselors (as once believed) or directors of movies or courtroom judges or customs officials any time soon. Nonetheless, the processes behind emotion are not entirely undiscovered, and there is definite interest in simulating them in computers. Much of that work is company confidential.
+ +The emergence of emotional sophistication in computers likely to begin in the context of sexuality, primarily because flirtation is powerful and primordial emotional expression will probably be easier to simulate in natural language than higher emotional expressions such as love or chaotic ones like rage. Sexy AI will likely be exploited by what businesses might consider legitimate marketing activity.
+ +It is also going to be exploited by the sex industry. The ethical and moral analysis of sexy AI beyond the scope of the question but will probably gain the attention of public media as it unfolds, and that has already begun on FaceBook, originating from third party attacks using fictitious identities.
+ +The Science of Emotion
+ +Emotion isn't a scientific quantity. From an AI perspective, emotion is a quality an individual might recognize through visual and audio queues, specifically through the natural language and affect of another individual. (Affect is a visual clue about a person's emotional and general mental state.)
+ +An individual can also learn to recognize those clues in her or his self. They can be detected by replaying one's own speech as heard through the ear, by linguistic analysis of thoughts not spoken, or through the detection of muscle tension or vital signs. Those skilled in meditation can detect emotional predecessors closer to their causal centers in the brain and control them more directly before emotions even arise.
+ +In the brain, emotion is not in a single geometric location. We cannot say, ""That emotion of compassion comes from this group of neurons in Jerome's brain."" We cannot say, ""Sheri is angry at this 3D coordinate in her cerebral cortex."" Emotions are also not strictly system wide either. An individual can be annoyed without going into rage, leaving most of the brain chemistry and electrical signaling unaffected.
+ +Emotions are not entirely electrical and not entirely chemical. On the electric side, emotional states can occur simultaneously through separate circuit pathway connecting distinct and only distantly related regions of the brain. On the chemical side, there is the synaptic chemistry that is part of the electrical signal pathways. There are also many regional signaling systems using specialized pathways that are neither circulatory (blood) nor primary electrical (neuron) pathways. Serotonin is one of dozens of chemical signaling compounds that operate regionally in this way.
+ +Emotions, being a largely social set of phenomena, should not be characterized as purely Darwinian. Although related to survival, emotional processing and communications impact mate selection and, more generally, social patterns within a community, including altruistic and collaborative activity.
+ +Emotions don't always lead to survival. In some cases, emotional states may lead to death prior to reproduction. One could say that emotional balance and the ability to interact on emotional levels may improve odds of having offspring. Imbalance to the degree of any of hundreds of emotional extremes can lead to childlessness.
+ +Emotional intelligence is different than using emotions within an algorithm, but not extremely so.
+ +Discussion of emotional intelligence is one of many advancements in the concept of intelligence since the formation of one-dimensional conceptions of intelligence. Those nineteenth century conceptions, such as IQ and G-factor are poorly supported by genetic evidence and anthropological theory. Mathematically unproven and naive concepts like general intelligence rest on those one-dimensional concepts.
+ +Emotional intelligence is a form of mental capability related to emotional balance. If a person's cognitive skills are honed with respect to their emotions and the assessment of the emotional states of others, then they have greater emotional intelligence than someone who cannot read the affect and linguistic clues of another and cannot integrate cognitive and emotional skill to balance of their own emotions.
+ +Cybernetic Analysis
+ +The interface between natural emotion and artificial emotion fits within the realm of cybernetics, the conceptual study of the interface between humans and machines. Such interaction is clearly related to both algorithms and topology, two important concepts in AI research and development.
+ +Emotion has an algorithmic context because there is clearly some combination of neurons and chemistry that produce this algorithmic difference between a reactive person and one who has developed emotional intelligence.
+ + emotion[person] = recognize_emotion[person]
+ if emotion[person] = anger
+ be_in_responses(angry)
+
+ emotion[person] = recognize_emotion[person]
+ if emotion[person] = anger
+ be_in_responses(extra_calm)
+
+
+The former is reactive and the later exhibits emotional intelligence. The acquisition of the later skill may be cognitive and conscious or it may be intuitive and unconscious. In either case, the actual algorithms at a lower level may be entirely different than those shown above, yet the external behavior of the person as marshaled by the brain is essentially one of those shown.
+ +The plural, algorithms, is used rather than the singular, algorithm, because it is unlikely that a single synchronous algorithm is involved. The brain is a massively parallel processor. Emotional processing is likely best expressed in artificial form as hundreds of thousands of algorithms operating in parallel and forming millions of balances within the system — multidimensional and highly parallel stasis.
+ +This is why emotional recognition and emotional responses are not very sophisticated in computer systems as of this writing. The balances have much social nuance. It may be easier to simulate rational thought than emotional thought.
+ +Desire as a Systemic Behavior
+ +Hunger and thirst may sometimes be called feelings, but they are not strictly emotional. The detection of the need for air, energy, nutrients, and water may stimulate emotional states if the needs are unmet and other emotional states if met. A person may become frustrated and irritable when lacking something essential and confronted with another person's less important agenda. A robot may someday do the same. A person may become elated and generous when all such essentials have recently been made available in surplus. A robot may someday do the same. These relationships are expressed in the question this way.
+ +++ +Emotions also sometimes don't just focus on objectives/priorities — but categorize how much certain classifications are ""projected"" into a particular emotions.
+
That statement in the question and its explanation is true in some respects. If a robot that needs fuel but is afraid of passing in front of a moving vehicle because it has been hit in the past can be seen in more than one way.
+ +In AI design, these three would be handled in different ways.
+ +Maintaining Scientific Perspective
+ +Emotion is not hard coded into the brain circuitry or DNA. The reality is significantly more complex.
+ +The DNA provides parameters to a genetic expression system that leads to protein synthesis that leads to brain structure and function that leads to the ability to learn emotional responses that lead to improved social behavior that may lead to higher probabilities of gene pool survival.
+ +Applying digital system traditions to biological process can be counter productive, like anthropomorphic views of programs. Artificial networks don't actually learn; they converge. Nothing is hard coded into biology because the term code applied to DNA isn't anything like a page of Java or Python code.
+ +It is true that some behavioral predispositions are strong forms of stasis within the course of a species. An organism will normally exhibit a strong desire to acquire resources from the biosphere, such as oxygen, proteins, nutrients, carbs, fats, and water. A robot might replace those with a voltage to use for a charge and lubricants for moving parts. An organism will normally exhibit a string desire to reproduce. A robot might be given a simulation of that recursive process and wish to build another like itself.
+ +These are not hard coded in biology. They form a kind of stasis within a population. Some humans don't want children. Some are hospitalized for anorexia nervosa. Some commit suicide by asphyxiation. The statistical mean produces the behavior of the species, not a fixed behavior identical across individuals within the species.
+ +Nature and Nurture
+ +Nature and nurture are useful umbrella terms for general categories of causality in biology and may have equivalents in future robotic products, but they are broad generalities. There are no nature algorithms or nurture algorithms or algorithms that balance nature and nurture. That is where topology is of paramount conceptual importance.
+ +Topology of Algorithmic Components
+ +There is massive interaction between many systems operating independently in multiple dimensions. The visualization of such interactive structure would look more like the topology of all the web sites in a country than a machine learning block diagram. If somehow coded into one algorithm it is possible that all the silicon from all the sand on earth converted to random access memory (RAM) might be insufficient to hold the code expressing the algorithm. Perhaps not. Perhaps a simplicity underlies the interactive system design of life. Perhaps we'll someday know. Perhaps not.
+ +The elegance in the design of life on earth is that multiple independent processes are tuned by billions of years of trial and error to inter-operate and support complex organic processes with billions of moving parts at a molecular level.
+ +Veins of Interdisciplinary Research
+ +Study of these are important for biology, for bioinformatics, for cognitive science, and for artificial intelligence. Emotional recognition and integration of emotional reaction and control into natural communications is part of this research and development.
+",4302,,4302,,1/9/2019 9:54,1/9/2019 9:54,,,,0,,,,CC BY-SA 4.0 +9903,1,9923,,1/9/2019 8:59,,6,1812,"I'm new to machine learning, and AI in general (but with 20+ years for programming). I'm wondering if machine learning is a good general approach to find the seed of a random number generator.
+Suppose I have a list of 2000 numbers. Is there a machine learning algorithm to correctly guess the next number?
+Just to be clear, as there are many random number generator algorithms, I'm talking about rand
and srand
from the stdlib.
I'm reading the book ""Reinforcement Learning: An Introduction"" (by Andrew Barto and Richard S. Sutton).
+ +The authors provide the pseudocode of the prioritized sweeping algorithm, but I do not know what is the meaning of Model(s, a)
. Does it mean that Model(s, a)
is the history of rewards gained when we are in state s
and the action a
is taken?
Does R, S_new = Model(s,a)
mean that we should take a random sample from rewards gained in state s
and action a
is taken?
In my view intelligence begins once the thoughts/actions are logical rather than purely randomn based. The learning environments can be random but the logic seems to obey some elusive rules. There is also the aspect of a parenting that guides through some really bad decisions by using the collective knowledge. All of this seems to hint that intelligence needs intelligence to coexist and a sharing communication network for validation/rejection.
+ +Personally I believe that we must keep the human intelligence in a parental role for long enough time until at least the AI had fully assimilated our values. The actual danger is to leave the artificial intelligence parenting another AI and loose control of it. This step is not necessary from our perspective but can we resist the temptation and try it eventually, only time will tell.
+ +Above all we must remember the purpose of AI. I think the purpose should always be to help humans achieve mastery of the environment while ensuring our collective preservation.
+ +AI should not be left unsupervised as we would not give guns to kids, do we?
+ +To resume it all AI needs an environment and supervision where to learn and grow. The environment can vary but the supervision must stay in place.
+ +Are initiated thoughts/actions by the means of guidance and supervision considered random?
+ +Lastly I believe that the sensible think to do is to only develop artificial intelligence that is limited by our own beliefs and values rather than searching for something greater than us.
+ +It seems not possible to create greater than our intelligence without letting it go exploring! +Exploring has greater access to random actions and can go against the intended purpose.
+",21285,,,,,1/25/2019 22:17,Is learning possible without random thoughts and actions?,I think pseudocode was made for tabular case with an assumption of deterministic environment. $Model(s, a)$ would then be a table with information of the next state and reward after taking action $a$ from state $s$. The size of that table would be same as the size of Q table. Because the environment is deterministic you wouldn't take a random sample because there is only one possible transition so you would take the transition remembered in model table.
+",20339,,,,,1/9/2019 10:46,,,,5,,,,CC BY-SA 4.0 +9908,1,9915,,1/9/2019 11:11,,3,278,"When trying to map artificial neuronal models to biological facts it was not possible to find an answer regarding the biological justification of randomly initializing the weights.
+ +Perhaps this is not yet known from our current understanding of biological neurons?
+",21269,,,,,1/10/2019 8:00,How do biological neurons weights get initialized?,I'm doing a research on a finite-horizon Markov decision process with $t=1, \dots, 40$ periods. In every time step $t$, the (only) agent has to chose an action $a(t) \in A(t)$, while the agent is in state $s(t) \in S(t)$. The chosen action $a(t)$ in state $s(t)$ affects the transition to the following state $s(t+1)$.
+ +In my case, the following holds true: $A(t)=A$ and $S(t)=S$, while the size of $A$ is $6 000 000$ (6 million) and the size of $S$ is $10^8$. Furthermore, the transition function is stochastic.
+ +Would Monte Carlo Tree Search (MCTS) an appropriate method for my problem (in particular due to the large size of $A$ and $S$ and the stochastic transition function?)
+ +I have already read a lot of papers about MCTS (e.g. progressive widening and double progressive widening, which sound quite promising), but maybe someone can tell me about his experiences applying MCTS to similar problems or about appropriate methods for this problem (with large state/action space and a stochastic transition function).
+",21287,,2444,,2/15/2019 19:34,3/9/2021 2:04,Is Monte Carlo Tree Search appropriate for problems with large state and action spaces?,I am not an DL expert but these are my short thoughts on it:
+ +I think this is because it is believed (from an information theoretic point of view) to be the good way to avoid that the network falls into some wired state from beginning on. Remember: DNNs are nonlinear approximators for continuous functions. So they have some storage capacity to learn an amount of n function to map from input to output. When you look on topic like data leakage you will see that NNs quickly try to cheat you if they can :D. The optimization applied during training will heavily be affected by the init state. So starting with an random initialization at least avoids that your neurons do all the same at the beginning etc.
+ +Biological reasoning: +From the viewpoint of a neurobiologist I can recommend you to read Hebbian rule and how neural systems work (eg. google how neurons find targets) in general and then to compare it to what is known about how dendrite cells in the cerebrum develop their interconnections in the first 3 years after birth. In summary there are behavioral patterns in nature which could look similar, inspiring and even reasonable. But, I would say the reason why this random init. is recommend is backed by mathematical and information theoretical assumptions rather then pure biological arguments.
+",21290,,21290,,1/10/2019 8:00,1/10/2019 8:00,,,,0,,,,CC BY-SA 4.0 +9911,2,,9909,1/9/2019 16:57,,1,,"MCTS is often said to be a good choice for problems with large branching factors... but the context where that sentiment comes from is that it originally became popular for playing the game of Go, as an alternative to older game-playing approaches such as alpha-beta pruning. The branching factor of Go is more like 250-300 though, which is often viewed as a large branching factor for board games. It's not such an impressive branching factor anymore when compared to your branching factor of $6,000,000$...
+ +I don't see MCTS working well out of the box when you have 6 million choices at every step. Maybe it could do well if you have an extremely efficient implementation of your MDP (e.g. if you can simulate millions of roll-outs per second), and if you have a large amount of ""thinking time"" (probably in the order of hours or days) available.
+ +To have any chance of doing better with such a massive branching factor, you really need generalization across actions. Are your 6 million actions really all entirely different actions? Or are many of them somehow related to each other? If you gather some experience (a simulation in MCTS, or just a trajectory with Reinforcement Learning approaches), can you generalize the outcome to other actions for which you did not yet collect experience?
+ +If there is some way of treating different actions as being ""similar"" (in a given state), you can use a single observation to update statistics for multiple different actions at once. The most obvious way would be if you can define meaningful features for actions (or state-action pairs). Standard Reinforcement Learning approaches (with function approximation, maybe linear or maybe Deep Neural Networks) can then relatively ""easily"" generalize in a meaningful way across lots of actions. They can also be combined with MCTS in various ways (see for example AlphaGo Zero / Alpha Zero).
+ +Even with all that, a branching factor of 6 million still remains massive... but generalization across actions is probably your best bet (which may be done inside MCTS, but really does need a significant number of bells and whistles on top of the standard approach).
+",1641,,,,,1/9/2019 16:57,,,,0,,,,CC BY-SA 4.0 +9912,1,9913,,1/9/2019 17:28,,6,562,"In the book Reinforcement Learning: An Introduction (2nd edition) Sutton and Barto define at page 104 (p. 126 of the pdf), equation (5.3), the importance sampling ratio, $\rho _{t:T-1}$, as follows:
+$$\rho _{t:T-1}=\prod_{k=t}^{T-1}\frac{\pi(A_k|S_k)}{b(A_k|S_k)}$$
+for a target policy $\pi$ and a behavior policy $b$.
+However, on page 103, they state:
+++The target policy $\pi$ [...] may be deterministic [...].
+
When $\pi$ is deterministic and greedy it gives $1$ for the greedy action and 0 for all other possible actions.
+So, how can the above formula give something else than zero, except for the case where policy $b$ takes a path that $\pi$ would have taken as well? If any selected action of $b$ is different from $\pi$'s choice, then the whole numerator is zero and thus the whole result.
+",21299,,2444,,11/5/2020 22:01,11/5/2020 22:08,How can the importance sampling ratio be different than zero when the target policy is deterministic?,You're correct, when the target policy $\pi$ is deterministic, the importance sampling ratio will be $\geq 1$ along the trajectory where the behaviour policy $b$ happened to have taken the same actions that $\pi$ would have taken, and turns to $0$ as soon as $b$ makes one ""mistake"" (selects an action that $\pi$ would not have selected).
+ +Before importance sampling is introduced in the book, I believe the only off-policy method you will have seen is one-step $Q$-learning, which can only propagate observations back along exactly one step. With the importance sampling ratio, you can often do a bit better. You're right, there is a risk that it turns to $0$ rather quickly (especially when $\pi$ and $b$ are very different from each other), at which point it essentially ""truncates"" your trajectory and ignores all subsequent experience... but that still can be better than one-step, there is a chance that the ratio will remain $1$ for at least a few steps. It will occasionally still only permit $1$-step returns, but also sometimes $2$-step returns, sometimes $3$-step returns, etc., which is often better than only having $1$-step returns.
+ +Whenever the importance sampling ratio is not $0$, it can also give more emphasis to the observations resulting from trajectories that would be common under $\pi$, but are uncommon under $b$. Such trajectories will have a ratio $> 1$. Emphasizing such trajectories more can be beneficial, because they don't get experienced often under $b$, so without the extra emphasis it can be difficult to properly learn what would have happened under $\pi$.
+ +Of course, it is also worth noting that your quote says (emphasis mine):
+ +++ +The target policy $\pi$ [...] may be deterministic [...]
+
It says that $\pi$ may be deterministic (and in practice it very often is, because we very often take $\pi$ to be the greedy policy)... but sometimes it won't be. The entire approach using the importance sampling ratio is well-defined also for cases where we choose $\pi$ not to be deterministic. In such situations, we'll often be able to propagate observations over significantly longer trajectories (although there is also a risk of excessive variance and/or numeric instability when $b$ selects actions that are highly unlikely according to $b$, but highly likely according to $\pi$).
+",1641,,1641,,1/10/2019 8:40,1/10/2019 8:40,,,,10,,,,CC BY-SA 4.0 +9914,1,,,1/10/2019 3:38,,2,1701,"In Deep Learning by Goodfellow et al., I came across the following line on the chapter on Stochastic Gradient Descent (pg. 287):
+++The main question is how to set $\epsilon_0$. If it is too large, the +learning curve will show violent oscillations, with the cost function +often increasing significantly.
+
I'm slightly confused why the loss function would increase at all. My understanding of gradient descent is that given parameters $\theta$ and a loss function $\ell (\vec{\theta})$, the gradient update is performed as follows:
+$$\vec{\theta}_{t+1} = \vec{\theta}_{t} - \epsilon \nabla_{\vec{\theta}}\ell (\vec{\theta})$$
+The loss function is guaranteed to monotonically decrease because the parameters are updated in the negative direction of the gradient. I would assume the same holds for SGD, but clearly it doesn't. With a high learning rate $\epsilon$, how would the loss function increase in its value? Is my interpretation incorrect, or does SGD have different theoretical guarantees than vanilla gradient descent?
+",19403,,2444,,1/8/2022 16:50,1/10/2022 10:39,Why can the learning rate make the loss increase in stochastic gradient descent?,In short
+ +I mentioned in another post, how the Artificial Neural Network (ANN) weights are a relatively crude abstraction of connections between neurons in the brain. Similarly, the random weight initialization step in ANNs is a simple procedure that abstracts the complexity of central nervous system development and synaptogenesis.
+ +A bit more detail (with the most relevant parts italicized below)
+ +The neocortex (one of its columns, more specifically) is a region of the brain that somewhat resembles an ANN. It has a laminar structure with layers that receive and send axons from other brain regions. Those layers can be viewed as ""input"" and ""output"" layers of an ANN (axons ""send"" signals, dendrites ""receive""). Other layers are intermediate-processing layers and can be viewed as the ANN ""hidden"" layers.
+ +When building an ANN, the programmer can set the number of layers and the number of units in each layer. In the neocortex, the number of layers and layer cell counts are determined mostly by genes (however, see: Human echolocation for an example of post-birth brain plasticity). Chemical cues guide the positions of the cell bodies and create the laminar structure. They also seem to guide long term axonal connections between distant brain regions. The cells then sprout dendrites in certain characteristic ""tree-like"" patterns (see: NeuroMorpho.org for examples). The dendrites will then form synapses with axons or other cell bodies they encounter along the way, generally based on the encountered cell type.
+ +This last phase is probably the most analogous to the idea of random weight initialization in ANNs. Based on where the cell is positioned and its type, the encountered other neurons will be somewhat random and so will the connections to them. These connections are probably not going to be very strong initially but will have room to get stronger during learning (probably analogous to initial random weights between 0 and ~0.1, with 1 being the strongest possible connection). Furthermore, most cells are either inhibitory or excitatory (analogous to negative and positive weights).
+ +Keep in mind this randomization process has a heavy spatial component in real brains. The neurons are small and so they will make these connections to nearby neurons that are 10-200 microns away. The long-distance connections between brain regions are mostly ""programmed-in"" via genes. In most ANNs, there is generally no distance-based aspect to the initialization of connection weights (although convolutional ANNs implicitly perform something like distance-based wiring by using the sliding window).
+ +There is also the synaptic pruning phenomenon, which might be analogous to creating many low weight connections in an ANN initially (birth), training it for some number of epochs (adolescence), and then removing most low-weight connections (consolidation in adulthood).
+",21307,,,,,1/10/2019 4:04,,,,1,,,,CC BY-SA 4.0 +9918,1,9928,,1/10/2019 8:29,,2,1843,"I have non-smooth loss function $f(x) = \min(x, 0.5)$.
+Can gradient descent be used for training neural networks with such functions? Can gradient descent be used for fairly general, mathematically not-nice functions?
+PyTorch or TensorFlow can calculate numerically gradients from almost any function, but it is acceptable practice to use general, not-nice loss functions?
+",8332,,2444,,10/14/2021 14:59,10/14/2021 14:59,Can gradient descent training be used for non-smooth loss functions?,In the paper Markov games as a framework for multi-agent reinforcement learning (which introduces the minimax Q Learning algorithm), at the bottom left of page 3, my understanding is that the author suggests, for a simultaneous 1v1 zero-sum game, to do Bellman iterations with $$V(s)=\min_{o}\sum_{a}\pi_{a}Q(s,a,o)$$ with $\pi_{a}$ the probability of playing action $a$ for the maximizing player in his best mixed strategy to play in state $s$.
+ +If my understanding is correct, why does the opponent in this equation play a pure strategy ($\min_{o}$) rather than his best mixed strategy in state $s$. This would instead give $$V(s)=\sum_{o}\sum_{a}\pi_{a}\pi_{o}Q(s,a,o)$$ with $\pi_{o}$ the opponent's best mixed strategy in state $s$. Which of these two formulations is correct and why? Are they somehow equivalent?
+ +The context of this question is that I am trying to use minimax Q learning with a Neural Network outputting the matrix $Q(s,a,o)$ for a simultaneous zero-sum game. I have tried both methods and so far have seen seemingly equally bad results, quite possibly due to bugs or other errors in my method.
+",21311,,2444,,2/21/2019 14:43,2/21/2019 14:43,Using the opponent's mixed strategy in estimating the state value in minimax Q learning,The spectrum of human sensory inputs seems to fall within certain ranges suggesting normalization is built-in into biological NNs?
+ +It also adapts to circumstantial conditions, e.g. people living in a city with certain factory smell eventually don't perceive the smell anymore, at least consciously (within working memory) / it adapts to a new baseline?
+",21269,,,,,1/11/2019 7:59,Is input normalization built-in into mammals sensory neurons?,Maybe Deep Reinforced Learning?
+ +I am not sure but AND gate could be solved by your implementation. I have other feeling with OR gates. Just think - first we need to have information about two conditions and then we can check for complex solutions. +First of all I thought about Neural Network with one hidden layer. Sounds perfect.
+ +I think you will understand when you check this Tensorflow-Keras code:
+ +iterations = 50
+
+model = Sequential()
+model.add(Dense(16, input_shape=(None, 2), activation='relu')) # our hidden layer for OR gate problem
+model.add(Dense(2, activation='sigmoid'))
+model.summary()
+opt = Adam(0.01)
+model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['acc'])
+# mean_squared_error categorical_crossentropy binary_crossentropy
+
+for iteration in range(iterations):
+ x_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]) # table of inputs
+ y_train = np.array([[1, 0], [0, 1], [0, 1], [1, 0]]) # outputs in categorical (first index is 0, second is 1)
+
+ r = np.random.randint(0, len(x_train)) # random input
+ r_x = x_train[r]
+ r_x = np.array([[r_x]])
+ result = model.predict(r_x)[0] # predict
+ best_id = np.argmax(result) # get of index of ""better"" output
+
+ input_vector = np.array([[x_train[r]]])
+ isWon = False
+ if (best_id == np.argmax(y_train[r])):
+ isWon = True # everything is good
+ else:
+ # answer is bad!
+ output = np.zeros((2))
+ output[best_id] = -1
+ output = np.array([[output]])
+ loss = model.train_on_batch(input_vector, output)
+
+ print(""iteration"", iteration, ""; has won?"", isWon)
+
+
+When ""answer"" of agent is good - we are not changing anything (but we could train network with best action as 1 for stability).
+ +When answer is bad, we set action as bad - other actions have more probability for be chosen.
+ +Sometimes learning need to have more than 50 iterations but it is only my proposition. Play with hidden layer neuron count, learn rate and iterations.
+ +Hope will help you :)
+",9101,,,,,1/10/2019 12:57,,,,2,,,,CC BY-SA 4.0 +9923,2,,9903,1/10/2019 13:40,,3,,"Machine Learning is a bad fit to this problem.
+ +Even simple PRNGs that are not suitable for use in simulators (such as rand()
) are varied enough that it is very hard to reverse engineer them statistically using generic techniques - essentially what 90% of ML does is fit a generic model to data statistically by altering parameters. The remaining 10% might do things in specialist manner, such as saving all the data and picking best option.
In theory most ML approaches would eventually solve a PRNG, however that would typically involve iterating through the entire state space of the PRNG multiple times. The statistical relationship between internal state, next state and output of a PRNG is complex by design, so that this is the only ""black box"" statistical approach, and this is clearly not feasible for any real implementation of a random number generator, which is going to have at least $2^{31}$ states on modern machines. Perhaps older 16-bit PRNGs, with a single value for state might be tractable.
+ +An AI advanced enough to attempt to reverse engineer the output logically based on purely the data and researching how RNGs work is too advanced for current ML techniques to consider.
+ +That leaves approaches that might try to construct a similar RNG, such as Genetic Programming (where the genome is converted to executable code). The trouble with this approach is there is no heuristic for a RNG that measures how close its output is to a target. A single bit of state difference or any tiny but meaningful change in generated RNG design will produce output that has no similarities with the target output whatsoever. Without such a measure you have no fitness function, and no way to attempt a guided search using the many discrete optimisation tools from AI.
+ +Instead the usual approach to ""breaking"" a PRNG is to analyse the algorithm. Knowing the algorithm of many non-cryptographic PRNGs can allow predicting the internal state of the generator, sometimes in very few steps (for really simple Linear Congruential Generators that might be just a single step!).
+",1847,,1847,,1/10/2019 14:30,1/10/2019 14:30,,,,0,,,,CC BY-SA 4.0 +9924,1,,,1/10/2019 13:44,,3,1383,"An artificial intelligence (AI) is often defined as something that can learn over time and can imitate human behaviors.
+If an Expert system (e.g. MYCIN) that only involves if-then-else statements qualifies to be an AI, then every program we write in our daily lives that involves some condition-based question answering should be an AI. Right? If not, then what should be an exact and universal definition for AI. How can a software qualify to be called AI?
+",21316,,2444,,11/17/2021 14:19,11/17/2021 14:19,"If expert systems are a bunch of if-then-else statements, then how are they termed as AI?",Disclaimer: I'm not a student in computer science and most of my knowledge about ML/NN comes from YouTube, so please bear with me!
+ +Let's say we have a classification neural network, that takes some input data $w, x, y, z$, and has some number of output neurons. I like to think about a classifier that decides how expensive a house would be, so its output neurons are bins of the approximate price of the house.
+ +Determining house prices is something humans have done for a while, so let's say we know a priori that data $x, y, z$ are important to the price of the house (square footage, number of bedrooms, number of bathrooms, for example), and datum $w$ has no strong effect on the price of the house (color of the front door, for example). As an experimentalist, I might determine this by finding sets of houses with the same $x, y, z$ and varying $w$, and show that the house prices do not differ significantly.
+ +Now, let's say our neural network has been trained for a little while on some random houses. Later on in the data set. it will encounter sets of houses whose $x, y, z$ and price are all the same, but whose $w$ are different. I would naively expect that at the end of the training session, the weights from $w$ to the first layer of neurons would go to zero, effectively decoupling the input datum $w$ from the output neuron. I have two questions:
+ +For a classical neural network, the network has no ""memory,"" so it might be very difficult for the network to realize that $w$ is a worthless input parameter.
+ +Any information is much appreciated, and if there are any papers that might give me insight into this topic, I'd be happy to read them.
+",21319,,,,,4/17/2019 9:36,Can neural networks learn to ignore an input datum?,Yes, for many sensory inputs there is indeed something similar to normalization. But its not rally the same as in classical data analytics compared to what eg min/max normalization does or other technics.
+ +Lets look on some examples and considerations:
+ +mammals don't perceive heat or loudness in a linear way. This is because already many sensory receptors have chemical / physical limits. Double decibels will not perceived with double intensity. Inside your ear, the small hammer and abil will brace to protect you. --> its like normalization with logarithmic effects applied.
heat perception is more like a difference integration than a absolute temperature measurement. Its measured via H+ ions flow in mitochondria in the cell (if i recall correctly)
On the neuronal side gradual signals in the dendrites (analog signal) sum up gradually to later form an spike at the axon hill. where in turn a fire frequency is then encoded - the maximum frequency of this serves as a a natural maximum limit. I remember that grasshoppers increase axon fire frequency when objects started covering more ommatidial area on their ""eye"". The more of their ""eyes"" are covered by the shadow the more input on the neuron --> higher fire rate.
a lot of sensory input is post processed in higher cerebral areas. Eg. compared to what is expect able and heuristics are applied to compare a signal with former events.
when doing computational data analysis we may want go for accuracy and maximum comparability. Mostly on all data that could be available. --> eg. with respect to properties of a standard normal distribution. Hence we put some effort to be accurate and know the true parameters, remove outliers and so on --> big data comes into play here. +Nature in contrast strives often for efficiency with the means of reaching the minimal required with minimal resources.
Summary: +Compared to normalization in an analytical sense (eg. mean, min-max or other feature normalization techniques), nature is often only interested in the current difference between stimuli. And this only within some relevant range. Other information is not integrated. And normalization with the goal of making measurement points comparable only happens within this range of the mapping function provided by the sensor/neuron/receptor whatever!
+ +So this should also answer your question about, why you are not smelling something in the city after a while any more. However, this for sure happens at higher cerebral regions (it might also be that your smell receptors saturate) its the same principle. Your consciousness just saves energy by not concentrating on something that is anyway not changing.
+ +If you want to read more have a look here: https://en.wikipedia.org/wiki/Weber%E2%80%93Fechner_law
+",21290,,21290,,1/11/2019 7:59,1/11/2019 7:59,,,,3,,,,CC BY-SA 4.0 +9928,2,,9918,1/10/2019 17:37,,4,,"Gradient descent and stochastic gradient descent can be applied to any differentiable loss function irrespective of whether it is convex or non-convex. The ""differentiable"" requirement ensures that trainable parameters receive gradients that point in a direction that decreases the loss over time.
+ +In the absence of a differentiable loss function, the true gradient must be approximated through other methods. For example, in classification problems, the 0-1 loss function is considered the ""true"" loss, but it is non-convex and difficult to optimize. Instead, surrogate loss functions act as tractable proxies for true loss functions. They are not necessarily worse; negative log-likelihood loss gives a softmax distribution over $k$ classes rather than just the classification boundary.
+ +For your problem specifically, $f(x,a)=min(x,a)$ is not a differentiable loss function. It is not differentiable at $x=0.5$, but the gradient could be estimated through the subgradient. In practice, this works because neural networks often don't achieve the local/global minima of a loss function but instead asymptotically decreasing values that achieve good generalization error. Tensorflow and PyTorch use subgradients when fed non-differentiable loss functions. You could also use a smooth approximation of the $min$ function (see this thread) to get better gradients.
+",19403,,,,,1/10/2019 17:37,,,,0,,,,CC BY-SA 4.0 +9929,2,,5774,1/11/2019 3:20,,1,,"I have just the same problem, and I was trying to derive the backpropagation for the convolutional layer with stride, but it doesn't work.
+When you do the striding in the forward propagation, you chose the elements next to each other to convolve with the kernel, then take a step $>1$. This results in the fact that in the backpropagation, in the reverse operation, the delta matrix elements will be multiplied by the kernel elements, (with the rotation) but not as strided, but you are picking elements that are not next to each other, something like $DY_{11} * K_{11} + DY_{13} * K_{12} + DY_{31} * K_{21} + DY_{33} * K_{22}$, which is NOT the equivalent as a convolution with a stride $>1$.
+So as far as I am concerned, if I would like to implement the ConvNet by myself to get a better grasp of the concept, I have to implement a different method for the backprop, if I allow strides.
+",21330,,2444,,12/30/2021 13:38,12/30/2021 13:38,,,,0,,,,CC BY-SA 4.0 +9933,1,,,1/11/2019 9:20,,3,156,"I understand why deep generative models like DBN ( deep belief nets ) or DBM ( deep boltzmann machines ) are able to capture underlying structures in data and use it for various tasks ( classification, regression, multimodal representations etc ...).
+ +But for the classification tasks like in Learning deep generative models, I was wondering why the network is fine-tuned on labeled-data like a feed-forward network and why only the last hidden layer is used for classification?
+ +During the fine-tuning and since we are updating the weights for a classification task ( not the same goal as the generative task ), could the network lose some of its ability to regenerate proper data? ( and thus to be used for different classification tasks ? )
+ +Instead of using only the last layer, could it be possible to use a partition of the hidden units of different layers to perform the classifications task and without modifying the weights? For example, by taking a subset of hidden units of the last two layers ( sub-set of abstract representations ) and using a simple classifier like an SVM?
+ +Thank you in advance!
+",21335,,21335,,1/16/2019 15:59,1/16/2019 15:59,Why is the last layer of a DBN or DBM used for classification task?,I have checked out many methods and papers, like YOLO, SSD, etc., with good results in detecting a rectangular box around an object, However, I could not find any paper that shows a method that learns a rotated bounding box.
+Is it difficult to learn the rotated bounding box for a (rotated) object?
+Here's a diagram that illustrates the problem.
+ +For example, for this object (see this), its bounding box should be of the same shape (the rotated rectangle is shown in the 2nd right image), but the prediction result for the YOLO will be Ist right.
+Is there any research paper that tackles this problem?
+",16313,,2444,,1/28/2021 23:38,1/28/2021 23:41,Is it difficult to learn the rotated bounding box for a (rotated) object?,I'm using an object detection neural network and I employ data augmentation to increase a little my small dataset. More specifically I do rotation, translation, mirroring and rescaling.
+ +I notice that rotating an image (and thus it's bounding box) changes its shape. This implies an erroneous box for elongated boxes, for instance on the augmented image (right image below) the box is not tightly packed around the left player as it was on the original image.
+ +The problem is that this kind of data augmentation seems (in theory) to hamper the network to gain precision on bounding boxes location as it loosens the frame.
+ +Are there some studies dealing with the effect of data augmentation on the precision of detection networks? Are there systems that prevent this kind of thing?
+ +Thank you in advance!
+ +(Obviously, it seems advisable to use small rotation angles)
+ + +",19859,,21337,,1/12/2019 10:25,1/13/2019 11:56,How data augmentation like rotation affects the quality of detection?,Let's consider a classic feedforward neural network $F$ with input dimension $d$, output dimension $k$, $L$ layers $l_i$ with $m$ neurons each. ReLu activation.
+ +This means that, given a point $x \in R^d$ its image $F(x) \in R^k$. Let's now assume i add some gaussian noise $\eta_i$ in EVERY hidden layer $l_i(x)$ at the same time, where the norm of this noise is 5% the norm of its layer computed on the point $x$. Let's call this new neural network $F_*$
+ +I know that, empirically, neural networks are resistant to this kind of noise, especially on the first layers. How can i show this theoretically?
+ +The question i'm trying to answer is the following:
+ +After having injected this noise $\eta_i$ in every layer $l_i(x)$, how far the output $F_{*}(x)$ will be from the output of the original neural network $F(x)$?
+",21338,,21338,,1/12/2019 13:23,1/12/2019 13:23,Are Neural Network layers resistent to noise?,If I'm performing a text classification task using a model built in Keras, and, for example, I am attempting to predict the appropriate tag for a given Stack Overflow question:
+++How do I subtract 1 from an integer?
+
And the ground-truth tag for this question is:
++ ++
But my model is predicting:
++ ++
If I were to retrain my model, but this time add the above question and tag in both the training and testing data, would the model be guaranteed to predict the correct tag for this question in the test data?
+I suppose the tl;dr is: Are neural networks deterministic if they encounter identical data during training and testing?
+I'm aware it's not a good idea to use the same data in both training and testing, but I'm interested from a hypothetical perspective, and for gaining more insight into how neural networks actually learn. My intuition for this question is "no", but I'd really be interested in being pointed to some relevant literature that expands/explains that intuition.
+",21347,,2444,,1/17/2021 17:23,1/17/2021 17:23,Will a neural network always predict the correct label if it sees the exact same input during training and testing?,Here's a recent paper that does what you're looking for. It looks like they achieve this simply by adding a couple rotated prior boxes and regressing the angles in between. This is similar to what standard object detectors do in terms of creating a bunch of prior box shapes and regressing the actual sizes.
+",17408,,,,,1/11/2019 22:57,,,,0,,,,CC BY-SA 4.0 +9939,1,,,1/12/2019 0:29,,1,64,"The Markov property is the dependence of a system's future state probability distribution solely on the present state, excluding any dependence on past system history.
+ +The presence of the Markov property saves computing resource requirements in terms of memory and processing in AI implementations, since no indexing, retrieval, or calculations involving past states is required.
+ +However, the Markov property is often an unrealistic and too strong assumption.
+ +Precisely, what limitations does the Markov property place on real-time learning?
+",4302,,2444,,2/13/2019 2:35,2/13/2019 2:35,What limitations does the Markov property place on real time learning?,The general answer to the behavior of combining common activation functions is that the laws of calculus must be applied, specifically differential calculus, the results must be obtained through experiment to be sure of the qualities of the assembled function, and the additional complexity is likely to increase computation time. The exception to such increase will be when the computational burden of the combination is small compared to the convergence advantages the combination provides.
+This appears to be true of Swish, the name given to the activation function defined as
+$$f(x) = x \, \mathbb{S}(\beta x) \; \text{,}$$
+where $f()$ is the Swish activation function and $\mathbb{S}$ is the sigmoid function. Note that Swish is not strictly a combination of activation functions. It is formed through the addition of a hyper-parameter $\beta$ inside the sigmoid function and a multiplication of the input to the sigmoid function result.
+It does not appear to be developed by Google. The originally anonymously submitted paper (for double blind review as a ICLR 2018 paper), Searching for Activation Functions, was authored by Prajit Ramachandran, Barret Zoph, and Quoc V. Le around 2017. This is their claim.
+++Our experiments show that the best discovered activation function, ... Swish, ... tends to work better than ReLU on deeper models across a number of challenging datasets.
+
Any change in activation function to any one layer will, except in the astronomically rare case, impact accuracy, reliability, and computational efficiency. Whether the change is significant cannot be generalized. That's why new ideas are tested against data sets traditionally used to gauge usefulness1.
+Combining activation functions to form new activation functions is not common. For instance, AlexNet does not combine them.2. It is, however, very common to use different activation functions in different layers of a single, effective network design.
+Footnotes
+[1] Whether these traditions create a bias is another question. Those who follow the theory of use case analysis pioneered by Swedish computer scientist Ivar Hjalmar Jacobson or 6 Sigma ideas would say that these tests are unit test, not functional tests against real world use cases, and they have a point.
+[2] To correct any misconceptions that may arise from another answer, AlexNet, the name given to the approach outlined in ImageNet Classification with Deep Convolutional Neural Networks (2012) by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton from the University of Toronto, does not involve combining activation functions to form new ones. They write this.
+++The output of the last fully-connected layer is fed to a 1000-way softmax which produces a distribution over the 1000 class labels.
+...
+The ReLU non-linearity is applied to the output of every convolutional and fully-connected layer. The internal layers are pure ReLU and the output layer is Softmax.
+
There are also convolution kernels and pooling layers in the AlexNet approach's series of layers used by them, and the design has entered common use since their winning of the ImageNet competition in 2012. Other approaches have won subsequent competitions.
+",4302,,-1,,6/17/2020 9:57,1/12/2019 1:34,,,,0,,,,CC BY-SA 4.0 +9944,1,,,1/12/2019 6:27,,2,394,"How does DARTS compare to ENAS? Which one is better or what advantages does they each have?
+ +Links:
+ +After training, all standard models are deterministic (the process each input goes thru is set).
+ +In essence, during training the model attempts to learn the distribution of the training dataset. Whether it is able to depends on the size of the model, if it is big enough, it can simply ""memorize"" all the training samples and result in perfect accuracy on the training set.
+ +Normally this is considered to be terrible (called overfitting) and many regularization techniques attempt to prevent it. Eventually when training a model, you are giving it the training distribution as an example but you hope that it will be able to estimate the real distribution out of it.
+",20399,,,,,1/12/2019 8:51,,,,0,,,,CC BY-SA 4.0 +9946,2,,2841,1/12/2019 9:16,,-1,,"This May not be what you were looking for, but technically yes. Although not for Speed and Strength. But you could randomly guess new Mathematical/Physical/chemical solutions to become more efficient in random guessing (basically anything that allows the machine to compute faster and to maybe simulate the effect of those findings) thus technically achieving something similar to a singularity, without having to have any Intelligence at all actually (or just on a human Level), since you could just brute force all.
+ +Is this efficient? No, not even close to being in any way feasible. +Does it work? Technically, Yes.
+ +It would be a singularity of sorts, since it improves itself continuously, but it wouldn‘t need to improve its own intelligence.
+ +Of course, some findings might make it possible to become more intelligent, but let‘s just assume it doesn‘t apply those findings to itself.
+",21191,,,,,1/12/2019 9:16,,,,0,,,,CC BY-SA 4.0 +9947,2,,9937,1/12/2019 10:13,,3,,"No, Neural Networks do not have such a guarantee. In fact, I don't believe any kind of classifier in the entire field of Machine Learning has such a guarantee, though some may be slipping my mind...
+ +For an easy counterexample, consider what happens if you have two instances with precisely identical inputs, but different output labels. If your classifier is deterministic (in the sense that there is no stochasticity in the procedure going from input to output after training), which a Neural Network is (unless, for example, you mess up a Dropout implementation and accidentally also apply dropout after training), it cannot possibly generate the correct output for both of those instances, even if they were presented as examples thousands of times during training.
+ +Of course the above is an extreme example, but similar intuition applies to more realistic cases. There can be cases where getting the correct prediction on one instance would reduce the quality of predictions on many other instances if they have somewhat similar inputs. Normally, the training procedure would then prefer getting better predictions on the larger number of instances, and settle for failure on another instance.
+",1641,,,,,1/12/2019 10:13,,,,2,,,,CC BY-SA 4.0 +9949,2,,9933,1/12/2019 11:20,,1,,"One of the big realizations that deep learning models brought in recent years was that we can train the feature extractors and classifiers simultaneously. In fact most people have stopped separating the 2 tasks and simply refer to all the process as training the model.
+ +However, if you dive in to every single model architecture, it will always be constructed from the first part which is the feature extractor which outputs the embedding output - (which is basically the x encoded features of the input), and second part consisting of the final layer the model - the classifier which uses the embedding layer encoding to predict the class of the input.
+ +The goal of the first part is to reduce the dimensionality of the input to just the most impotent features for the final task. The goal of the classifier is to use those features to output the final score/class etc.
+ +This is why usually only this layer is fine-tuned, because we don't want to damage the trained feature extractor, just update the classifier to fit a slightly different distribution.
+ +I'm pretty sure that in your mentioned case, for generation they do not use the classification layer, so updating it shouldn't have any affect on the model's generative abilities.
+ +Regarding your last question, yes it is possible, ones you extracted the features with the model, you can use any kind of classifier on them.
+",20399,,,,,1/12/2019 11:20,,,,0,,,,CC BY-SA 4.0 +9954,1,9970,,1/12/2019 14:55,,3,458,"Some examples of low-variance machine learning algorithms include linear regression, linear discriminant analysis, and logistic regression.
+Examples of high-variance machine learning algorithms include decision trees, k-nearest neighbors, and support vector machines.
+Source:
+ +What makes a machine learning algorithm a low variance one or a high variance one? For example, why do decision trees, k-NNs and SVMs have high variance?
+",15368,,2444,,6/23/2020 21:24,6/24/2020 14:36,What makes a machine learning algorithm a low variance one or a high variance one?,For a neural turing machine, there is an attention distribution over the memory cells. A read operation consists of multiplying the memory cell's value by its respective probability, and adding these results for all memory cells.
+ +Suppose we only did the above operation for memory cells with a probability greater than 0.5, or suppose we concatenated the results instead of adding them. Can this be implemented/ trained with stochastic gradient descent? Or would it not be differentiable?
+ +Thanks!
+",21375,,,,,1/12/2019 22:28,Is discrete reading in neural turing machines differentiable?,++ +The problem is that this kind of data augmentation seems (in theory) to hamper the network to gain precision on bounding boxes location as it loosens the frame.
+
Yes, it is clear from your examples that the bounding boxes become wider. Generally, including large amounts of data like this in your training data will mean that your network will also have a tendency to learn slightly larger bounding boxes. Of course, if the majority of your training data still has tight boxes, it should stell tend towards learning those... but likely slightly wider ones than if the training data did not include these kinds of rotations.
+ +++ +Are there some studies dealing with the effect of data augmentation on the precision of detection networks? Are there systems that prevent this kind of thing?
+ +(Obviously, it seems advisable to use small rotation angles)
+
I do not personally work directly in the area of computer vision really, so I'm not sufficiently familiar with the literature to point you to any references on this particular issue. Based on my own intuition, I can recommend:
+ +Why is the e-function used to decide whether to accept a worse solution or not? +To be more specific: Why was $e$ chosen as basis?
+ +The propability to accept a worse solution is described with: +$p=e^{-\frac{E(y)-E(x)}{kT}}$
+ +$E(y)$ is the energy from the old solution +$E(x)$ is the energy from new solution $T$ is a constant temprature decreasing with a constant factor k in every iteration.
+",19413,,21157,,1/14/2019 9:20,1/14/2019 9:20,Simulated Annealing: Why is e-function used as propability function to decide to accept a worse solution,You can find the explanation by asking some question about the function. Suppose, the value of $\frac{E(y)-E(x)}{kT} >> 0$ is much more greater than zero. What does it mean? It means the value of $E(y)$ is much greater than $E(x)$ related to the $kT$ that is as a measure of temperature decreasing. Now, you want in this situation a probability which is near to zero. Hence, $e^{-\frac{E(y)-E(x)}{kT}}$ could be a good value for the probability of selection of worse solutions!
+ +Why $e$ instead of $2$ or other values greater than $1$? Because it could be a good function in optimization problems as its derivative is more simple than others!
+",4446,,,,,1/13/2019 17:53,,,,0,,,,CC BY-SA 4.0 +9965,1,,,1/13/2019 18:10,,1,93,"I am currently working with classical roboticists who insist on inverse kinematics, and what I (perhaps mistakenly) call the old way of thinking about robots accomplishing tasks. +Much of the relatively recent research focuses on Robots using Brain models such as Multiple timescales (Artificial Intelligence models) that segment sequences and reproduce them, having learned them. The problem I face is this bunch of roboticists insist that a robot already knows the sequence, and training it to be reproduced is redundant, since a Robot can already reproduce the sequence anyway. +How accurate would you rate this assessment of using AI in robotics? +Are there any advantages of using AI to learn sequences for robot control?
+",21397,,,,,1/13/2019 18:10,Using Artificial Intelligence for Robot movement instead of regular Inverse Kinematics,I would appreciate your help with this (naive) question of mine.
+ +Given the set of points located on a circle, $x_{i}, y_{i}$ as the input data, Can a deep/machine learning algorithm infer that radius of the circle is constant ? In other words, given the data $x_{i}, y_{i}$ is there way that algorithm discovers the constraint: $x_{i}^2 + y_{i}^2 = \text{constant}$ ?
+ +I would also appreciate any related reference on the subject.
+",21399,,,,,10/6/2021 21:06,Extracting algebraic constraints from the input data,I have data that are a result of rules that are exceptionless. I want to my program to 'look' at my data and figure out those rules. However, the data might contain what might look like an exception (rule within a rule) but that is too, true for all occasions e.g.
+ +All men of the dataset with x common characteristics go out for a beer on Thursday after work. That is true for all men with those characteristics. However, they will cancel their plans if their wife is sick. That last condition might initially look as an exception to the rule (go out for beer on Thursdays), but it is not as long as it is true for all men with those x characteristics.
+ +So the question is: Which approach/method would be suitable for this?
+",19393,,,,,1/13/2019 20:34,How can I model regularity?,What this is talking about is how much a machine learning algorithm is good at ""memorizing"" the data. Decision trees, for their nature, tend to overfit very easily, this is because they can separate the space along very non-linear curves, especially if you get a very deep tree. Simpler algorithms, on the other hand, tend to separate the space along linear hyper surfaces, and therefore tend to under-fit the data and may not give very good prediction, but may behave better on new unseen data which is very different from the training data.
+",177,,,,,1/14/2019 1:42,,,,0,,,,CC BY-SA 4.0 +9973,1,9995,,1/14/2019 5:38,,10,820,"I was wondering if machine learning algorithms (CNNs?) can be used/trained to differentiate between small differences in details between images (such as slight differences in shades of red or other colours, or the presence of small objects between otherwise very similar images?)? And then classify images based on these differences? If this is a difficult endeavour with our current machine learning algorithms, how can it be solved? Would using more data (more images) help?
+ +I would also appreciate it if people could please provide references to research that has focused on this, if possible.
+ +I've only just begun learning machine learning, and this is something that I've been wondering from my research.
+",16521,,2444,,5/30/2020 12:40,5/30/2020 12:46,Can machine learning algorithms be used to differentiate between small differences in details between images?,How do you distinguish between a complex and a simple model in machine learning? Which parameters control the complexity or simplicity of a model? Is it the number of inputs, or maybe the number of layers?
+Moreover, when should a simple model be used instead of a complex one, and vice-versa?
+",7681,,2444,,9/20/2020 10:44,9/20/2020 10:44,How do you distinguish between a complex and a simple model in machine learning?,If you want to find a proper architecture for your model, you can use the NAS (neural architecture search) methods instead of running some naive models to find a model and involving to decide which model is more complex or simpler. Some methods which used in NAS to find a proper architecture are:
+ +I am reading about CANN. However, I do not seem to grasp what it is. Maybe someone who has worked with it can explain it? I found out about it while reading about RatSLAM. I understand that it helps to keep long/short term memory.
+",14863,,2444,,4/12/2022 8:39,4/12/2022 8:39,What is a continuous-attractor neural network?,What are the current NLP/NLU techniques that can extract metaphors from texts?
+For example
+++His words cut deeper than a knife.
+
Or a simpler form like:
+++",21415,,2444,,1/15/2021 0:30,7/22/2021 22:15,How to recognise metaphors in texts using NLP/NLU?,Life is a journey that must be travelled no matter how bad the roads and accommodations.
+
I was thinking of something of the sort:
+ +Build a program (call this one fake user) that generates lots and lots and lots of data based on the usage of another program (call this one target) using stimuli and response. For example, if the target is a minesweeper, the fake user would play the game a carl sagan number of times, as well as try to click all buttons on all sorts of different situations, etc...
run a machine learning program (call this one the copier) designed to evolve a code that works as similar as possible to the target.
kablam, you have a ""sufficiently nice"" open source copy of the target.
Is this possible?
+ +Is something else possible to achieve the same result, namely, to obtain a ""sufficiently nice"" open source copy of the original target program?
+",20976,,2444,,6/28/2019 16:54,2/10/2023 21:42,Is it possible to use AI to reverse engineer software?,Remarkably, more or less the scenario you describe is not only feasible and has already been demonstrated (detailed explanation and fascinating videos at link).
+ +However, the fidelity of the copy is currently quite limited:
+
So for now, your copy will be quite low quality. However, there is a big exception to this rule: if the software you are copying is itself based on machine learning, then you can probably make a high-quality copy quite cheaply and easy, as I and my co-authors explain in this short article.
+ +Interesting question and I'm quite sure that the correct answer will change rapidly over the next few years.
+",17770,,,,,1/14/2019 13:10,,,,10,,,,CC BY-SA 4.0 +9986,2,,9982,1/14/2019 13:58,,3,,"This is still a research topic in linguistics. A quick google search brings up a couple of papers that might be useful:
+However, you probably won't get an off-the-shelf tool that recognises metaphors for you.
+To add more details, the problem with metaphors is that you cannot detect them by surface structure alone. Any sentence could (in theory) be a metaphor. This is different from a simile, which can usually be spotted easily through the word like, as in she runs like the wind. Obviously, like on its own is not sufficient, but it's a good starting point to identify possible candidates.
+However, his words cut deeper than a knife is -- on the surface -- a normal sentence. Only the semantic incongruence between words as the subject and cut as the main verb creates a clash. In order to detect this automatically, you need to identify possible semantic features of the verbal roles and look for violations of the expected pattern.
+The verb cut would generally expect an animate object, preferably human, or an instrument with a blade (the knife cuts through the butter) as its actor or subject. But it also can include (water)ways: the canal cuts through the landscape, the road cuts through the field. The more closely you look, the more exceptions/extensions you will find for your initial assumption.
+And every extension/exception will water down the accuracy of your metaphor detection algorithm.
+The second example is similar: Life is a journey. You could perhaps use a thesaurus and see what the hyperonyms of life are. Then you could do the same with journey, and see if they are compatible. A car is a vehicle is not a metaphor, because vehicle is a hyperonym of car. But journey is not a hyperonym of life, so could be a metaphor. But I would think that this is still very tricky to get right. In this case, the absence of a determiner might be a hint, as it's not a life is a journey -- you might restrict yourself to bare nouns for this type of metaphor. But this is also not a firm rule.
+In short, it is a hard problem, as you need to look at the meaning, rather than just the structure or word choice. And meaning is not easy to deal with in NLP, despite decades of work on it.
+",2193,,2444,,1/15/2021 0:27,1/15/2021 0:27,,,,0,,,,CC BY-SA 4.0 +9987,1,9988,,1/14/2019 16:03,,3,883,"I want to start a project for my artificial intelligence class about speaker recognition. Basically, I want to train my AI to detect if it's me who's speaking or somebody else. I would like some suggestions or libraries to work with.
+",21421,,16229,,1/21/2019 20:58,10/18/2019 23:01,Training an AI to recognize my voice (or any voice),The human voice is based on the neural muscular control of vocal apparatus made up of many parts.
+ +These coordinated muscular manipulations produce envelopes (controlling) of audio that can be characterized by periodic and transient wave forms.
+ +Voices are unique to the learning state of neural activity and anatomic attributes, which is a way of saying that vocal habits and the physical attributes of the voice supports the distinguishing of vocal identity.
+ +The detection of distinguishing features of voices by the ear is equally complex. In a room full of people talking, the brain can learn to track a single voice.
+ +It is important to note that performing voice recognition to determine the identity of the human source is significantly different than performing voice recognition to produce text. To produce text accurately, the NLP must determine language elements and construct a semantic network that represents the vocal content or a text from that representation to be accurate in the case of like sounding words. Fortunately, the identification of the speaker is easier in some ways than the accurate voice to text. Unfortunately, the identification of the speaker has general limitations discussed below.
+ +The first stage of hearing in the ear is mechanical, involving the length of hairs along the cochlear surface, which is like a radio tuner that discriminates all frequencies within a range simultaneously. The software equivalent is a spectrum derived by applying a root mean square to the result of an FFT (fast Fourier transform) to provide magnitudes.
+ +$$ m_f := \sqrt{t_f^2 + {(it_f)}^2} $$
+ +The phase component of the FFT results ($\, \arctan(t, it) \,)$ can be discarded, since it is not correlated with neural control of voice.
+ +The application of the FFT to speech (as with any changing audio) requires windowing over the audio samples using one of the windowing tapers, such as the Hann window or Blackman window. The input is the audio stream or file contents as a sequence of pressure samples, the audio. The output is a sequence of spectra, each containing the volume of each frequency in the vocal range, from about 30 Hertz to 15 K Hertz.
+ +This series of spectra can be fed into the initial layer of one of the more advanced RNNs (recurrent neural networks), such as the LSTM (long short term memory) networks, its bidirectional version, the B-LSTM, or a GRU (gated recurrent network), which is touted as training equally well with less time or computing resource consumption.
+ +The identity of the speaker is the label. The series of spectra are the features.
+ +Using the PAC (probably approximately correct) learning framework, it may be possible to estimate, in advance of experimentation, the minimum number of words the speaker must speak to produce a particular accuracy and reliability in use of the learned parameters from the network training.
+ +It will take some study to set up the hyper-parameters and design the layers of the network in terms of depth (number of layers) and width sequence (number of cells per layer, which may vary from layer to layer).
+ +The use case limitation of this system is that each speaker must read some text that provides adequate training example sequences of adequate length, so that there are sufficient number of overlapping windows for the FFT to transform into spectra so that the training converges reasonably.
+ +There is no way around the individual user training as there is with recognition of linguistic content, which can be trained across a large set of speakers to recognize content somewhat independent of the speaker. The system can be adjusted and improved to minimize the amount of speech required, but information theory constraints keep that quantity from ever approaching zero.
+ +No network, whether artificial or biological, can learn something from nothing. Claude Shannon and John von Neumann realized decades ago that there is a kind of conservation of information, just as there is a conservation of matter and energy in space below nuclear reaction thresholds. This led to the definition of a bit and the formulation of information as a quantity of bits corresponding to a narrowing of probability that the information provides.
+ +$$ b_i = - \log_2 {\frac {P(x|i)} {P(x)}} $$
+",4302,,4302,,1/14/2019 18:46,1/14/2019 18:46,,,,0,,,,CC BY-SA 4.0 +9990,1,,,1/14/2019 18:11,,2,2417,"I was trying to understand the loss function of GANs, but I found a little mismatch between different papers.
+This is taken from the original GAN paper:
+++The adversarial modeling framework is most straightforward to apply when the models are both multilayer perceptrons. To learn the generator's distribution $p_{g}$ over data $\boldsymbol{x}$, we define a prior on input noise variables $p_{\boldsymbol{z}}(\boldsymbol{z})$, then represent a mapping to data space as $G\left(\boldsymbol{z} ; \theta_{g}\right)$, where $G$ is a differentiable function represented by a multilayer perceptron with parameters $\theta_{g} .$ We also define a second multilayer perceptron $D\left(\boldsymbol{x} ; \theta_{d}\right)$ that outputs a single scalar. $D(\boldsymbol{x})$ represents the probability that $\boldsymbol{x}$ came from the data rather than $p_{g}$. We train $D$ to maximize the probability of assigning the correct label to both training examples and samples from $G$. We simultaneously train $G$ to minimize $\log (1-D(G(\boldsymbol{z})))$ :
+In other words, $D$ and $G$ play the following two-player minimax game with value function $V(G, D)$ :
+
$$ +\min _{G} \max _{D} V(D, G)=\mathbb{E}_{\boldsymbol{x} \sim p_{\text {data }}(\boldsymbol{x})}[\log D(\boldsymbol{x})]+\mathbb{E}_{\boldsymbol{z} \sim p_{\boldsymbol{z}}(\boldsymbol{z})}[\log (1-D(G(\boldsymbol{z})))] +$$
+Equation (1) in this version of the pix2pix paper
+++The objective of a conditional GAN can be expressed as +$$ +\begin{aligned} +\mathcal{L}_{c G A N}(G, D)=& \mathbb{E}_{x, y}[\log D(x, y)]+\\ +& \mathbb{E}_{x, z}[\log (1-D(x, G(x, z))], +\end{aligned} +$$ +where $G$ tries to minimize this objective against an adversarial $D$ that tries to maximize it, i.e. $G^{*}=$ $\arg \min _{G} \max _{D} \mathcal{L}_{c G A N}(G, D)$.
+To test the importance of conditioning the discriminator, we also compare to an unconditional variant in which the discriminator does not observe $x$ : +$$ +\begin{aligned} +\mathcal{L}_{G A N}(G, D)=& \mathbb{E}_{y}[\log D(y)]+\\ +& \mathbb{E}_{x, z}[\log (1-D(G(x, z))] . +\end{aligned} +$$
+
Putting aside the fact that pix2pix is using conditional GAN, which introduces a second term $y$, the 2 formulas are quite resemble, except that in the pix2pix paper, they try to get minimax of ${\cal{L}}_{cGAN}(G, D)$, which is defined to be $E_{x,y}[...] + E_{x,z}[...]$, whereas in the original paper, they define $\min\max V(G, D) = E[...] + E[...]$.
+I am not coming from a good math background, so I am quite confused. I'm not sure where the mistake is, but assuming that $E$ is expectation (correct me if I'm wrong), the version in pix2pix makes more sense to me, although I think it's quite less likely that Goodfellow could make this mistake in his amazing paper. Maybe there's no mistake at all and it's me who do not understand them correctly.
+",3098,,2444,,12/9/2021 9:25,12/9/2021 9:25,Mismatch between the definition of the GAN loss function in two papers,What is meant by both papers is that we have two agents (generator and discriminator) playing a game with the value function V
defined as a sum of the expectations (i.e. an expectation of the outcome value defined as a sum of two terms, or actually a logarithm of a product...). The generator uses a strategy G
encoded in the parameters of its neural network (θg
), the discriminator uses a strategy D
encoded in the parameters of its neural network (θd
). Our goal is to (hopefully) find such a pair of strategies (a pair of parameter sets θgmin
and θdmax
) that produce the minimax value.
While trying to find the (θgmin
, θdmax
) pair using gradient descent, we actually have two loss functions: one is the loss function for G
, parameterized by θg
, another is the loss function for D
, parameterized by θd
, and we train them alternatively on minibatches together.
If you look at the Algorithm 1 in the original paper, the loss function for the discriminator is -log(D(x; θd)) - log(1 - D(G(z); θd)
, and the loss function for the generator is log(1 - D(G(z; θg))
(in both cases, in the original paper, x
is sampled from the reference data distribution and z
is sampled from noise):
The ideal value for the loss function of the discriminator is 0, otherwise it's greater than 0. The ""loss"" function of the generator is actually negative, but, for better gradient descent behavior, can be replaced with -log(D(G(z; θg))
, which also has the ideal value for the generator at 0. It is impossible to reach zero loss for both generator and discriminator in the same GAN at the same time. However, the idea of the GAN is not to reach zero loss for any of the game agents (this is actually counterproductive), but to use that ""double gradient descent"" to ""converge"" the distribution of G(z)
to the distribution of x
.
Attentive Recurrent Comparators (2017) by Pranav Shyam et al. is an interesting paper that helps to answer the question you're wondering, along with a blog post that helps to describe it in easier terms.
+ +The way it's implemented is actually rather intuitive. If you have ever played a ""what is different"" game with two images usually what you'd do is look back and forth between the images to see what the difference is. The network that the researchers created does just that! It looks at one image and then remembers important features about that images and looks at the other image and goes back and forth.
+",17408,,2444,,5/30/2020 12:32,5/30/2020 12:32,,,,0,,,,CC BY-SA 4.0 +9996,1,,,1/15/2019 4:16,,2,98,"I am trying to generate a model that uses several physicochemical properties of a molecule (including number of atoms, number of rings, volume, etc.) to predict a numeric value $Y$. I would like to use PLS Regression, and I understand that standardization is very important here. I am programming in Python, using scikit-learn.
+ +The type and range for the features varies. Some are int64 while others are floating point numbers. Some features generally have small (positive or negative) values, while others have a very large value. I have tried using various scalers (e.g. standard scaler, normalize, min-max scaler, etc.). Yet, the R2/Q2 are still low.
+ +I have a few questions:
+ +Is it possible that by scaling, some of the very important features lose their significance, and thus contribute less to explaining the variance of the response variable?
If yes, if I identify some important features (by expert knowledge), is it OK to scale other features but those? Or scale the important features only?
Some of the features, although not always correlated, have values that are in a similar range (e.g. 100-400), compared to others (e.g. -1 to 10). Is it possible to scale only a specific group of features that are within the same range?
Cartesian Bias and Pipeline Efficiency
+ +You are experiencing a techno-cultural artifact of Cartesian-centric imaging running all the way back to the dawn of coordinate systems. It is the momentum of practice as a consequence of applying Cartesian 2D coordinates to rasterize images appearing at the focal planes of lenses from the dawn of television and the earliest standards of raster based capture and display.
+ +Although some work was done toward adding tilt to bounding rectangles in the late 1990s and since, from a time and computing resource conservation perspective, it is computationally and programmatically less costly to include the four useless triangles of pixels and keep the bounding box orthogonal with the pixel grid.
+ +Adding a tilt angle to the bounding boxes is marginally competitive when detecting ships from a satellite only because two conditions offset the inefficiencies in that narrow domain. The ship appears as an oblong rectangle with rounded corners from a satellite positioned in geosynchronous orbit. In the general case, adding a tilt angle can slow recognition significantly.
+ +Biology Less Biased
+ +An interesting side note is that the neural networks of animal and human vision systems do not have that Cartesian-centricity, but that doesn't help this question's solution, since non-orthogonal hardware and software is virtually nonexistent.
+ +Early Non-Cartesian Research and Today's Rasterization
+ +Gerber Scientific Techonology research and development in the 1980s (South Windsor, Connecticut, U.S.) investigated vector capture, storage, and display, but the R&D was not financially sustainable for a mid-side technology corporation for the reasons above.
+ +What remains, because it is economically viable and necessary from an animation point of view, is rasterization on the end of the system that converts vector models into frames of pixels. We see this in on the rendering SVG, VRML, and the original intent of CUDA cores and other hardware rendering acceleration strategies and architectures.
+ +On the object and action recognition side, the support of vector models directly from imaging is much less developed. This has not been a major stumbling block for computer vision because the wasted pixels at one tilt angle may be of central importance at another tilt angle, so there are no actual wasted input pixels if the centering of key scene elements is widely distributed in translation and tilt, which is often the case in real life (although not so much in hygienically pre-processed datasets).
+ +Conventions Around Object Minus Camera Tilt and Skew from Parallax
+ +Once edge detection, interior-versus-exterior, and 3D solid recognition come into play, the design of CNN pipelines and the way kernels can do radial transformation without actually requiring $\; \sin, \, \cos, \, \text{and} \, \arctan \;$ functions evaporate the computational burden of the Cartesian nature of pixel tensors. The end result is that the bounding box being orthogonal to the image frame is not as problematic as it initially appears. Efforts to conserve the four triangles of pixels and pre-process orientation is often a wasted effort by a gross margin.
+ +Summary
+ +The bottom line is that efforts to produce vector recognition from roster inputs have been significantly inferior in terms of resource and wait time burden, with the exception of insignificant gains in the narrow domain of naval reconnaissance satellite images. Trigonometry is expensive, but convolution kernels, especially now that they are moving from software into hardware accelerated computing paths in VLSI, is computable at lower costs.
+ +Past and Current Work
+ +Below is some work that deals with tilting with regard to objects and the effects of parallax in relation to the Cartesian coordinate system of the raster representation. Most of the work has to do with recognizing 3D objects in a 3D coordinate system to project trajectories and pilot or drive vehicles rationally on the basis of Newtonian mechanics.
+ +Efficient Collision Detection Using Bounding Volume Hierarchies of k-DOPs, James T. Klosowski, Martin Held, Joseph S.B. Mitchell, Henry Sowizral, and Karel Zikan, 1998
+ +Sliding Shapes for 3D Object Detection in Depth Images, Shuran Song and Jianxiong Xiao, 2014
+ +Amodal Completion and Size Constancy in Natural Scenes, Abhishek Kar, Shubham Tulsiani, Joao Carreira and Jitendra Malik, 2015
+ +HMD Vision-based Teleoperating UGV and UAV for Hostile +Environment using Deep Learning, Abhishek Sawarkar1, Vishal Chaudhari, Rahul Chavan, Varun Zope, Akshay Budale and Faruk Kazi, 2016
+ +Ship rotated bounding box space for ship extraction from high-resolution optical satellite images with complex backgrounds, Z Liu, H Wang, L Weng, Y Yang, 2016
+ +Amodal Detection of 3D Objects: +Inferring 3D Bounding Boxes from 2D Ones in RGB-Depth Images, Zhuo Deng, 2017
+ +3D Pose Regression using Convolutional Neural Networks, Siddharth Mahendran, 2017
+ +Aerial Target Tracking Algorithm Based on Faster R-CNN Combined with Frame Differencing, Yurong Yang, Huajun Gong, Xinhua Wang and Peng Sun, 2017
+ +A Semi-Automatic 2D solution for Vehicle Speed Estimation from Monocular Videos, Amit Kumar, Pirazh Khorramshahi, Wei-An Lin, Prithviraj Dhar, Jun-Cheng Chen, Rama Chellappa, 2018
+",4302,,4302,,1/16/2019 9:10,1/16/2019 9:10,,,,0,,,,CC BY-SA 4.0 +9998,2,,8885,1/15/2019 6:23,,4,,"The key is: VAE usually use a small latent dimension, the information of input is so hard to pass through this bottleneck, meanwhile it tries to minimize the loss with the batch of input data, you should know the result -- VAE can only have a mean and blurry output.
+ +If you increase the bandwidth of the bottleneck, i.e. the size of latent vector, VAE can get a high reconstruction quality, e.g. Spatial-Z-VAE
+",21409,,,,,1/15/2019 6:23,,,,1,,,,CC BY-SA 4.0 +9999,2,,7215,1/15/2019 6:52,,3,,"Principles of Computational Modelling in Neuroscience by David Sterratt, Bruce Graham, Andrew Gillies and David Willshaw discuss it in Chapter 7 (The synapse) and also in Chapter 8 (Simplified models of neurons). Especially in chapter 8, they discuss how to add excitatory or inhibitory synapses to integrate and fire neuron.
+ +There are various ways to add inhibitory synapse: either substrate voltage, inject negative current.
+",21436,,2444,,5/23/2020 18:28,5/23/2020 18:28,,,,0,,,,CC BY-SA 4.0 +10000,2,,1987,1/15/2019 7:59,,5,,"By cheating... theta
is $\arctan(y,x)$, $r$ is $\sqrt{(x^2 + y^2)}$.
In theory, $x^2$ and $y^2$ should work, but, in practice, they somehow failed, even though, occasionally, it works.
+ + +",21439,,2444,,2/26/2019 17:24,2/26/2019 17:24,,,,2,,,,CC BY-SA 4.0 +10001,2,,9838,1/15/2019 8:09,,2,,"There is some confusion between reinforcement and convergence in this question.
+ +The XOR problem is of interest in a historical context because the reliability of gradient descent is identity (no advantage over an ideal coin toss) for a single layer perceptron when the data set is are the permutations representing the Boolean XOR operation. This is an information theory way of saying a single layer perceptron can't be used to learn arbitrary Boolean binary operations, with XOR and XAND as counterexamples where convergence is not only not guaranteed but productive of functional behavior only by virtue of luck. That is why the MLP was an important extension of the perceptron design. It can be reliably taught an XOR operation.
+ +Search results for images related to deep reinforced learning provide a survey of design diagrams representing the principles involved. We can note that the use case for a reinforcement learning application is distinctly different from that of MLPs and their derivatives.
+ +Parsing the term and recombining to produce the conceptual frameworks that were originally combined to produce DRL, we have deep learning and reinforcement learning. Deep learning is really a set of techniques and algorithmic refinements for the combination of artificial network layers into more successful topologies that perform useful data center tasks. Reinforcement learning is
+ +Sutton states in his slides for the University of Texas (possibly there to get away from the Alberta winters), ""RL is learning to control data."" His is an overly broad definition, since MLPs, CNNs, and GRU networks all learn a function which is controlling data processing when the learned parameters are then leveraged in their intended use cases. This is where the perspective of the question may be based on the misinformative nature of these excessively broad definitions.
+ +The distinction of reinforced learning is the idea that a behavior can be reinforced during use. There may be actual parallel reinforcement of beneficial behavior (as in more neurologically inspired architectures) or learning may occur in a time slicing operating system and share the processing hardware with processes that use what is learned (as in Q-learning algorithms and their derivatives).
+ +Some define RL as machine learning technique that direct the selection of actions along a path of behavior such that some cumulative value of the consequences of actions take is maximized. That may be an excessively narrow definition, biased by the popularity of Markov processes and Q-learning.
+ +This is the problem with the perspective expressed in the question. An XOR operation is not an environment through which a path can be blazed.
+ +If one were to construct an XOR maze, where the initial state is undefined and the one single action is to fall into either the quadrant 10 or 01, it is still not representing an XOR because the input was not a Boolean vector
+ +$\vec{B} \in \mathbb{B}^2 \; \text{,}$
+ +and the output is not a 1 or 0 resulting from XOR operation, as would be the case for a multilayer perceptron learning of XOR operation. There is no cumulative reward. If there was no input and the move was to divide in half and chose both 10 or 01 because their reward was higher than 00 or 11, then that might be considered a reinforcement learning scenario, but it would be an odd one.
+ +That the described setup leads to, ""Getting stuck,"" is no surprise when the tool is a wrench for the turning of a screw.
+ +If the design looses the reinforcement and the artificial network is reduced to a two layer perceptron, the convergence will be guaranteed given a labeled data set of sufficient size or an unsupervised arrangement where the loss function is simply the evaluation of whether the result is XOR.
+ +To experiment with reinforced learning, the agent must interact with the environment and make choices that have value consequences that direct subsequent behavior. Boolean expressions are not of this nature, no matter how complex.
+",4302,,,,,1/15/2019 8:09,,,,0,,,,CC BY-SA 4.0 +10003,1,,,1/15/2019 10:08,,2,138,"Is there any way to control the extraction of features? How do I determine which features are been learned during training, i.e relevant information is been learned or not?
+",21441,,2444,,5/18/2020 10:24,5/19/2020 4:05,How do I determine which relevant features have been learned during training in a CNN?,It exists networks built to learn how to differentiate between classes even if there are looking quite the same. Usually, a triplet loss is used in those networks to learn the difference between the target, a positive sample, and a negative one.
+ +For example, those networks are used to perform identity check with face images, the algorithm learns the differences between different people instead of recognizing people.
+ +Here are some keywords that are possibly relevant: discriminative function, triplet loss, siamese network, one-shot learning.
+ +Theses papers are interesting:
+ +Let's say I want to model purchase data (i.e. purchase records of many households across time). For simplicity, let's assume each household only picks one alternative at the time. A simple starting point is a multinomial logit model. Then, more flexible network architectures could be used. People have applied NN to this, but kept the number of alternatives (K) constant. In reality, the number of available options changes over time. Also, it would be interesting to predict how choices change when the number of alternatives is changed.
+ +in bullet points
+ +Any guidance or ideas?
+",21451,,,,,1/15/2019 16:04,NN: Predicting choices when number of alternatives changes,The image is one of many similar exam questions can anyone pelase help me understand it fully?
+ +'Internal node': This is simply every node except A?
+ +Move choices: His only options are B, C and D for this move?
+ +Focusing on B: E=8 F=4 and G are all opponent responses, therefore they will pick the minimum value.
+ +Now my confusion, are M N and P your known responses in the case the opponent picks G, so you should pick M=0 (the highest value), so then G gets passed 0 which the opponent should choose so B has a h-value of 0?
+ +Are the correct value then B=0, C=1 and D=2 so pick D as next move?
+",21459,,,,,1/15/2019 19:28,Can't grasp MiniMax diagram (no alpha beta pruning),Recurrent Neural Networks (RNN) With Attention Mechanism is generally used for Machine Translation and Natural Language Processing. In Python, implementation of RNN With Attention Mechanism is abundant in Machine Translation (For Eg. https://talbaumel.github.io/blog/attention/, however what I would like to do is to use RNN With Attention Mechanism on a temporal data file (not any textual/sentence based data). I have a CSV file with of dimensions 21000 x 1936, which I have converted to a Dataframe using Pandas. The first column is of Datetime Format and last column consists of target classes like ""Class1"", ""Class2"", ""Class3"" etc. which I would like to identify. So in total, there are 21000 rows (instances of data in 10 minutes time-steps) and 1935 features. The last (1936th column) is the label column.
+ +It is predominant from existing literature that an Attention Mechanism works quite well when coupled into the RNN. I am unable to locate any such implementation of RNN with Attention Mechanism, which can also provide a visualisation as well. Any help in this regard would be highly appreciated. Cheers!
+",21460,,21460,,1/15/2019 20:10,4/25/2019 17:46,How to use RNN With Attention Mechanism on Non Textual Data?,In addition to the points already listed in John's answer, some factors that can help to reduce / mitigate the risk of overfitting to commonly-used benchmarks as a research community are:
+ +Competitions with instances of problems hidden from entrants: as far as I'm aware this is particularly popular in game AI (see the General Game Playing competition and General Video Game Playing competitions). The basic idea is that submissions should be able to tackle a relatively broad class of problems (playing any game defined in a specified format, or generating levels for any video game with rules described in a specific format, etc.). To some extent, using a large suite of problems as a standard benchmark (such as the large collection of Atari games supported by ALE) also fits in with this idea, though there is value in hiding the problems that are ultimately used for testing from the people writing submissions. Of course, the idea is that entries submitted to these kinds of competitions will involve new research which may be published.
Using very simple toy problems: With simple I do not necessarily mean that they are simple to solve, but simple to describe / understand (it may still, for example, have a large state space and be difficult for current techniques to solve). Simple toy problems often help to test for a very specific ""skill"", and can more easily give insight into specifically why/when an algorithm may be expected to fail or succeed. Of course, large non-toy problems are also important to demonstrate ""real-world"" usefulness of algorithms, but they may often give less understanding / insight into an algorithm.
Theoretical work: Theoretical work can also give more insight and understanding of new algorithms. Algorithms with strong theoretical foundations are often more likely to generalize to a multitude of problem domains, assuming that the initial assumptions hold (big assumption here - there are plenty of cases where assumptions required for strong proofs do not hold!). This is not always possible / ""needed"", sometimes new research based purely on intuition and with relatively little theoretical foundations still turn out to work well (or theory is only developed after promising empirical results)... but it can certainly help. Theoretical work can take many different forms, such proofs of convergence (often under strict conditions), proofs for upper or lower bounds on important measures (such as regret, or probability of making a ""wrong"" choice, etc.), proofs that an algorithm or a problem is a more general or more specific case of an existing, well-understood algorithm or problem, proofs that a model has or does not have a certain representational capacity, proofs of algorithmic equivalence (that an algorithm computes exactly the same quantities as another well-understood algorithm, typically with lower computation and/or memory requirements), etc.
What is ""bad local minima""?
+ +The following papers all mention this expression.
+ +There are methods called ""scoring systems"" where you give a image scores such as ""0.9 stripes, 0.0 red, 0.8 hair, ..."" and use those scores to classify objects. It's an older idea, not used to determine if the network is learning. It's not in a standard CNN.
+ +To determine if relevant information is being learned or not, it's standard to use the testing accuracy, training accuracy, confusion matrix, or AUC.
+ +Determining what exactly a CNN is learning is a complicated research problem that's ongoing. In short - you can't really know. For a basic network, you can tell that it is learning something but not what it's actually using to make determinations.
+",21471,,21471,,2/4/2019 19:08,2/4/2019 19:08,,,,0,,,,CC BY-SA 4.0 +10017,2,,10013,1/16/2019 7:46,,0,,"As mentioned in the abstract of on of these papers, bad local minima is a suboptimal local minimum which means a local minimum that is near to a global minimum.
+",4446,,,,,1/16/2019 7:46,,,,0,,,,CC BY-SA 4.0 +10019,1,,,1/16/2019 9:20,,1,520,"I have this problem where I need to get information out of PDF document sent from a scanner. The program needs to be learnable in some way to recognize what different figures mean. Most of this should happen without human interference so it could just give a result after scanning the file. +Do anyone know if it's possible to do with a machine learning program or any alternative way?
+",21476,,,,,1/16/2019 10:36,"Could it be possible to detect text, symbols, and components directly in a scanned PDF file with a program like Tensorflow or another program?",Yes, that's possible. +I am working on a project in which I have to detect text in images. I did a quick search and found these two algorithms:
+ +1. EAST: (Efficient and Accurate Scene Text Detector)
+I am not sure if it is based on Machine Learning. Here are some links link1 link2 explaining how to use it with an example and using tesseract to extract the detected text.
2. CTPN: (Connectionist Text Proposal Network)
+This algorithm is based on Machine Learning. Here is its link in github. In the description, you will find a link to a pre-trained model that you can use. Or simply, you can prepare your own data and train your own model.
For me, I tried both of them, and the CTPN model gave better results especially when the image contains large text.
+",19059,,,,,1/16/2019 10:36,,,,4,,,,CC BY-SA 4.0 +10021,1,,,1/16/2019 11:28,,0,63,"For example, I have the following csv: training.csv
+I want to know how I can determine which column will be the best feature for getting the output prediction before I go for machine training.
+Please do share your responses
You should know your data 100%. That means knowing what each of your columns and rows represents (e.g. temperature column, humidity, rows representing days), the value units (e.g. Celsius or Fahrenheit?), accuracy, value format (strings or numbers). You may need to clean and reorganize the data if necessary to bring them to your desired form (e.g. change the structure, units, aggregating, etc).
+ +Then use your logic and experience to decide what columns are necessary. This is in general. I hope someone will give you a more specific answer.
+",21480,,,,,1/16/2019 11:41,,,,1,,,,CC BY-SA 4.0 +10025,1,10029,,1/16/2019 13:36,,1,127,"I have a Deep Feedforward Neural Network $F: W \times \mathbb{R}^d \rightarrow \mathbb{R}^k$ (where $W$ is the space of the weights) with $L$ hidden layers, $m$ neurones per layer and ReLu activation. The output layer has a softmax activation function.
+ +I can consider two different loss functions:
+ +$L_1 = \frac{1}{2} \sum_i || F(W,x_i) - y||^2$ $ + \ \ \ $ and $\ \ \ L_2 = -\sum_i log(F(w,x_i)_{y_i})$
+ +where the first one is the classic quadratic loss and the second one is cross entropy loss.
+ +I'd like to study the norm of the derivative of the loss function and see how the two are related, which means:
+ +1) Let's assume I know that $|| \frac{\partial L_2(W, x_i)}{\partial W}|| > r$, where $r$ is a small constant. What can I assume about $|| \frac{\partial L_1(W, x_i)}{\partial W}||$ ?
+ +2) Are there any result which tell you that, under some hypothesis (even strict ones) such as a specific random initialisation, $|| \frac{\partial L_1(W, x_i)}{\partial W}||$ doesn't go to zero during training?
+ +Thank you
+",21338,,21338,,1/16/2019 14:13,1/17/2019 4:00,Comparing and studying Loss Functions,Though there is no universal method which can be blindly used for all datasets, but here is what i usually do;
+ +Now look at the variance in each feature. Usually, features with more variance are more important.
Next, see the correlation among columns. If two columns are highly +correlated, you only need to keep only one.
There are many people trying to show how neural networks are still very different from humans, but I fail to see in what way human brains are different from neural models in anything but complexity.
+ +The way we learn is similar, the way we process information is similar, the ways we predict outcomes and generate outputs are similar. Give a model enough processing power, enough training samples, and enough time and you can train a human.
+ +So, what is the difference between human (brains) and neural networks?
+",20399,,2444,,5/17/2020 11:28,5/17/2020 11:28,What is the difference between human brains and neural networks?,Let's first express a network of arbitrary topology and heterogeneous or homogeneous cell type arrangements as
+ +$$ N(T, H, s) := \, \big[\, \mathcal{Y} = F(P_s, \, \mathcal{X}) \,\big] \\ + s \in \mathbb{C} \; \land \; s \le S \; \text{,} $$
+ +where $S$ is the number of learning states or rounds, $N$ is the network of $T$ topology and $H$ hyper-parameter structure and values that at stage $s$ produces a $P$ parameterized function $f$ of $\mathcal{X}$ resulting in $\mathcal{Y}$. In supervised learning, the goal is that $F(P_s)$ approaches a conceptually ideal function $F_i$ as $s \rightarrow S$.
+ +The popular loss aggregation norms are not quite as the question defines them. The below more canonically expresses the level 1 and 2 norms, which systematically aggregate multidimensional disparity between an intermediate result at some stage (epoch and example index) of training and the conceptual ideal toward which the network in training is intended to converge.
+ +$$ {||F-\mathcal{Y}||}_1 = \sum{|F_i - y_i|} \\ + {||F-\mathcal{Y}||}_2 = \sqrt{\sum{(F_i - y_i)}^2} $$
+ +These equations have been mutated by various authors to make various points, but those mutations have obscured the obviousness of their original relationship. The first is where distance can be aggregated through only orthogonal vector displacements. The second is where aggregation uses the minimum Cartesian distance by extending the Pythagorean theorem.
+ +Note that quadratic loss is a term with some ambiguity. These are all broadly describable as quadratic expressions of loss.
+ +Cross entropy is an extension of Claude Shannon's information theory concepts based on the work of Bohr, Boltzmann, Gibbs, Maxwell, von Neumann, Frisch, Fermi, and others who were interested in quanta and the thermodynamic concept of entropy as a universal principle running through mater, energy, and knowledge.
+ +$$ S = k_B \log{\Omega} \\ + H(X) = - \sum_i p(x_i) \, \log_2{\, p(x_i)} \\ + H(p, \, q) = -\sum_{x \in \mathcal{X}} \, p(x) \, \log_2{\, q(x)} $$
+ +In this progression of theory, we begin with a fundamental postulate in quantum physics, where $k_B$ is Boltzmann's constant and $\Omega$ are the number of microstates for the quanta. The next relation is Shannon's adaptation for information, where $H$ is the entropy in bits, thus the $\log_2$ instead of a natural logarithm. The third relation above expresses cross-entropy in bits for features $\mathcal{X}$ is based on the Kullback-Leibler divergence. the p-attenuated sum of bits of q-information in .
+ +Notice that $p$ and $q$ are probabilities, not $F$ or $\mathcal{Y}$ values, so one cannot substitute labels and outputs of a network into them and retain the meaning of cross entropy. Therefore level 1 and 2 norms are closely related, but cross-entropy is not a norm; it is the dispersion of one thing Cartesian distance aggregation scheme like them. Cross-entropy is remotely related but is statistically more sophisticated. To produce a cross-entropy loss function of form
+ +$$ {||F-\mathcal{Y}||}_H = \mathcal{P}(F, y) \; \text{,} $$
+ +one must derive the probabilistic function $\mathcal{P}$ that represents the cross entropy for two distributions in some way that is theoretically sound on the basis of both information theory and convergence resource conservation. It is not clear that the interpretation of cross entropy in the context of gradient descent and back propagation has caught up with the concepts of entropy in quantum theory. That's an area needing further research and deeper theoretical consideration.
+ +In the question, the cross-entropy expression is not properly characterized, most evident in the fact that the expression is independent of the labels $\mathcal{Y}$, which would be fine if for unsupervised learning except that no other basis for evaluation is represented in the expression. For the term cross-entropy to be valid, the basis for evaluation must include two distributions, a target one and one that represents the current state of learning.
+ +The derivatives of the three norms (assuming the cross entropy is properly characterized) can be studied for the case of $\ell$ ReLU layers by generalizing the chain rule (from differential calculus) as applied to ReLU and the loss function developed by applying each of the three norms to aggregate measures of disparity from optimal.
+ +Regarding the inference in sub-question (1) nothing of particular value can be assumed about the Jacobians of level 2 norms from level 1 norms, both with respect to parameters $P$ or vice versa, except the retention of sign. This is because we cannot determine much about the correlation between the output channels of the network.
+ +There is no doubt, regarding sub-question (2), that some constraint, set of constraints, stochastic distribution applied to initialization, hyper-parameter settings, or data set features, labels, or number of examples have implications for the reliability and accuracy of convergence. The PAC (probably approximately correct) learning framework is one system of theory that approaches this question with mathematical rigor. One of its practical uses, among others, is to derive inequalities that predict feasibility in some cases and produce more lucid approaches to learning system projects.
+",4302,,4302,,1/17/2019 4:00,1/17/2019 4:00,,,,0,,,,CC BY-SA 4.0 +10030,2,,10027,1/16/2019 16:04,,3,,"One incredibly important difference between humans and NNs is that the human brain is the result of billions of years of evolution whereas NNs were partially inspired by looking at the result and thinking ""... we could do that"" (utmost respect for Hubel and Wiesel).
+ +Human brains (and in fact anything biological really) have an embedded structure to them within the DNA of the animal. DNA has about 4 MB of data and incredibly contains the information of where arms go, where to put sensors and in what density, how to initialize neural structures, the chemical balances that drive neural activation, memory architecture, and learning mechanisms among many many other things. This is phenomenal. Note, the placement of neurons and their connections isn't encoded in dna, rather the rules dictating how these connections form is. This is fundamentally different from simply saying ""there are 3 conv layers then 2 fully connected layers..."". +There has been some progress at neural evolution that I highly recommend checking out which is promising though.
+ +Another important difference is that during ""runtime"" (lol), human brains (and other biological neural nets) have a multitude of functions beyond the neurons. Things like Glial cells. There are about 3.7 Glial cells for every neuron in your body. They are a supportive cell in the central nervous system that surround neurons and provide support for and insulation between them and trim dead neurons. This maintenance is continuous update for neural structures and allows resources to be utilized most effectively. With fMRIs, neurologists are only beginning to understand the how these small changes affect brains.
+ +This isn't to say that its impossible to have an artificial NN that can have the same high level capabilities as a human. Its just that there is a lot that is missing from our current models. Its like we are trying to replicate the sun with a campfire but heck, they are both warm.
+",4398,,,,,1/16/2019 16:04,,,,1,,,,CC BY-SA 4.0 +10032,1,10033,,1/16/2019 17:03,,2,785,"I tried to build a Q-learning agent which you can play tic tac toe against after training.
+Unfortunately, the agent performs pretty poorly. He tries to win but does not try to make me 'not winning' which ends up in me beating up the agent no matter how many loops I gave him for training. I added a reward of 1 for winning the episode and it gets a reward of -0.1 when he tries to put his label on an non-empty square (after the attempt we have s = s'). I also start with an epsilon=1 which decreases in every loop to add some more randomness at the beginning because I witnessed that some (important in my opinion) states did not get updated. Since I spend some hours of debugging without noticeable progress I'd like to know what you think.
+PS: Don't care about some print statements and count variables. Those where for debugging.
+Code here or on Github
+import numpy as np
+import collections
+import time
+
+Gamma = 0.9
+Alpha = 0.2
+
+
+class Environment:
+ def __init__(self):
+ self.board = np.zeros((3, 3))
+ self.x = -1 # player with an x
+ self.o = 1 # player with an o
+ self.winner = None
+ self.ended = False
+ self.actions = {0: (0, 0), 1: (0, 1), 2: (0, 2), 3: (1, 0), 4: (1, 1),
+ 5: (1, 2), 6: (2, 0), 7: (2, 1), 8: (2, 2)}
+
+ def reset_env(self):
+ self.board = np.zeros((3, 3))
+ self.winner = None
+ self.ended = False
+
+ def reward(self, sym):
+ if not self.game_over():
+ return 0
+ if self.winner == sym:
+ return 10
+ else:
+ return 0
+
+ def get_state(self,):
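+        # encode the 3x3 board as a single integer by reading the cells as base-3 digits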
+ k = 0
+ h = 0
+ for i in range(3):
+ for j in range(3):
+ if self.board[i, j] == 0:
+ v = 0
+ elif self.board[i, j] == self.x:
+ v = 1
+ elif self.board[i, j] == self.o:
+ v = 2
+ h += (3**k) * v
+ k += 1
+ return h
+
+ def random_action(self):
+        return np.random.choice(list(self.actions.keys()))
+
+ def make_move(self, player, action):
+ i, j = self.actions[action]
+ if self.board[i, j] == 0:
+ self.board[i, j] = player
+
+ def game_over(self, force_recalculate=False):
+ # returns true if game over (a player has won or it's a draw)
+ # otherwise returns false
+ # also sets 'winner' instance variable and 'ended' instance variable
+ if not force_recalculate and self.ended:
+ return self.ended
+
+ # check rows
+ for i in range(3):
+ for player in (self.x, self.o):
+ if self.board[i].sum() == player*3:
+ self.winner = player
+ self.ended = True
+ return True
+
+ # check columns
+ for j in range(3):
+ for player in (self.x, self.o):
+ if self.board[:, j].sum() == player*3:
+ self.winner = player
+ self.ended = True
+ return True
+
+ # check diagonals
+ for player in (self.x, self.o):
+ # top-left -> bottom-right diagonal
+ if self.board.trace() == player*3:
+ self.winner = player
+ self.ended = True
+ return True
+ # top-right -> bottom-left diagonal
+ if np.fliplr(self.board).trace() == player*3:
+ self.winner = player
+ self.ended = True
+ return True
+
+ # check if draw
+ if np.all((self.board == 0) == False):
+ # winner stays None
+ self.winner = None
+ self.ended = True
+ return True
+
+ # game is not over
+ self.winner = None
+ return False
+
+ def draw_board(self):
+ for i in range(3):
+ print("-------------")
+ for j in range(3):
+ print(" ", end="")
+ if self.board[i, j] == self.x:
+ print("x ", end="")
+ elif self.board[i, j] == self.o:
+ print("o ", end="")
+ else:
+ print(" ", end="")
+ print("")
+ print("-------------")
+
+
+
+
+class Agent:
+ def __init__(self, Environment, sym):
+ self.q_table = collections.defaultdict(float)
+ self.env = Environment
+ self.epsylon = 1.0
+ self.sym = sym
+ self.ai = True
+
+ def best_value_and_action(self, state):
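+        # greedy lookup: scan all nine actions and return the highest Q-value and its action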
+ best_val, best_act = None, None
+ for action in self.env.actions.keys():
+ action_value = self.q_table[(state, action)]
+ if best_val is None or best_val < action_value:
+ best_val = action_value
+ best_act = action
+ return best_val, best_act
+
+ def value_update(self, s, a, r, next_s):
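+        # tabular Q-learning update: Q(s,a) <- (1 - Alpha) * Q(s,a) + Alpha * (r + Gamma * max_a' Q(next_s, a'))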
+ best_v, _ = self.best_value_and_action(next_s)
+ new_val = r + Gamma * best_v
+ old_val = self.q_table[(s, a)]
+ self.q_table[(s, a)] = old_val * (1-Alpha) + new_val * Alpha
+
+ def play_step(self, state, random=True):
+ if random == False:
+ epsylon = 0
+ cap = np.random.rand()
+ if cap > self.epsylon:
+ _, action = self.best_value_and_action(state)
+ else:
+ action = np.random.choice(list(self.env.actions.keys()))
+ self.epsylon *= 0.99998
+ self.env.make_move(self.sym, action)
+ new_state = self.env.get_state()
+ if new_state == state and not self.env.ended:
+ reward = -5
+ else:
+ reward = self.env.reward(self.sym)
+ self.value_update(state, action, reward, new_state)
+
+
+class Human:
+ def __init__(self, env, sym):
+ self.sym = sym
+ self.env = env
+ self.ai = False
+
+ def play_step(self):
+ while True:
+ move = int(input('enter position like: \n0|1|2\n------\n3|4|5\n------\n6|7|8'))
+ if move in list(self.env.actions.keys()):
+ break
+ self.env.make_move(self.sym, move)
+
+
+
+def main():
+ env = Environment()
+ p1 = Agent(env, env.x)
+ p2 = Agent(env, env.o)
+ draw = 1
+ for t in range(1000005):
+
+ current_player = None
+ episode_length = 0
+ while not env.game_over():
+ # alternate between players
+ # p1 always starts first
+ if current_player == p1:
+ current_player = p2
+ else:
+ current_player = p1
+
+ # current player makes a move
+ current_player.play_step(env.get_state())
+
+ env.reset_env()
+
+ if t % 1000 == 0:
+ print(t)
+ print(p1.q_table[(0, 0)])
+ print(p1.q_table[(0, 1)])
+ print(p1.q_table[(0, 2)])
+ print(p1.q_table[(0, 3)])
+ print(p1.q_table[(0, 4)])
+ print(p1.q_table[(0, 5)])
+ print(p1.q_table[(0, 6)])
+ print(p1.q_table[(0, 7)])
+ print(p1.q_table[(0, 8)])
+ print(p1.epsylon)
+
+ env.reset_env()
+ # p1.sym = env.x
+
+ while True:
+ while True:
+ first_move = input("Do you want to make the first move? y/n :")
+ if first_move.lower() == 'y':
+ first_player = Human(env, env.x)
+ second_player = p2
+ break
+ else:
+ first_player = p1
+ second_player = Human(env, env.o)
+ break
+ current_player = None
+
+ while not env.game_over():
+ # alternate between players
+ # p1 always starts first
+ if current_player == first_player:
+ current_player = second_player
+ else:
+ current_player = first_player
+ # draw the board before the user who wants to see it makes a move
+
+ if current_player.ai == True:
+ current_player.play_step(env.get_state(), random=False)
+ if current_player.ai == False:
+ current_player.play_step()
+ env.draw_board()
+ env.draw_board()
+ play_again = input('Play again? y/n: ')
+ env.reset_env()
+ # if play_again.lower != 'y':
+ # break
+
+
+if __name__ == "__main__":
+ main()
+
+",21487,,2444,,10/31/2020 17:20,10/31/2020 17:20,Why isn't my Q-Learning agent able to play tic-tac-toe?,The $Q$-learning rule that you have implemented updates $Q(S_t, A_t)$ estimates as follows, after executing an action $A_t$ in a state $S_t$, observing a reward $R_t$, and reaching a state $S_{t+1}$ as a result:
+ +$$Q(S_t, A_t) \gets (1 - \alpha) Q(S_t, A_t) + \alpha (R_t + \gamma \max_a Q(S_{t+1}, a))$$
+ +The implementation seems to be correct for the traditional setting for which $Q$-learning is normally described; single-agent MDPs. The problem is that you have a multi-agent setting, in which $Q$-learning is not always directly applicable.
+ +Now, as far as I can see from a very quick glance at your code, it seems like you actually already have taken some important steps towards allowing it to work, and I think it should be quite close to almost working (at least for a simple game like Tic-Tac-Toe) already. Important things that you appear to already be doing correctly:
+ +I think the major issue that remains to be solved is in how you define the subsequent state $S_{t+1}$ after making a move in a state $S_t$.
+ +The update target that the $Q$-learning update rule moves its $Q$-value estimates towards consists of two components:
+ +The problem is that, in your implementation, $S_{t+1}$ is a state in which the opponent is allowed to make the next move $a$, rather than the RL agent. This means that $\max_a Q(S_{t+1}, a)$ is an incredibly optimistic, naive, unrealistic estimate of future returns. In fact, $\min_a Q(S_{t+1}, a)$ would be a much more realistic estimate (against an optimally-playing opponent), because the opponent gets to pick the next action $a$.
+ +I think switching in $\min_a Q(S_{t+1}, a)$ rather than the $\max$ may have a good chance of working in this scenario, but I'm not 100% sure. It wouldn't be a ""pretty"" solution though, since you'd no longer be doing $Q$-learning, but something else altogether.
+ +The proper $Q$-learning update may work well if you only present states to agents in which they're actually allowed to make the next move in the update rule. Essentially, you'd be plugging $\max_a Q(S_{t + 2}, a)$ into the update rule, replacing $S_{t+1}$ with $S_{t+2}$. Well... that's what you'd be doing in most cases. The only exception to be aware of would be terminal states. If an agent makes a move that leads to a terminal state, you should make sure to also run an additional update for that agent with the terminal game state $S_{t+1}$ (where $Q(S_{t+1}, a)$ will always be $0$ for any action $a$ if $S_{t+1}$ is terminal).
+ +For a very closely related question, where I essentially provided an answer in the same spirit, see: How to see terminal reward in self-play reinforcement learning?
+",1641,,1641,,1/16/2019 18:18,1/16/2019 18:18,,,,6,,,,CC BY-SA 4.0 +10034,1,,,1/16/2019 18:14,,1,60,"I'm a programmer with a background in mathematics, but I have no experience whatsoever with artificial intelligence/neural networks. I'd like to study it as a hobby, and my goal for now is to solve the following simple poker game, by letting the program play against itself:
+ +We have two players, each with a certain number of chips. At the start of the game, they are obligated to put a certain amount of chips in the pot. Then they each get a random real number between 0 and 10. They know their own number, but not the one of their opponent. Then we have one round of betting. The first player puts additional chips in the pot (some number between 0 and their stack size). The second player can either fold (put no additonal chips in the pot, 1st player gets the entire pot), call (put the same number of chips in the pot, player with highest number gets the pot) or raise (put even more chips in the pot, action back on 1st player). There is no limit to the amount of times a player can raise, as long as he still has chips behind to raise.
+ +I have several questions: +- Is this indeed a problem that can be solved with neural networks? +- What do you recommend me to study in order to solve this problem? +- Is it feasible to solve this game when allowing for continuous bet/raise sizes? Or should I limit it to a few options as a percentage of the pot? +- Do you expect it to be possible to get close to an equilibrium with one nightly run on an 'average' laptop?
+",21488,,1671,,1/16/2019 21:27,1/16/2019 21:27,What to study for this simple poker game?,I have source data that can be represented as a 2D image of many similar curves. They may oftentimes cross over one another, so regions of interest will overlap.
+ +My goal is to implement a neural network solution to identify each instance or the curves and the pixels that are associated with each instance.
+ +(Each image is simple in its representation of the data. A pixel in the image is either a point on one of these curves or it is empty. So the image is represented by one or zero at each pixel. For training purposes, I have labels for every pixel, and I have about 150,000 images. The information in the images can be noisy in that there may be omissions of points and point locations are quantized due to measurement limitations and preprocessing for the image preparation.)
+ +I started looking into what semantic segmentation can do for me, but since all of the instances are of the same class, distinguished mainly by their location in the image, I don't think semantic segmentation is the type of processing I would want to perform. (Am I wrong?)
+ +I am very interested in seeing how a neural network will work on this problem to separate each instance.
+ +My question is this: what is the terminology that describes the process I'm looking for? (How can I effectively research for this problem?) Is this an extension of semantic segmentation or is it referred to some other way?
+",8439,,,,,1/17/2019 15:43,Pixel-Level Detection of Each Object of the Same Class In an Image,At slide 17 of the David Silver's series, the soft-max policy is defined as follows
+ +$$ +\pi_\theta(s, a) \propto e^{\phi(s, a)^T \theta} +$$
+ +that is, the probability of an action $a$ (in state $s$) is proportional to the ""exponentiated weight"".
+ +The score function is then defined as follows
+ +$$ +\nabla_\theta \log \pi_\theta (s, a) = \phi(s, a) - +\mathbb{E}_{\pi_{\theta}}[\phi(s, \cdot)] +$$
+ +Where does the expectation term $\mathbb{E}_{\pi_{\theta}}[\phi(s, \cdot)]$ come from?
+",16313,,2444,,2/15/2019 18:59,2/15/2019 18:59,Where does the expectation term in the derivative of the soft-max policy come from?,